.. 
    Copyright 2020 The HuggingFace Team. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
    the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
    an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
    specific language governing permissions and limitations under the License.

OpenAI GPT2
-----------------------------------------------------------------------------------------------------------------------

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The OpenAI GPT-2 model was proposed in `Language Models are Unsupervised Multitask Learners
<https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf>`_ by Alec
Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever. It's a causal (unidirectional)
transformer pretrained using language modeling on a very large corpus of ~40 GB of text data.

The abstract from the paper is the following:

*GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset[1] of 8 million
web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some
text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks
across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than
10X the amount of data.*

Tips:

- GPT-2 is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather
  than the left.
- GPT-2 was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next
  token in a sequence. Leveraging this feature allows GPT-2 to generate syntactically coherent text, as can be observed
  in the `run_generation.py` example script.
- The PyTorch models can take the `past` as input, which is the previously computed key/value attention pairs. Using
  this `past` value prevents the model from recomputing values that have already been computed during text generation.
  See `reusing the past in generative models <../quickstart.html#using-the-past>`__ for more information on the usage
  of this argument.
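
The caching described in the last tip can be sketched as follows. This is a minimal illustration rather than an
excerpt from the library; it assumes the `transformers` and `torch` packages are installed, that the `gpt2` checkpoint
can be downloaded, and that a recent version of the library is used, in which the cache is exposed under the name
`past_key_values`:

.. code-block:: python

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    input_ids = tokenizer.encode("The Manhattan bridge is", return_tensors="pt")
    with torch.no_grad():
        # First pass: encode the whole prompt and cache the key/value pairs.
        outputs = model(input_ids, use_cache=True)
        past = outputs.past_key_values
        # Greedily pick the next token from the logits of the last position.
        next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        # Second pass: feed only the new token together with the cache, so the
        # prompt is not re-encoded; the returned logits cover just one position.
        outputs = model(next_token, past_key_values=past)

Without the cache, each generation step would re-run the forward pass over the entire prefix; with it, each step only
processes the newly generated token.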

`Write With Transformer <https://transformer.huggingface.co/doc/gpt2-large>`__ is a webapp created and hosted by
Hugging Face showcasing the generative capabilities of several models. GPT-2 is one of them and is available in five
different sizes: small, medium, large, xl and a distilled version of the small checkpoint: `distilgpt-2`.

The original code can be found `here <https://openai.com/blog/better-language-models/>`__.


GPT2Config
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.GPT2Config
    :members:


GPT2Tokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.GPT2Tokenizer
    :members: save_vocabulary


GPT2TokenizerFast
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.GPT2TokenizerFast
    :members:


GPT2 specific outputs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.models.gpt2.modeling_gpt2.GPT2DoubleHeadsModelOutput
    :members:

.. autoclass:: transformers.models.gpt2.modeling_tf_gpt2.TFGPT2DoubleHeadsModelOutput
    :members:


GPT2Model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.GPT2Model
    :members: forward, parallelize, deparallelize


GPT2LMHeadModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.GPT2LMHeadModel
    :members: forward, parallelize, deparallelize


GPT2DoubleHeadsModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.GPT2DoubleHeadsModel
    :members: forward


GPT2ForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.GPT2ForSequenceClassification
    :members: forward


TFGPT2Model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFGPT2Model
    :members: call


TFGPT2LMHeadModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFGPT2LMHeadModel
    :members: call


TFGPT2DoubleHeadsModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFGPT2DoubleHeadsModel
    :members: call

TFGPT2ForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFGPT2ForSequenceClassification
    :members: call

TFSequenceClassifierOutputWithPast
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.modeling_tf_outputs.TFSequenceClassifierOutputWithPast
    :members: