chenpangpang / transformers · Commits · bd0d3fd7

Commit bd0d3fd7 authored Jan 16, 2020 by Lysandre, committed by Lysandre Debut on Jan 23, 2020

GPT-2 PyTorch models + better tips for BERT

parent dbeb7fb4
Showing 3 changed files with 196 additions and 146 deletions
docs/source/model_doc/bert.rst       +7   -1
docs/source/model_doc/gpt2.rst       +30  -0
src/transformers/modeling_gpt2.py    +159 -145
docs/source/model_doc/bert.rst @ bd0d3fd7
...
...
@@ -27,7 +27,13 @@ Tips:
- BERT is a model with absolute position embeddings, so it's usually advised to pad the inputs on
  the right rather than the left.
- BERT was trained with a masked language modeling (MLM) objective. It is therefore efficient at predicting masked
  tokens and at NLU in general, but is not optimal for text generation; models trained with a causal language
  modeling (CLM) objective are better in that regard. A sketch of masked-token prediction follows these tips.
- Alongside MLM, BERT was trained using a next sentence prediction (NSP) objective, with the [CLS] token serving as
  an approximate sequence representation. The user may use this token (the first token in a sequence built with
  special tokens) to get a sequence prediction rather than a token prediction. However, averaging over the sequence
  may yield better results than using the [CLS] token.
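A minimal sketch of the masked-token prediction described above, assuming the v2.x API where the
model's forward pass returns a tuple whose first element is the prediction scores::

    import torch
    from transformers import BertForMaskedLM, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForMaskedLM.from_pretrained("bert-base-uncased")
    model.eval()

    # Encode a sentence containing a [MASK] token; special tokens are added automatically.
    input_ids = tokenizer.encode("The capital of France is [MASK].", return_tensors="pt")
    mask_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()

    with torch.no_grad():
        scores = model(input_ids)[0]  # shape: (batch, sequence_length, vocab_size)

    predicted_id = scores[0, mask_index].argmax(-1).item()
    print(tokenizer.decode([predicted_id]))  # a plausible completion, e.g. "paris"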
BertConfig
~~~~~~~~~~~~~~~~~~~~~
...
...
docs/source/model_doc/gpt2.rst @ bd0d3fd7
OpenAI GPT2
----------------------------------------------------
Overview
~~~~~~~~~~~~~~~~~~~~~
The OpenAI GPT-2 model was proposed in
`Language Models are Unsupervised Multitask Learners`_
by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
It's a causal (unidirectional) transformer pre-trained using language modeling on a very large
corpus of ~40 GB of text data.
The abstract from the paper is the following:
*GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset[1]
of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous
words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring
demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X
the parameters and trained on more than 10X the amount of data.*
Tips:
- GPT-2 is a model with absolute position embeddings, so it's usually advised to pad the inputs on
  the right rather than the left.
- GPT-2 was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next
  token in a sequence. Leveraging this feature allows GPT-2 to generate syntactically coherent text, as can be
  observed in the `run_generation.py` example script.
- The PyTorch models can take the `past` as input, which is the previously computed key/value attention pairs. Using
  this `past` value prevents the model from re-computing values it has already computed during text generation; a
  sketch follows these tips. See `reusing the past in generative models <../quickstart.html#using-the-past>`_ for
  more information on the usage of this argument.
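A minimal greedy-decoding sketch of the `past` mechanism, assuming the v2.x API where the model's
forward pass returns the logits followed by the `past` key/value pairs::

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    generated = tokenizer.encode("The Manhattan bridge", return_tensors="pt")
    context, past = generated, None

    with torch.no_grad():
        for _ in range(20):
            logits, past = model(context, past=past)[:2]
            # Greedily pick the next token from the logits at the last position.
            next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
            generated = torch.cat([generated, next_token], dim=-1)
            # Thanks to `past`, only the newly generated token needs to be fed back in.
            context = next_token

    print(tokenizer.decode(generated[0]))

Because the cached key/value pairs carry the full context, each step only processes the single new
token instead of re-encoding the whole generated sequence.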
``GPT2Config``
~~~~~~~~~~~~~~~~~~~~~
...
...
src/transformers/modeling_gpt2.py @ bd0d3fd7

(Diff collapsed: +159 additions, -145 deletions.)