<p align="center">
    <br>
    <img src="https://raw.githubusercontent.com/huggingface/transformers/master/docs/source/imgs/transformers_logo_name.png" width="400"/>
    <br>
</p>
<p align="center">
    <a href="https://circleci.com/gh/huggingface/transformers">
        <img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/master">
    </a>
    <a href="https://github.com/huggingface/transformers/blob/master/LICENSE">
        <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
    </a>
    <a href="https://huggingface.co/transformers/index.html">
        <img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/transformers/index.html.svg?down_color=red&down_message=offline&up_message=online">
    </a>
    <a href="https://github.com/huggingface/transformers/releases">
        <img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
    </a>
</p>

<h3 align="center">
<p>State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0</p>
</h3>

🤗 Transformers (formerly known as `pytorch-transformers` and `pytorch-pretrained-bert`) provides state-of-the-art general-purpose architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet, T5, CTRL...) for Natural Language Understanding (NLU) and Natural Language Generation (NLG) with thousands of pretrained models in 100+ languages and deep interoperability between PyTorch & TensorFlow 2.0.

### Recent contributors
[![](https://sourcerer.io/fame/clmnt/huggingface/transformers/images/0)](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/0)[![](https://sourcerer.io/fame/clmnt/huggingface/transformers/images/1)](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/1)[![](https://sourcerer.io/fame/clmnt/huggingface/transformers/images/2)](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/2)[![](https://sourcerer.io/fame/clmnt/huggingface/transformers/images/3)](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/3)[![](https://sourcerer.io/fame/clmnt/huggingface/transformers/images/4)](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/4)[![](https://sourcerer.io/fame/clmnt/huggingface/transformers/images/5)](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/5)[![](https://sourcerer.io/fame/clmnt/huggingface/transformers/images/6)](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/6)[![](https://sourcerer.io/fame/clmnt/huggingface/transformers/images/7)](https://sourcerer.io/fame/clmnt/huggingface/transformers/links/7)

### Features
- High performance on NLU and NLG tasks
- Low barrier to entry for educators and practitioners

State-of-the-art NLP for everyone
- Deep learning researchers
- Hands-on practitioners
- AI/ML/NLP teachers and educators

Lower compute costs, smaller carbon footprint
- Researchers can share trained models instead of always retraining
- Practitioners can reduce compute time and production costs
- Dozens of architectures with over 1,000 pretrained models, some in more than 100 languages

Choose the right framework for every part of a model's lifetime
- Train state-of-the-art models in 3 lines of code
- Deep interoperability between TensorFlow 2.0 and PyTorch models
- Move a single model between TF2.0/PyTorch frameworks at will
- Seamlessly pick the right framework for training, evaluation, production

| Section | Description |
|-|-|
| [Installation](#installation) | How to install the package |
| [Model architectures](#model-architectures) | Architectures (with pretrained weights) |
| [Online demo](#online-demo) | Experimenting with this repo’s text generation capabilities |
| [Quick tour: Usage](#quick-tour) | Tokenizers & models usage: Bert and GPT-2 |
| [Quick tour: TF 2.0 and PyTorch](#Quick-tour-TF-20-training-and-PyTorch-interoperability) | Train a TF 2.0 model in 10 lines of code, load it in PyTorch |
| [Quick tour: pipelines](#quick-tour-of-pipelines) | Using Pipelines: Wrapper around tokenizer and models to use finetuned models |
| [Quick tour: Fine-tuning/usage scripts](#quick-tour-of-the-fine-tuningusage-scripts) | Using provided scripts: GLUE, SQuAD and Text generation |
| [Quick tour: Share your models](#Quick-tour-of-model-sharing) | Upload and share your fine-tuned models with the community |
| [Migrating from pytorch-transformers to transformers](#Migrating-from-pytorch-transformers-to-transformers) | Migrating your code from pytorch-transformers to transformers |
| [Migrating from pytorch-pretrained-bert to transformers](#Migrating-from-pytorch-pretrained-bert-to-transformers) | Migrating your code from pytorch-pretrained-bert to transformers |
| [Documentation](https://huggingface.co/transformers/) | Full API documentation and more |

## Installation

This repo is tested on Python 3.6+, PyTorch 1.0.0+ (PyTorch 1.3.1+ for examples) and TensorFlow 2.0.

You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).

Create a virtual environment with the version of Python you're going to use and activate it.
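
For example, on Linux or macOS this could look as follows (a minimal sketch; the environment name `.env` is arbitrary and `python3` should point to the Python version you want to use):

```bash
python3 -m venv .env
source .env/bin/activate
```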

Now, if you want to use 🤗 Transformers, you can install it with pip. If you'd like to play with the examples, you must install it from source.

### With pip

First you need to install one of, or both, TensorFlow 2.0 and PyTorch.
Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/pip#tensorflow-2.0-rc-is-available) and/or [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) regarding the specific install command for your platform.

When TensorFlow 2.0 and/or PyTorch has been installed, 🤗 Transformers can be installed using pip as follows:

```bash
pip install transformers
```

### From source

Here also, you first need to install one of, or both, TensorFlow 2.0 and PyTorch.
Please refer to [TensorFlow installation page](https://www.tensorflow.org/install/pip#tensorflow-2.0-rc-is-available) and/or [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) regarding the specific install command for your platform.

When TensorFlow 2.0 and/or PyTorch has been installed, you can install from source by cloning the repository and running:

```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
```

When you update the repository, you should upgrade the transformers installation and its dependencies as follows:

```bash
git pull
pip install --upgrade .
```

### Run the examples

Examples are included in the repository but are not shipped with the library.

Therefore, in order to run the latest versions of the examples, you need to install from source, as described above.

Look at the [README](https://github.com/huggingface/transformers/blob/master/examples/README.md) for how to run examples.

### Tests

A series of tests are included for the library and for some example scripts. Library tests can be found in the [tests folder](https://github.com/huggingface/transformers/tree/master/tests) and examples tests in the [examples folder](https://github.com/huggingface/transformers/tree/master/examples).

Depending on which framework is installed (TensorFlow 2.0 and/or PyTorch), the irrelevant tests will be skipped. Ensure that both frameworks are installed if you want to execute all tests.

Here's the easiest way to run tests for the library:

```bash
pip install -e ".[testing]"
make test
```

and for the examples:

```bash
pip install -e ".[testing]"
pip install -r examples/requirements.txt
make test-examples
```
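
You can also run an individual test file directly with `pytest` once the testing extras are installed; for example (the test file name below is only an illustration and may differ between versions):

```bash
python -m pytest -sv ./tests/test_modeling_bert.py
```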

For details, refer to the [contributing guide](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#tests).

### Do you want to run a Transformer model on a mobile device?

You should check out our [`swift-coreml-transformers`](https://github.com/huggingface/swift-coreml-transformers) repo.

It contains a set of tools to convert PyTorch or TensorFlow 2.0 trained Transformer models (currently `GPT-2`, `DistilGPT-2`, `BERT`, and `DistilBERT`) to CoreML models that run on iOS devices.

At some point in the future, you'll be able to seamlessly move from pre-training or fine-tuning models to productizing them in CoreML, or prototype a model or an app in CoreML then research its hyperparameters or architecture from TensorFlow 2.0 and/or PyTorch. Super exciting!

## Model architectures

🤗 Transformers currently provides the following NLU/NLG architectures:

1. **[BERT](https://huggingface.co/transformers/model_doc/bert.html)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
2. **[GPT](https://huggingface.co/transformers/model_doc/gpt.html)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
3. **[GPT-2](https://huggingface.co/transformers/model_doc/gpt2.html)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
4. **[Transformer-XL](https://huggingface.co/transformers/model_doc/transformerxl.html)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
5. **[XLNet](https://huggingface.co/transformers/model_doc/xlnet.html)** (from Google/CMU) released with the paper [​XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
6. **[XLM](https://huggingface.co/transformers/model_doc/xlm.html)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
7. **[RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
8. **[DistilBERT](https://huggingface.co/transformers/model_doc/distilbert.html)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/master/examples/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/master/examples/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/master/examples/distillation) and a German version of DistilBERT.
9. **[CTRL](https://huggingface.co/transformers/model_doc/ctrl.html)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
10. **[CamemBERT](https://huggingface.co/transformers/model_doc/camembert.html)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
11. **[ALBERT](https://huggingface.co/transformers/model_doc/albert.html)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
12. **[T5](https://huggingface.co/transformers/model_doc/t5.html)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
13. **[XLM-RoBERTa](https://huggingface.co/transformers/model_doc/xlmroberta.html)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
14. **[MMBT](https://github.com/facebookresearch/mmbt/)** (from Facebook), released together with the paper [Supervised Multimodal Bitransformers for Classifying Images and Text](https://arxiv.org/pdf/1909.02950.pdf) by Douwe Kiela, Suvrat Bhooshan, Hamed Firooz, Davide Testuggine.
15. **[FlauBERT](https://huggingface.co/transformers/model_doc/flaubert.html)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
16. **[BART](https://huggingface.co/transformers/model_doc/bart.html)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
17. **[ELECTRA](https://huggingface.co/transformers/model_doc/electra.html)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
18. **[DialoGPT](https://huggingface.co/transformers/model_doc/dialogpt.html)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
19. **[Reformer](https://huggingface.co/transformers/model_doc/reformer.html)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
20. **[MarianMT](https://huggingface.co/transformers/model_doc/marian.html)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
21. **[Longformer](https://huggingface.co/transformers/model_doc/longformer.html)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
22. **[DPR](https://github.com/facebookresearch/DPR)** (from Facebook) released with the paper [Dense Passage Retrieval
for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon
Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
23. **[Pegasus](https://github.com/google-research/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
24. **[MBart](https://github.com/pytorch/fairseq/tree/master/examples/mbart)** (from Facebook) released with the paper  [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.  
25. **[Other community models](https://huggingface.co/models)**, contributed by the [community](https://huggingface.co/users).
26. Want to contribute a new model? We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.

These implementations have been tested on several datasets (see the example scripts) and should match the performances of the original implementations (e.g. ~93 F1 on SQuAD for BERT Whole-Word-Masking, ~88 F1 on RocStories for OpenAI GPT, ~18.3 perplexity on WikiText 103 for Transformer-XL, ~0.916 Pearson R coefficient on STS-B for XLNet). You can find more details on the performances in the Examples section of the [documentation](https://huggingface.co/transformers/examples.html).

## Online demo

You can test our inference API on most model pages from the model hub: https://huggingface.co/models

For example: 
- [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
- [NER with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
- [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
- [NLI with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
- [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
- [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
- [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)


**[Write With Transformer](https://transformer.huggingface.co)**, built by the Hugging Face team at transformer.huggingface.co, is the official demo of this repo’s text generation capabilities.

## Quick tour

Let's do a very quick overview of the model architectures in 🤗 Transformers. Detailed examples for each model architecture (Bert, GPT, GPT-2, Transformer-XL, XLNet and XLM) can be found in the [full documentation](https://huggingface.co/transformers/).

```python
import torch
from transformers import *

# Transformers has a unified API
# for 10 transformer architectures and 30 pretrained weights.
#          Model          | Tokenizer          | Pretrained weights shortcut
MODELS = [(BertModel,       BertTokenizer,       'bert-base-uncased'),
          (OpenAIGPTModel,  OpenAIGPTTokenizer,  'openai-gpt'),
          (GPT2Model,       GPT2Tokenizer,       'gpt2'),
          (CTRLModel,       CTRLTokenizer,       'ctrl'),
          (TransfoXLModel,  TransfoXLTokenizer,  'transfo-xl-wt103'),
          (XLNetModel,      XLNetTokenizer,      'xlnet-base-cased'),
          (XLMModel,        XLMTokenizer,        'xlm-mlm-enfr-1024'),
          (DistilBertModel, DistilBertTokenizer, 'distilbert-base-cased'),
          (RobertaModel,    RobertaTokenizer,    'roberta-base'),
          (XLMRobertaModel, XLMRobertaTokenizer, 'xlm-roberta-base'),
         ]

# To use TensorFlow 2.0 versions of the models, simply prefix the class names with 'TF', e.g. `TFRobertaModel` is the TF 2.0 counterpart of the PyTorch model `RobertaModel`
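# For example (requires TensorFlow 2.0 to be installed; shown commented out here as an illustration):
#   tf_model = TFRobertaModel.from_pretrained('roberta-base')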

# Let's encode some text in a sequence of hidden-states using each model:
for model_class, tokenizer_class, pretrained_weights in MODELS:
    # Load pretrained model/tokenizer
    tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
    model = model_class.from_pretrained(pretrained_weights)

    # Encode text
    input_ids = torch.tensor([tokenizer.encode("Here is some text to encode", add_special_tokens=True)])  # Add special tokens takes care of adding [CLS], [SEP], <s>... tokens in the right way for each model.
    with torch.no_grad():
        last_hidden_states = model(input_ids)[0]  # Model outputs are now tuples

# Each architecture is provided with several classes for fine-tuning on down-stream tasks, e.g.
BERT_MODEL_CLASSES = [BertModel, BertForPreTraining, BertForMaskedLM, BertForNextSentencePrediction,
                      BertForSequenceClassification, BertForTokenClassification, BertForQuestionAnswering]

# All the classes for an architecture can be instantiated from pretrained weights for this architecture
# Note that additional weights added for fine-tuning are only initialized
# and need to be trained on the down-stream task
pretrained_weights = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(pretrained_weights)
for model_class in BERT_MODEL_CLASSES:
    # Load pretrained model/tokenizer
    model = model_class.from_pretrained(pretrained_weights)

    # Models can return full list of hidden-states & attentions weights at each layer
    model = model_class.from_pretrained(pretrained_weights,
                                        output_hidden_states=True,
                                        output_attentions=True)
    input_ids = torch.tensor([tokenizer.encode("Let's see all hidden-states and attentions on this text")])
    all_hidden_states, all_attentions = model(input_ids)[-2:]

    # Models are compatible with Torchscript
    model = model_class.from_pretrained(pretrained_weights, torchscript=True)
    traced_model = torch.jit.trace(model, (input_ids,))

    # Simple serialization for models and tokenizers
    model.save_pretrained('./directory/to/save/')  # save
    model = model_class.from_pretrained('./directory/to/save/')  # re-load
    tokenizer.save_pretrained('./directory/to/save/')  # save
    tokenizer = BertTokenizer.from_pretrained('./directory/to/save/')  # re-load

    # SOTA examples for GLUE, SQUAD, text generation...
```

## Quick tour TF 2.0 training and PyTorch interoperability

Let's do a quick example of how a TensorFlow 2.0 model can be trained in 12 lines of code with 🤗 Transformers and then loaded in PyTorch for fast inspection/tests.

```python
import tensorflow as tf
import tensorflow_datasets
from transformers import *

# Load dataset, tokenizer, model from pretrained model/vocabulary
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
data = tensorflow_datasets.load('glue/mrpc')

# Prepare dataset for GLUE as a tf.data.Dataset instance
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc')
valid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='mrpc')
train_dataset = train_dataset.shuffle(100).batch(32).repeat(2)
valid_dataset = valid_dataset.batch(64)

# Prepare training: Compile tf.keras model with optimizer, loss and learning rate schedule
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])

# Train and evaluate using tf.keras.Model.fit()
history = model.fit(train_dataset, epochs=2, steps_per_epoch=115,
                    validation_data=valid_dataset, validation_steps=7)

# Load the TensorFlow model in PyTorch for inspection
model.save_pretrained('./save/')
pytorch_model = BertForSequenceClassification.from_pretrained('./save/', from_tf=True)

# Quickly test a few predictions - MRPC is a paraphrasing task, let's see if our model learned the task
sentence_0 = "This research was consistent with his findings."
sentence_1 = "His findings were compatible with this research."
sentence_2 = "His findings were not compatible with this research."
inputs_1 = tokenizer(sentence_0, sentence_1, add_special_tokens=True, return_tensors='pt')
inputs_2 = tokenizer(sentence_0, sentence_2, add_special_tokens=True, return_tensors='pt')

pred_1 = pytorch_model(inputs_1['input_ids'], token_type_ids=inputs_1['token_type_ids'])[0].argmax().item()
pred_2 = pytorch_model(inputs_2['input_ids'], token_type_ids=inputs_2['token_type_ids'])[0].argmax().item()

print("sentence_1 is", "a paraphrase" if pred_1 else "not a paraphrase", "of sentence_0")
print("sentence_2 is", "a paraphrase" if pred_2 else "not a paraphrase", "of sentence_0")
```

## Quick tour of the fine-tuning/usage scripts

**Important**
Before running the fine-tuning scripts, please read the [instructions](#run-the-examples) on how to set up your environment to run the examples.

The library comprises several example scripts with SOTA performances for NLU and NLG tasks:

- `run_glue.py`: an example of fine-tuning sequence classification models on nine different GLUE tasks (*sequence-level classification*)
- `run_squad.py`: an example of fine-tuning question answering models on the question answering dataset SQuAD 2.0 (*token-level classification*)
- `run_ner.py`: an example of fine-tuning token classification models on named entity recognition (*token-level classification*)
- `run_generation.py`: an example using GPT, GPT-2, CTRL, Transformer-XL and XLNet for conditional language generation
- other model-specific examples (see the documentation).

Here are three quick usage examples for these scripts:

### `run_glue.py`: Fine-tuning on GLUE tasks for sequence classification

The [General Language Understanding Evaluation (GLUE) benchmark](https://gluebenchmark.com/) is a collection of nine sentence- or sentence-pair language understanding tasks for evaluating and analyzing natural language understanding systems.

Before running any of these GLUE tasks you should download the [GLUE data](https://gluebenchmark.com/tasks) by running [this script](https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e) and unpack it to some directory `$GLUE_DIR`.

You should also install the additional packages required by the examples:

```shell
pip install -r ./examples/requirements.txt
```

```shell
export GLUE_DIR=/path/to/glue
export TASK_NAME=MRPC

python ./examples/text-classification/run_glue.py \
    --model_name_or_path bert-base-uncased \
    --task_name $TASK_NAME \
    --do_train \
    --do_eval \
    --data_dir $GLUE_DIR/$TASK_NAME \
    --max_seq_length 128 \
    --per_device_eval_batch_size=8 \
    --per_device_train_batch_size=8 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --output_dir /tmp/$TASK_NAME/
```

where task name can be one of CoLA, SST-2, MRPC, STS-B, QQP, MNLI, QNLI, RTE, WNLI.

The dev set results will be written to a text file `eval_results.txt` in the specified `output_dir`. In the case of MNLI, since there are two separate dev sets (matched and mismatched), there will be a separate output folder called `/tmp/MNLI-MM/` in addition to `/tmp/MNLI/`.

#### Fine-tuning XLNet model on the STS-B regression task

This example code fine-tunes XLNet on the STS-B corpus using parallel training on a server with 4 V100 GPUs.
Parallel training is a simple way to use several GPUs (but is slower and less flexible than distributed training, see below).

```shell
export GLUE_DIR=/path/to/glue

python ./examples/text-classification/run_glue.py \
    --model_name_or_path xlnet-large-cased \
    --do_train \
    --do_eval \
    --task_name=sts-b \
    --data_dir=${GLUE_DIR}/STS-B \
    --output_dir=./proc_data/sts-b-110 \
    --max_seq_length=128 \
    --per_device_eval_batch_size=8 \
    --per_device_train_batch_size=8 \
    --gradient_accumulation_steps=1 \
    --max_steps=1200 \
    --overwrite_output_dir \
    --overwrite_cache \
    --warmup_steps=120
```

On this machine the total batch size is therefore 32 (4 GPUs × 8 examples per device); if you have a smaller machine, increase `gradient_accumulation_steps` to reach the same effective batch size. These hyper-parameters should result in a Pearson correlation coefficient of `+0.917` on the development set.

#### Fine-tuning Bert model on the MRPC classification task

This example code fine-tunes the Bert Whole Word Masking model on the Microsoft Research Paraphrase Corpus (MRPC) using distributed training on 8 V100 GPUs to reach an F1 > 92.

```bash
python -m torch.distributed.launch --nproc_per_node 8 ./examples/text-classification/run_glue.py \
    --model_name_or_path bert-large-uncased-whole-word-masking \
    --task_name MRPC \
    --do_train \
    --do_eval \
    --data_dir $GLUE_DIR/MRPC/ \
    --max_seq_length 128 \
    --per_device_eval_batch_size=8 \
    --per_device_train_batch_size=8 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --output_dir /tmp/mrpc_output/ \
    --overwrite_output_dir \
    --overwrite_cache
```

Training with these hyper-parameters gave us the following results:

```bash
  acc = 0.8823529411764706
  acc_and_f1 = 0.901702786377709
  eval_loss = 0.3418912578906332
  f1 = 0.9210526315789473
  global_step = 174
  loss = 0.07231863956341798
```

### `run_squad.py`: Fine-tuning on SQuAD for question-answering

This example code fine-tunes the Bert Whole Word Masking uncased model on the SQuAD dataset, using distributed training on 8 V100 GPUs, to reach an F1 > 93 on SQuAD:

```bash
python -m torch.distributed.launch --nproc_per_node=8 ./examples/question-answering/run_squad.py \
    --model_type bert \
    --model_name_or_path bert-large-uncased-whole-word-masking \
    --do_train \
    --do_eval \
    --train_file $SQUAD_DIR/train-v1.1.json \
    --predict_file $SQUAD_DIR/dev-v1.1.json \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ../models/wwm_uncased_finetuned_squad/ \
    --per_device_eval_batch_size=3 \
    --per_device_train_batch_size=3
```

Training with these hyper-parameters gave us the following results:

```bash
python $SQUAD_DIR/evaluate-v1.1.py $SQUAD_DIR/dev-v1.1.json ../models/wwm_uncased_finetuned_squad/predictions.json
{"exact_match": 86.91579943235573, "f1": 93.1532499015869}
```

This is the model provided as `bert-large-uncased-whole-word-masking-finetuned-squad`.
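
For instance, you can load this fine-tuned checkpoint directly through the `pipeline` API (a minimal sketch; see the quick tour of pipelines below):

```python
from transformers import pipeline

nlp = pipeline('question-answering',
               model='bert-large-uncased-whole-word-masking-finetuned-squad',
               tokenizer='bert-large-uncased-whole-word-masking-finetuned-squad')
nlp({'question': 'What is the name of the repository?',
     'context': 'The huggingface/transformers repository hosts the library.'})
```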

### `run_generation.py`: Text generation with GPT, GPT-2, CTRL, Transformer-XL and XLNet

A conditional generation script is also included to generate text from a prompt.
The generation script includes the [tricks](https://github.com/rusiaaman/XLNet-gen#methodology) proposed by Aman Rusia to get high-quality generation with memory models like Transformer-XL and XLNet (include a predefined text to make short inputs longer).

Here is how to run the script with the small version of the OpenAI GPT-2 model:

```shell
python ./examples/text-generation/run_generation.py \
    --model_type=gpt2 \
    --length=20 \
    --model_name_or_path=gpt2
```

and from the Salesforce CTRL model:
```shell
python ./examples/text-generation/run_generation.py \
    --model_type=ctrl \
    --length=20 \
    --model_name_or_path=ctrl \
    --temperature=0 \
    --repetition_penalty=1.2
```

## Quick tour of model sharing

Starting with `v2.2.2`, you can now upload and share your fine-tuned models with the community, using the <abbr title="Command-line interface">CLI</abbr> that's built-in to the library.

**First, create an account on [https://huggingface.co/join](https://huggingface.co/join)**. Optionally, join an existing organization or create a new one. Then:

```shell
transformers-cli login
# log in using the same credentials as on huggingface.co
```
Upload your model:
```shell
transformers-cli upload ./path/to/pretrained_model/

# ^^ Upload folder containing weights/tokenizer/config
# saved via `.save_pretrained()`

transformers-cli upload ./config.json [--filename folder/foobar.json]

# ^^ Upload a single file
# (you can optionally override its filename, which can be nested inside a folder)
```

If you want your model to be namespaced by your organization name rather than your username, add the following flag to any command:
```shell
--organization organization_name
```

Your model will then be accessible through its identifier, a concatenation of your username (or organization name) and the folder name above:
```python
"username/pretrained_model"
# or if an org:
"organization_name/pretrained_model"
```

**Please add a README.md model card** to the repo under `model_cards/` with: model description, training params (dataset, preprocessing, hardware used, hyperparameters), evaluation results, intended uses & limitations, etc.

Your model now has a page on huggingface.co/models 🔥

Anyone can load it from code:
```python
tokenizer = AutoTokenizer.from_pretrained("namespace/pretrained_model")
model = AutoModel.from_pretrained("namespace/pretrained_model")
```

List all your files on S3:
```shell
transformers-cli s3 ls
```

You can also delete unneeded files:

```shell
transformers-cli s3 rm
```

## Quick tour of pipelines

New in version `v2.3`: `Pipeline` objects are high-level objects which automatically handle tokenization, running your data through a transformers model and outputting the result in a structured object.

You can create `Pipeline` objects for the following down-stream tasks:

 - `feature-extraction`: Generates a tensor representation for the input sequence
 - `ner`: Generates named entity mapping for each word in the input sequence.
 - `sentiment-analysis`: Gives the polarity (positive / negative) of the whole input sequence.
 - `text-classification`: Initialize a `TextClassificationPipeline` directly, or see `sentiment-analysis` for an example.
 - `question-answering`: Provided some context and a question referring to the context, it will extract the answer to the question in the context.
 - `fill-mask`: Takes an input sequence containing a masked token (e.g. `<mask>`) and returns a list of the most probable filled sequences, with their probabilities.
 - `summarization`
 - `translation_xx_to_yy`

```python
>>> from transformers import pipeline

# Allocate a pipeline for sentiment-analysis
>>> nlp = pipeline('sentiment-analysis')
>>> nlp('We are very happy to include pipeline into the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9978193640708923}]

# Allocate a pipeline for question-answering
>>> nlp = pipeline('question-answering')
>>> nlp({
...     'question': 'What is the name of the repository ?',
...     'context': 'Pipeline have been included in the huggingface/transformers repository'
... })
{'score': 0.5135612454720828, 'start': 35, 'end': 59, 'answer': 'huggingface/transformers'}
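
# Pipelines for the other tasks listed above work the same way; for example a fill-mask
# pipeline (output omitted here, the exact scores will vary):
>>> nlp = pipeline('fill-mask')
>>> nlp('Hugging Face is creating a <mask> that the community uses to solve NLP tasks.')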

```

## Migrating from pytorch-transformers to transformers

Here is a quick summary of what you should take care of when migrating from `pytorch-transformers` to `transformers`.

### Positional order of some models' keyword inputs (`attention_mask`, `token_type_ids`...) changed

To be able to use Torchscript (see #1010, #1204 and #1195), the specific order of some models' **keyword inputs** (`attention_mask`, `token_type_ids`...) has been changed.

If you used to call the models with keyword names for keyword arguments, e.g. `model(inputs_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)`, this should not cause any change.

If you used to call the models with positional inputs for keyword arguments, e.g. `model(inputs_ids, attention_mask, token_type_ids)`, you may have to double-check the exact order of input arguments.
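
As a minimal sketch of the safer keyword-argument call style (the model and input names here are only an illustration):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

encoding = tokenizer.encode_plus("Hello world!", return_tensors='pt')
# Passing the inputs by keyword is robust to any change in positional order
outputs = model(input_ids=encoding['input_ids'],
                attention_mask=encoding['attention_mask'],
                token_type_ids=encoding['token_type_ids'])
```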


## Migrating from pytorch-pretrained-bert to transformers

Here is a quick summary of what you should take care of when migrating from `pytorch-pretrained-bert` to `transformers`.

### Models always output `tuples`

The main breaking change when migrating from `pytorch-pretrained-bert` to `transformers` is that every model's forward method always outputs a `tuple` with various elements depending on the model and the configuration parameters.

The exact content of the tuples for each model is detailed in the models' docstrings and the [documentation](https://huggingface.co/transformers/).

In pretty much every case, you will be fine by taking the first element of the output as the output you previously used in `pytorch-pretrained-bert`.

Here is a `pytorch-pretrained-bert` to `transformers` conversion example for a `BertForSequenceClassification` classification model:

```python
# Let's load our model
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')

# If you used to have this line in pytorch-pretrained-bert:
loss = model(input_ids, labels=labels)

# Now just use this line in transformers to extract the loss from the output tuple:
outputs = model(input_ids, labels=labels)
loss = outputs[0]

# In transformers you can also have access to the logits:
loss, logits = outputs[:2]

# And even the attention weights if you configure the model to output them (and other outputs too, see the docstrings and documentation)
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', output_attentions=True)
outputs = model(input_ids, labels=labels)
loss, logits, attentions = outputs
```

### Using hidden states

By enabling the configuration option `output_hidden_states`, it was possible to retrieve the last hidden states of the encoder. In `pytorch-transformers` as well as `transformers` the return value has changed slightly: `all_hidden_states` now also includes the hidden state of the embeddings in addition to those of the encoding layers. This allows users to easily access the embeddings final state.
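
For instance, a minimal sketch with `BertModel` (the indexing assumes `output_attentions` is left disabled, so the hidden states are the last element of the output tuple):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)

input_ids = torch.tensor([tokenizer.encode("Hello world!", add_special_tokens=True)])
with torch.no_grad():
    all_hidden_states = model(input_ids)[-1]  # tuple: embedding output first, then one tensor per layer

embedding_output, encoder_outputs = all_hidden_states[0], all_hidden_states[1:]
```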

### Serialization

Breaking change in the `from_pretrained()` method:

1. Models are now set in evaluation mode by default when instantiated with the `from_pretrained()` method. To train them, don't forget to set them back in training mode (`model.train()`) to activate the dropout modules (see the short sketch after this list).

2. The additional `*input` and `**kwargs` arguments supplied to the `from_pretrained()` method used to be directly passed to the underlying model's class `__init__()` method. They are now used to update the model configuration attribute instead, which can break derived model classes built based on the previous `BertForSequenceClassification` examples. We are working on a way to mitigate this breaking change in [#866](https://github.com/huggingface/transformers/pull/866) by forwarding the model's `__init__()` method (i) the provided positional arguments and (ii) the keyword arguments which do not match any configuration class attributes.
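
As a minimal illustration of the first point (the model name is only an example):

```python
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
# from_pretrained() returns the model in evaluation mode (dropout disabled)
model.train()   # switch back to training mode before fine-tuning
# ... run your training loop here ...
model.eval()    # and back to evaluation mode for inference
```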

Also, while not a breaking change, the serialization methods have been standardized and you probably should switch to the new method `save_pretrained(save_directory)` if you were using any other serialization method before.

Here is an example:

```python
### Let's load a model and tokenizer
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

### Do some stuff to our model and tokenizer
# Ex: add new tokens to the vocabulary and embeddings of our model
tokenizer.add_tokens(['[SPECIAL_TOKEN_1]', '[SPECIAL_TOKEN_2]'])
model.resize_token_embeddings(len(tokenizer))
# Train our model
train(model)

### Now let's save our model and tokenizer to a directory
model.save_pretrained('./my_saved_model_directory/')
tokenizer.save_pretrained('./my_saved_model_directory/')

### Reload the model and the tokenizer
model = BertForSequenceClassification.from_pretrained('./my_saved_model_directory/')
tokenizer = BertTokenizer.from_pretrained('./my_saved_model_directory/')
```

### Optimizers: BertAdam & OpenAIAdam are now AdamW, schedules are standard PyTorch schedules

The two optimizers previously included, `BertAdam` and `OpenAIAdam`, have been replaced by a single `AdamW` optimizer which has a few differences:

- it only implements weight decay correction,
- schedules are now external (see below),
- gradient clipping is now also external (see below).

The new optimizer `AdamW` matches the PyTorch `Adam` optimizer API and lets you use standard PyTorch or apex methods for the schedule and clipping.

The schedules are now standard [PyTorch learning rate schedulers](https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate) and not part of the optimizer anymore.

Here is a conversion example from `BertAdam` with a linear warmup and decay schedule to `AdamW` with the same schedule:

```python
# Parameters:
lr = 1e-3
max_grad_norm = 1.0
num_training_steps = 1000
num_warmup_steps = 100
warmup_proportion = float(num_warmup_steps) / float(num_training_steps)  # 0.1

### Previously BertAdam optimizer was instantiated like this:
optimizer = BertAdam(model.parameters(), lr=lr, schedule='warmup_linear', warmup=warmup_proportion, t_total=num_training_steps)
### and used like this:
for batch in train_data:
    loss = model(batch)
    loss.backward()
    optimizer.step()

### In Transformers, optimizer and schedules are split and instantiated like this:
optimizer = AdamW(model.parameters(), lr=lr, correct_bias=False)  # To reproduce BertAdam specific behavior set correct_bias=False
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps)  # PyTorch scheduler
### and used like this:
for batch in train_data:
    model.train()
    loss = model(batch)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)  # Gradient clipping is not in AdamW anymore (so you can use amp without issue)
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```

## Citation

We now have a [paper](https://arxiv.org/abs/1910.03771) you can cite for the 🤗 Transformers library:
```bibtex
@article{Wolf2019HuggingFacesTS,
  title={HuggingFace's Transformers: State-of-the-art Natural Language Processing},
  author={Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.03771}
}
```