Pretrained models
================================================

Here is the full list of the currently provided pretrained models together with a short presentation of each model.

For a list that includes community-uploaded models, refer to `https://huggingface.co/models <https://huggingface.co/models>`__.
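
Any of the shortcut names in the table below can be passed to the ``from_pretrained`` methods to download and load the matching pretrained weights and tokenizer. A minimal sketch (``bert-base-uncased`` is only an example pick from the table):

.. code-block:: python

    from transformers import AutoModel, AutoTokenizer

    # Any shortcut name from the table below works here; "bert-base-uncased" is just an example.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    # Encode a sentence and run a forward pass; the first output is the sequence of hidden states.
    input_ids = tokenizer.encode("Hello, world!", return_tensors="pt")
    outputs = model(input_ids)
    last_hidden_state = outputs[0]  # shape: (batch_size, sequence_length, hidden_size)
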
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Architecture      | Shortcut name                                              | Details of the model                                                                                                                  |
+===================+============================================================+=======================================================================================================================================+
| BERT              | ``bert-base-uncased``                                      | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on lower-cased English text.                                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased``                                     | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on lower-cased English text.                                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-cased``                                        | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased English text.                                                                                                      |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased``                                       | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on cased English text.                                                                                                      |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-multilingual-uncased``                         | | (Original, not recommended) 12-layer, 768-hidden, 12-heads, 110M parameters.                                                        |
|                   |                                                            | | Trained on lower-cased text in the top 102 languages with the largest Wikipedias                                                    |
|                   |                                                            | (see `details <https://github.com/google-research/bert/blob/master/multilingual.md>`__).                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-multilingual-cased``                           | | (New, **recommended**) 12-layer, 768-hidden, 12-heads, 110M parameters.                                                             |
|                   |                                                            | | Trained on cased text in the top 104 languages with the largest Wikipedias                                                          |
|                   |                                                            | (see `details <https://github.com/google-research/bert/blob/master/multilingual.md>`__).                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-chinese``                                      | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased Chinese Simplified and Traditional text.                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-cased``                                 | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased German text by Deepset.ai                                                                                          |
|                   |                                                            | (see `details on deepset.ai website <https://deepset.ai/german-bert>`__).                                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased-whole-word-masking``                  | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on lower-cased English text using Whole-Word-Masking                                                                        |
|                   |                                                            | (see `details <https://github.com/google-research/bert/#bert>`__).                                                                    |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased-whole-word-masking``                    | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on cased English text using Whole-Word-Masking                                                                              |
|                   |                                                            | (see `details <https://github.com/google-research/bert/#bert>`__).                                                                    |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased-whole-word-masking-finetuned-squad``  | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | The ``bert-large-uncased-whole-word-masking`` model fine-tuned on SQuAD                                                             |
|                   |                                                            | (see details of fine-tuning in the `example section <https://github.com/huggingface/transformers/tree/master/examples>`__).           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased-whole-word-masking-finetuned-squad``    | | 24-layer, 1024-hidden, 16-heads, 340M parameters                                                                                    |
|                   |                                                            | | The ``bert-large-cased-whole-word-masking`` model fine-tuned on SQuAD                                                               |
|                   |                                                            | (see `details of fine-tuning in the example section <https://huggingface.co/transformers/examples.html>`__)                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-cased-finetuned-mrpc``                         | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | The ``bert-base-cased`` model fine-tuned on MRPC                                                                                    |
|                   |                                                            | (see `details of fine-tuning in the example section <https://huggingface.co/transformers/examples.html>`__)                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-dbmdz-cased``                           | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased German text by DBMDZ                                                                                               |
|                   |                                                            | (see `details on dbmdz repository <https://github.com/dbmdz/german-bert>`__).                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-dbmdz-uncased``                         | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on uncased German text by DBMDZ                                                                                             |
|                   |                                                            | (see `details on dbmdz repository <https://github.com/dbmdz/german-bert>`__).                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-japanese``                                     | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on Japanese text. Text is tokenized with MeCab and WordPiece.                                                               |
|                   |                                                            | | `MeCab <https://taku910.github.io/mecab/>`__ is required for tokenization.                                                          |
|                   |                                                            | (see `details on cl-tohoku repository <https://github.com/cl-tohoku/bert-japanese>`__).                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-japanese-whole-word-masking``                  | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on Japanese text using Whole-Word-Masking. Text is tokenized with MeCab and WordPiece.                                      |
|                   |                                                            | | `MeCab <https://taku910.github.io/mecab/>`__ is required for tokenization.                                                          |
|                   |                                                            | (see `details on cl-tohoku repository <https://github.com/cl-tohoku/bert-japanese>`__).                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-japanese-char``                                | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on Japanese text. Text is tokenized into characters.                                                                        |
|                   |                                                            | (see `details on cl-tohoku repository <https://github.com/cl-tohoku/bert-japanese>`__).                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-japanese-char-whole-word-masking``             | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on Japanese text using Whole-Word-Masking. Text is tokenized into characters.                                               |
|                   |                                                            | (see `details on cl-tohoku repository <https://github.com/cl-tohoku/bert-japanese>`__).                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-finnish-cased-v1``                             | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased Finnish text.                                                                                                      |
|                   |                                                            | (see `details on turkunlp.org <http://turkunlp.org/FinBERT/>`__).                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-finnish-uncased-v1``                           | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on uncased Finnish text.                                                                                                    |
|                   |                                                            | (see `details on turkunlp.org <http://turkunlp.org/FinBERT/>`__).                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-dutch-cased``                                  | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased Dutch text.                                                                                                        |
|                   |                                                            | (see `details on wietsedv repository <https://github.com/wietsedv/bertje/>`__).                                                       |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| GPT               | ``openai-gpt``                                             | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | OpenAI GPT English model                                                                                                            |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| GPT-2             | ``gpt2``                                                   | | 12-layer, 768-hidden, 12-heads, 117M parameters.                                                                                    |
|                   |                                                            | | OpenAI GPT-2 English model                                                                                                          |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-medium``                                            | | 24-layer, 1024-hidden, 16-heads, 345M parameters.                                                                                   |
|                   |                                                            | | OpenAI's Medium-sized GPT-2 English model                                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-large``                                             | | 36-layer, 1280-hidden, 20-heads, 774M parameters.                                                                                   |
|                   |                                                            | | OpenAI's Large-sized GPT-2 English model                                                                                            |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-xl``                                                | | 48-layer, 1600-hidden, 25-heads, 1558M parameters.                                                                                  |
|                   |                                                            | | OpenAI's XL-sized GPT-2 English model                                                                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Transformer-XL    | ``transfo-xl-wt103``                                       | | 18-layer, 1024-hidden, 16-heads, 257M parameters.                                                                                   |
|                   |                                                            | | English model trained on wikitext-103                                                                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| XLNet             | ``xlnet-base-cased``                                       | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | XLNet English model                                                                                                                 |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlnet-large-cased``                                      | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | XLNet Large English model                                                                                                           |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| XLM               | ``xlm-mlm-en-2048``                                        | | 12-layer, 2048-hidden, 16-heads                                                                                                     |
|                   |                                                            | | XLM English model                                                                                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-ende-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-German model trained on the concatenation of English and German wikipedia                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-enfr-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-French model trained on the concatenation of English and French wikipedia                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-enro-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-Romanian Multi-language model                                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-xnli15-1024``                                    | | 12-layer, 1024-hidden, 8-heads                                                                                                      |
|                   |                                                            | | XLM Model pre-trained with MLM on the `15 XNLI languages <https://github.com/facebookresearch/XNLI>`__.                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-tlm-xnli15-1024``                                | | 12-layer, 1024-hidden, 8-heads                                                                                                      |
|                   |                                                            | | XLM Model pre-trained with MLM + TLM on the `15 XNLI languages <https://github.com/facebookresearch/XNLI>`__.                       |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-clm-enfr-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-French model trained with CLM (Causal Language Modeling) on the concatenation of English and French wikipedia           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-clm-ende-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-German model trained with CLM (Causal Language Modeling) on the concatenation of English and German wikipedia           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-17-1280``                                        | | 16-layer, 1280-hidden, 16-heads                                                                                                     |
|                   |                                                            | | XLM model trained with MLM (Masked Language Modeling) on 17 languages.                                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-100-1280``                                       | | 16-layer, 1280-hidden, 16-heads                                                                                                     |
|                   |                                                            | | XLM model trained with MLM (Masked Language Modeling) on 100 languages.                                                             |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| RoBERTa           | ``roberta-base``                                           | | 12-layer, 768-hidden, 12-heads, 125M parameters                                                                                     |
|                   |                                                            | | RoBERTa using the BERT-base architecture                                                                                            |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large``                                          | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | RoBERTa using the BERT-large architecture                                                                                           |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large-mnli``                                     | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | ``roberta-large`` fine-tuned on `MNLI <http://www.nyu.edu/projects/bowman/multinli/>`__.                                            |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilroberta-base``                                     | | 6-layer, 768-hidden, 12-heads, 82M parameters                                                                                       |
|                   |                                                            | | The DistilRoBERTa model distilled from the RoBERTa model `roberta-base` checkpoint.                                                 |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-base-openai-detector``                           | | 12-layer, 768-hidden, 12-heads, 125M parameters                                                                                     |
|                   |                                                            | | ``roberta-base`` fine-tuned by OpenAI on the outputs of the 1.5B-parameter GPT-2 model.                                             |
|                   |                                                            | (see `details <https://github.com/openai/gpt-2-output-dataset/tree/master/detector>`__)                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large-openai-detector``                          | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | ``roberta-large`` fine-tuned by OpenAI on the outputs of the 1.5B-parameter GPT-2 model.                                            |
|                   |                                                            | (see `details <https://github.com/openai/gpt-2-output-dataset/tree/master/detector>`__)                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| DistilBERT        | ``distilbert-base-uncased``                                | | 6-layer, 768-hidden, 12-heads, 66M parameters                                                                                       |
|                   |                                                            | | The DistilBERT model distilled from the BERT model `bert-base-uncased` checkpoint                                                   |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-uncased-distilled-squad``                | | 6-layer, 768-hidden, 12-heads, 66M parameters                                                                                       |
|                   |                                                            | | The DistilBERT model distilled from the BERT model `bert-base-uncased` checkpoint, with an additional linear layer.                 |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-cased``                                  | | 6-layer, 768-hidden, 12-heads, 65M parameters                                                                                       |
|                   |                                                            | | The DistilBERT model distilled from the BERT model `bert-base-cased` checkpoint                                                     |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-cased-distilled-squad``                  | | 6-layer, 768-hidden, 12-heads, 65M parameters                                                                                       |
|                   |                                                            | | The DistilBERT model distilled from the BERT model `bert-base-cased` checkpoint, with an additional question answering layer.       |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilgpt2``                                             | | 6-layer, 768-hidden, 12-heads, 82M parameters                                                                                       |
|                   |                                                            | | The DistilGPT2 model distilled from the GPT2 model `gpt2` checkpoint.                                                               |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-german-cased``                           | | 6-layer, 768-hidden, 12-heads, 66M parameters                                                                                       |
|                   |                                                            | | The German DistilBERT model distilled from the German DBMDZ BERT model `bert-base-german-dbmdz-cased` checkpoint.                   |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-multilingual-cased``                     | | 6-layer, 768-hidden, 12-heads, 134M parameters                                                                                      |
|                   |                                                            | | The multilingual DistilBERT model distilled from the Multilingual BERT model `bert-base-multilingual-cased` checkpoint.             |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| CTRL              | ``ctrl``                                                   | | 48-layer, 1280-hidden, 16-heads, 1.6B parameters                                                                                    |
|                   |                                                            | | Salesforce's Large-sized CTRL English model                                                                                         |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| CamemBERT         | ``camembert-base``                                         | | 12-layer, 768-hidden, 12-heads, 110M parameters                                                                                     |
|                   |                                                            | | CamemBERT using the BERT-base architecture                                                                                          |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/camembert>`__)                                                 |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| ALBERT            | ``albert-base-v1``                                         | | 12 repeating layers, 128 embedding, 768-hidden, 12-heads, 11M parameters                                                            |
|                   |                                                            | | ALBERT base model                                                                                                                   |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-large-v1``                                        | | 24 repeating layers, 128 embedding, 1024-hidden, 16-heads, 17M parameters                                                           |
|                   |                                                            | | ALBERT large model                                                                                                                  |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xlarge-v1``                                       | | 24 repeating layers, 128 embedding, 2048-hidden, 16-heads, 58M parameters                                                           |
|                   |                                                            | | ALBERT xlarge model                                                                                                                 |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xxlarge-v1``                                      | | 12 repeating layers, 128 embedding, 4096-hidden, 64-heads, 223M parameters                                                          |
|                   |                                                            | | ALBERT xxlarge model                                                                                                                |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-base-v2``                                         | | 12 repeating layers, 128 embedding, 768-hidden, 12-heads, 11M parameters                                                            |
|                   |                                                            | | ALBERT base model with no dropout, additional training data and longer training                                                     |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-large-v2``                                        | | 24 repeating layers, 128 embedding, 1024-hidden, 16-heads, 17M parameters                                                           |
|                   |                                                            | | ALBERT large model with no dropout, additional training data and longer training                                                    |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xlarge-v2``                                       | | 24 repeating layers, 128 embedding, 2048-hidden, 16-heads, 58M parameters                                                           |
|                   |                                                            | | ALBERT xlarge model with no dropout, additional training data and longer training                                                   |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xxlarge-v2``                                      | | 12 repeating layers, 128 embedding, 4096-hidden, 64-heads, 223M parameters                                                          |
|                   |                                                            | | ALBERT xxlarge model with no dropout, additional training data and longer training                                                  |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| T5                | ``t5-small``                                               | | ~60M parameters with 6-layers, 512-hidden-state, 2048 feed-forward hidden-state, 8-heads,                                           |
|                   |                                                            | | Trained on English text: the Colossal Clean Crawled Corpus (C4)                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``t5-base``                                                | | ~220M parameters with 12-layers, 768-hidden-state, 3072 feed-forward hidden-state, 12-heads,                                        |
|                   |                                                            | | Trained on English text: the Colossal Clean Crawled Corpus (C4)                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``t5-large``                                               | | ~770M parameters with 24-layers, 1024-hidden-state, 4096 feed-forward hidden-state, 16-heads,                                       |
|                   |                                                            | | Trained on English text: the Colossal Clean Crawled Corpus (C4)                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``t5-3B``                                                  | | ~2.8B parameters with 24-layers, 1024-hidden-state, 16384 feed-forward hidden-state, 32-heads,                                      |
|                   |                                                            | | Trained on English text: the Colossal Clean Crawled Corpus (C4)                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``t5-11B``                                                 | | ~11B parameters with 24-layers, 1024-hidden-state, 65536 feed-forward hidden-state, 128-heads,                                      |
|                   |                                                            | | Trained on English text: the Colossal Clean Crawled Corpus (C4)                                                                     |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| XLM-RoBERTa       | ``xlm-roberta-base``                                       | | ~125M parameters with 12-layers, 768-hidden-state, 3072 feed-forward hidden-state, 12-heads,                                        |
|                   |                                                            | | Trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages                                                          |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-roberta-large``                                      | | ~355M parameters with 24-layers, 1024-hidden-state, 4096 feed-forward hidden-state, 16-heads,                                       |
|                   |                                                            | | Trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages                                                          |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| FlauBERT          | ``flaubert-small-cased``                                   | | 6-layer, 512-hidden, 8-heads, 54M parameters                                                                                        |
|                   |                                                            | | FlauBERT small architecture                                                                                                         |
|                   |                                                            | (see `details <https://github.com/getalp/Flaubert>`__)                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``flaubert-base-uncased``                                  | | 12-layer, 768-hidden, 12-heads, 137M parameters                                                                                     |
|                   |                                                            | | FlauBERT base architecture with uncased vocabulary                                                                                  |
|                   |                                                            | (see `details <https://github.com/getalp/Flaubert>`__)                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``flaubert-base-cased``                                    | | 12-layer, 768-hidden, 12-heads, 138M parameters                                                                                     |
|                   |                                                            | | FlauBERT base architecture with cased vocabulary                                                                                    |
|                   |                                                            | (see `details <https://github.com/getalp/Flaubert>`__)                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``flaubert-large-cased``                                   | | 24-layer, 1024-hidden, 16-heads, 373M parameters                                                                                    |
|                   |                                                            | | FlauBERT large architecture                                                                                                         |
|                   |                                                            | (see `details <https://github.com/getalp/Flaubert>`__)                                                                                |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Bart              | ``bart-large``                                             | | 24-layer, 1024-hidden, 16-heads, 406M parameters                                                                                    |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/bart>`_)                                                       |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bart-large-mnli``                                        | | Adds a 2-layer classification head with 1 million parameters                                                                        |
|                   |                                                            | | bart-large base architecture with a classification head, finetuned on MNLI                                                          |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bart-large-cnn``                                         | | 24-layer, 1024-hidden, 16-heads, 406M parameters (same as ``bart-large``)                                                           |
|                   |                                                            | | bart-large base architecture finetuned on the CNN/Daily Mail summarization task                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``mbart-large-en-ro``                                      | | 12-layer, 1024-hidden, 16-heads, 880M parameters                                                                                    |
|                   |                                                            | | bart-large architecture pretrained on cc25 multilingual data, finetuned on WMT English-Romanian translation.                        |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| DialoGPT          | ``DialoGPT-small``                                         | | 12-layer, 768-hidden, 12-heads, 124M parameters                                                                                     |
|                   |                                                            | | Trained on English text: 147M conversation-like exchanges extracted from Reddit.                                                    |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``DialoGPT-medium``                                        | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | Trained on English text: 147M conversation-like exchanges extracted from Reddit.                                                    |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``DialoGPT-large``                                         | | 36-layer, 1280-hidden, 20-heads, 774M parameters                                                                                    |
|                   |                                                            | | Trained on English text: 147M conversation-like exchanges extracted from Reddit.                                                    |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Reformer          | ``reformer-enwik8``                                        | | 12-layer, 1024-hidden, 8-heads, 149M parameters                                                                                     |
|                   |                                                            | | Trained on English Wikipedia data - enwik8.                                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``reformer-crime-and-punishment``                          | | 6-layer, 256-hidden, 2-heads, 3M parameters                                                                                         |
|                   |                                                            | | Trained on English text: Crime and Punishment novel by Fyodor Dostoyevsky.                                                          |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| MarianMT          | ``Helsinki-NLP/opus-mt-{src}-{tgt}``                       | | 12-layer, 512-hidden, 8-heads, ~74M parameter machine translation models. Parameter counts vary depending on vocab size.            |
|                   |                                                            | | (see `model list <https://huggingface.co/Helsinki-NLP>`_)                                                                           |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Longformer        | ``longformer-base-4096``                                   | | 12-layer, 768-hidden, 12-heads, ~149M parameters                                                                                    |
|                   |                                                            | | Starting from RoBERTa-base checkpoint, trained on documents of max length 4,096                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``longformer-large-4096``                                  | | 24-layer, 1024-hidden, 16-heads, ~435M parameters                                                                                   |
|                   |                                                            | | Starting from RoBERTa-large checkpoint, trained on documents of max length 4,096                                                    |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
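
Any shortcut name in the tables above can be passed directly to ``from_pretrained``; the ``Auto*`` classes infer the matching architecture from the name. Below is a minimal sketch using ``xlm-roberta-base`` from the table (any other shortcut name can be substituted, and the tokenizer call shown is one of several equivalent encoding helpers):

.. code-block:: python

    from transformers import AutoModel, AutoTokenizer

    # The shortcut name selects both the architecture and the pretrained weights.
    tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
    model = AutoModel.from_pretrained("xlm-roberta-base")

    # Encode a sentence and run a forward pass to get the hidden states.
    input_ids = tokenizer.encode("Hello world!", return_tensors="pt")
    outputs = model(input_ids)
    last_hidden_state = outputs[0]  # shape: (batch_size, sequence_length, hidden_size)
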