Pretrained models
================================================

Here is the full list of the currently provided pretrained models together with a short presentation of each model.

For a list that includes community-uploaded models, refer to `https://huggingface.co/models <https://huggingface.co/models>`__.
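
These shortcut names can be passed directly to the ``from_pretrained`` methods. The snippet below is a minimal sketch of loading one of the listed checkpoints with the ``AutoTokenizer`` and ``AutoModel`` classes (the checkpoint and example sentence are illustrative picks, not a recommendation):

.. code-block:: python

    from transformers import AutoModel, AutoTokenizer

    # Any shortcut name from the table below can be used here; the weights
    # and vocabulary are downloaded and cached on first use.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    # Encode an example sentence and run a forward pass (PyTorch tensors).
    input_ids = tokenizer.encode("Hello, world!", return_tensors="pt")
    outputs = model(input_ids)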

+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Architecture      | Shortcut name                                              | Details of the model                                                                                                                  |
+===================+============================================================+=======================================================================================================================================+
| BERT              | ``bert-base-uncased``                                      | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on lower-cased English text.                                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased``                                     | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on lower-cased English text.                                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-cased``                                        | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased English text.                                                                                                      |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased``                                       | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on cased English text.                                                                                                      |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-multilingual-uncased``                         | | (Original, not recommended) 12-layer, 768-hidden, 12-heads, 110M parameters.                                                        |
|                   |                                                            | | Trained on lower-cased text in the top 102 languages with the largest Wikipedias                                                    |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/google-research/bert/blob/master/multilingual.md>`__).                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-multilingual-cased``                           | | (New, **recommended**) 12-layer, 768-hidden, 12-heads, 110M parameters.                                                             |
|                   |                                                            | | Trained on cased text in the top 104 languages with the largest Wikipedias                                                          |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/google-research/bert/blob/master/multilingual.md>`__).                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-chinese``                                      | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased Chinese Simplified and Traditional text.                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-cased``                                 | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased German text by Deepset.ai                                                                                          |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details on deepset.ai website <https://deepset.ai/german-bert>`__).                                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased-whole-word-masking``                  | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on lower-cased English text using Whole-Word-Masking                                                                        |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/google-research/bert/#bert>`__).                                                                    |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased-whole-word-masking``                    | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on cased English text using Whole-Word-Masking                                                                              |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/google-research/bert/#bert>`__).                                                                    |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased-whole-word-masking-finetuned-squad``  | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | The ``bert-large-uncased-whole-word-masking`` model fine-tuned on SQuAD                                                             |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see details of fine-tuning in the `example section <https://github.com/huggingface/transformers/tree/master/examples>`__).           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased-whole-word-masking-finetuned-squad``    | | 24-layer, 1024-hidden, 16-heads, 340M parameters                                                                                    |
|                   |                                                            | | The ``bert-large-cased-whole-word-masking`` model fine-tuned on SQuAD                                                               |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details of fine-tuning in the example section <https://huggingface.co/transformers/examples.html>`__)                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-cased-finetuned-mrpc``                         | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | The ``bert-base-cased`` model fine-tuned on MRPC                                                                                    |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details of fine-tuning in the example section <https://huggingface.co/transformers/examples.html>`__)                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-dbmdz-cased``                           | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased German text by DBMDZ                                                                                               |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details on dbmdz repository <https://github.com/dbmdz/german-bert>`__).                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-dbmdz-uncased``                         | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on uncased German text by DBMDZ                                                                                             |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details on dbmdz repository <https://github.com/dbmdz/german-bert>`__).                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``cl-tohoku/bert-base-japanese``                           | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on Japanese text. Text is tokenized with MeCab and WordPiece and this requires some extra dependencies,                     |
|                   |                                                            | | `fugashi <https://github.com/polm/fugashi>`__ which is a wrapper around `MeCab <https://taku910.github.io/mecab/>`__.               |
|                   |                                                            | | Use ``pip install transformers["ja"]`` (or ``pip install -e .["ja"]`` if you install from source) to install them.                  |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details on cl-tohoku repository <https://github.com/cl-tohoku/bert-japanese>`__).                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``cl-tohoku/bert-base-japanese-whole-word-masking``        | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on Japanese text. Text is tokenized with MeCab and WordPiece and this requires some extra dependencies,                     |
|                   |                                                            | | `fugashi <https://github.com/polm/fugashi>`__ which is a wrapper around `MeCab <https://taku910.github.io/mecab/>`__.               |
|                   |                                                            | | Use ``pip install transformers["ja"]`` (or ``pip install -e .["ja"]`` if you install from source) to install them.                  |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details on cl-tohoku repository <https://github.com/cl-tohoku/bert-japanese>`__).                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``cl-tohoku/bert-base-japanese-char``                      | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on Japanese text. Text is tokenized into characters.                                                                        |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details on cl-tohoku repository <https://github.com/cl-tohoku/bert-japanese>`__).                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``cl-tohoku/bert-base-japanese-char-whole-word-masking``   | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on Japanese text using Whole-Word-Masking. Text is tokenized into characters.                                               |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details on cl-tohoku repository <https://github.com/cl-tohoku/bert-japanese>`__).                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``TurkuNLP/bert-base-finnish-cased-v1``                    | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased Finnish text.                                                                                                      |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details on turkunlp.org <http://turkunlp.org/FinBERT/>`__).                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``TurkuNLP/bert-base-finnish-uncased-v1``                  | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on uncased Finnish text.                                                                                                    |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details on turkunlp.org <http://turkunlp.org/FinBERT/>`__).                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``wietsedv/bert-base-dutch-cased``                         | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased Dutch text.                                                                                                        |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details on wietsedv repository <https://github.com/wietsedv/bertje/>`__).                                                       |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| GPT               | ``openai-gpt``                                             | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | OpenAI GPT English model                                                                                                            |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| GPT-2             | ``gpt2``                                                   | | 12-layer, 768-hidden, 12-heads, 117M parameters.                                                                                    |
|                   |                                                            | | OpenAI GPT-2 English model                                                                                                          |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-medium``                                            | | 24-layer, 1024-hidden, 16-heads, 345M parameters.                                                                                   |
|                   |                                                            | | OpenAI's Medium-sized GPT-2 English model                                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-large``                                             | | 36-layer, 1280-hidden, 20-heads, 774M parameters.                                                                                   |
|                   |                                                            | | OpenAI's Large-sized GPT-2 English model                                                                                            |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-xl``                                                | | 48-layer, 1600-hidden, 25-heads, 1558M parameters.                                                                                  |
|                   |                                                            | | OpenAI's XL-sized GPT-2 English model                                                                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Transformer-XL    | ``transfo-xl-wt103``                                       | | 18-layer, 1024-hidden, 16-heads, 257M parameters.                                                                                   |
|                   |                                                            | | English model trained on wikitext-103                                                                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| XLNet             | ``xlnet-base-cased``                                       | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | XLNet English model                                                                                                                 |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlnet-large-cased``                                      | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | XLNet Large English model                                                                                                           |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| XLM               | ``xlm-mlm-en-2048``                                        | | 12-layer, 2048-hidden, 16-heads                                                                                                     |
|                   |                                                            | | XLM English model                                                                                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-ende-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-German model trained on the concatenation of English and German wikipedia                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-enfr-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-French model trained on the concatenation of English and French wikipedia                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-enro-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-Romanian Multi-language model                                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-xnli15-1024``                                    | | 12-layer, 1024-hidden, 8-heads                                                                                                      |
|                   |                                                            | | XLM Model pre-trained with MLM on the `15 XNLI languages <https://github.com/facebookresearch/XNLI>`__.                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-tlm-xnli15-1024``                                | | 12-layer, 1024-hidden, 8-heads                                                                                                      |
|                   |                                                            | | XLM Model pre-trained with MLM + TLM on the `15 XNLI languages <https://github.com/facebookresearch/XNLI>`__.                       |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-clm-enfr-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-French model trained with CLM (Causal Language Modeling) on the concatenation of English and French wikipedia           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-clm-ende-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-German model trained with CLM (Causal Language Modeling) on the concatenation of English and German wikipedia           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-17-1280``                                        | | 16-layer, 1280-hidden, 16-heads                                                                                                     |
|                   |                                                            | | XLM model trained with MLM (Masked Language Modeling) on 17 languages.                                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-100-1280``                                       | | 16-layer, 1280-hidden, 16-heads                                                                                                     |
|                   |                                                            | | XLM model trained with MLM (Masked Language Modeling) on 100 languages.                                                             |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| RoBERTa           | ``roberta-base``                                           | | 12-layer, 768-hidden, 12-heads, 125M parameters                                                                                     |
|                   |                                                            | | RoBERTa using the BERT-base architecture                                                                                            |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large``                                          | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | RoBERTa using the BERT-large architecture                                                                                           |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large-mnli``                                     | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | ``roberta-large`` fine-tuned on `MNLI <http://www.nyu.edu/projects/bowman/multinli/>`__.                                            |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilroberta-base``                                     | | 6-layer, 768-hidden, 12-heads, 82M parameters                                                                                       |
|                   |                                                            | | The DistilRoBERTa model distilled from the RoBERTa model `roberta-base` checkpoint.                                                 |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-base-openai-detector``                           | | 12-layer, 768-hidden, 12-heads, 125M parameters                                                                                     |
|                   |                                                            | | ``roberta-base`` fine-tuned by OpenAI on the outputs of the 1.5B-parameter GPT-2 model.                                             |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/openai/gpt-2-output-dataset/tree/master/detector>`__)                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large-openai-detector``                          | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | ``roberta-large`` fine-tuned by OpenAI on the outputs of the 1.5B-parameter GPT-2 model.                                            |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/openai/gpt-2-output-dataset/tree/master/detector>`__)                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| DistilBERT        | ``distilbert-base-uncased``                                | | 6-layer, 768-hidden, 12-heads, 66M parameters                                                                                       |
|                   |                                                            | | The DistilBERT model distilled from the BERT model `bert-base-uncased` checkpoint                                                   |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-uncased-distilled-squad``                | | 6-layer, 768-hidden, 12-heads, 66M parameters                                                                                       |
|                   |                                                            | | The DistilBERT model distilled from the BERT model `bert-base-uncased` checkpoint, with an additional linear layer.                 |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-cased``                                  | | 6-layer, 768-hidden, 12-heads, 65M parameters                                                                                       |
|                   |                                                            | | The DistilBERT model distilled from the BERT model `bert-base-cased` checkpoint                                                     |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-cased-distilled-squad``                  | | 6-layer, 768-hidden, 12-heads, 65M parameters                                                                                       |
|                   |                                                            | | The DistilBERT model distilled from the BERT model `bert-base-cased` checkpoint, with an additional question answering layer.       |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilgpt2``                                             | | 6-layer, 768-hidden, 12-heads, 82M parameters                                                                                       |
|                   |                                                            | | The DistilGPT2 model distilled from the GPT2 model `gpt2` checkpoint.                                                               |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-german-cased``                           | | 6-layer, 768-hidden, 12-heads, 66M parameters                                                                                       |
|                   |                                                            | | The German DistilBERT model distilled from the German DBMDZ BERT model `bert-base-german-dbmdz-cased` checkpoint.                   |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-multilingual-cased``                     | | 6-layer, 768-hidden, 12-heads, 134M parameters                                                                                      |
|                   |                                                            | | The multilingual DistilBERT model distilled from the Multilingual BERT model `bert-base-multilingual-cased` checkpoint.             |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| CTRL              | ``ctrl``                                                   | | 48-layer, 1280-hidden, 16-heads, 1.6B parameters                                                                                    |
|                   |                                                            | | Salesforce's Large-sized CTRL English model                                                                                         |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| CamemBERT         | ``camembert-base``                                         | | 12-layer, 768-hidden, 12-heads, 110M parameters                                                                                     |
|                   |                                                            | | CamemBERT using the BERT-base architecture                                                                                          |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/camembert>`__)                                                 |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| ALBERT            | ``albert-base-v1``                                         | | 12 repeating layers, 128 embedding, 768-hidden, 12-heads, 11M parameters                                                            |
|                   |                                                            | | ALBERT base model                                                                                                                   |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-large-v1``                                        | | 24 repeating layers, 128 embedding, 1024-hidden, 16-heads, 17M parameters                                                           |
|                   |                                                            | | ALBERT large model                                                                                                                  |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xlarge-v1``                                       | | 24 repeating layers, 128 embedding, 2048-hidden, 16-heads, 58M parameters                                                           |
|                   |                                                            | | ALBERT xlarge model                                                                                                                 |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xxlarge-v1``                                      | | 12 repeating layers, 128 embedding, 4096-hidden, 64-heads, 223M parameters                                                          |
|                   |                                                            | | ALBERT xxlarge model                                                                                                                |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-base-v2``                                         | | 12 repeating layers, 128 embedding, 768-hidden, 12-heads, 11M parameters                                                            |
|                   |                                                            | | ALBERT base model with no dropout, additional training data and longer training                                                     |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-large-v2``                                        | | 24 repeating layers, 128 embedding, 1024-hidden, 16-heads, 17M parameters                                                           |
|                   |                                                            | | ALBERT large model with no dropout, additional training data and longer training                                                    |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xlarge-v2``                                       | | 24 repeating layers, 128 embedding, 2048-hidden, 16-heads, 58M parameters                                                           |
|                   |                                                            | | ALBERT xlarge model with no dropout, additional training data and longer training                                                   |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xxlarge-v2``                                      | | 12 repeating layers, 128 embedding, 4096-hidden, 64-heads, 223M parameters                                                          |
|                   |                                                            | | ALBERT xxlarge model with no dropout, additional training data and longer training                                                  |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| T5                | ``t5-small``                                               | | ~60M parameters with 6-layers, 512-hidden-state, 2048 feed-forward hidden-state, 8-heads,                                           |
|                   |                                                            | | Trained on English text: the Colossal Clean Crawled Corpus (C4)                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``t5-base``                                                | | ~220M parameters with 12-layers, 768-hidden-state, 3072 feed-forward hidden-state, 12-heads,                                        |
|                   |                                                            | | Trained on English text: the Colossal Clean Crawled Corpus (C4)                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``t5-large``                                               | | ~770M parameters with 24-layers, 1024-hidden-state, 4096 feed-forward hidden-state, 16-heads,                                       |
|                   |                                                            | | Trained on English text: the Colossal Clean Crawled Corpus (C4)                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``t5-3b``                                                  | | ~2.8B parameters with 24-layers, 1024-hidden-state, 16384 feed-forward hidden-state, 32-heads,                                      |
|                   |                                                            | | Trained on English text: the Colossal Clean Crawled Corpus (C4)                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``t5-11b``                                                 | | ~11B parameters with 24-layers, 1024-hidden-state, 65536 feed-forward hidden-state, 128-heads,                                      |
|                   |                                                            | | Trained on English text: the Colossal Clean Crawled Corpus (C4)                                                                     |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| XLM-RoBERTa       | ``xlm-roberta-base``                                       | | ~125M parameters with 12-layers, 768-hidden-state, 3072 feed-forward hidden-state, 12-heads,                                        |
|                   |                                                            | | Trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages                                                          |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-roberta-large``                                      | | ~355M parameters with 24-layers, 1024-hidden-state, 4096 feed-forward hidden-state, 16-heads,                                       |
|                   |                                                            | | Trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages                                                          |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| FlauBERT          | ``flaubert/flaubert_small_cased``                          | | 6-layer, 512-hidden, 8-heads, 54M parameters                                                                                        |
|                   |                                                            | | FlauBERT small architecture                                                                                                         |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/getalp/Flaubert>`__)                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``flaubert/flaubert_base_uncased``                         | | 12-layer, 768-hidden, 12-heads, 137M parameters                                                                                     |
|                   |                                                            | | FlauBERT base architecture with uncased vocabulary                                                                                  |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/getalp/Flaubert>`__)                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``flaubert/flaubert_base_cased``                           | | 12-layer, 768-hidden, 12-heads, 138M parameters                                                                                     |
|                   |                                                            | | FlauBERT base architecture with cased vocabulary                                                                                    |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/getalp/Flaubert>`__)                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``flaubert/flaubert_large_cased``                          | | 24-layer, 1024-hidden, 16-heads, 373M parameters                                                                                    |
|                   |                                                            | | FlauBERT large architecture                                                                                                         |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/getalp/Flaubert>`__)                                                                                |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Bart              | ``facebook/bart-large``                                    | | 24-layer, 1024-hidden, 16-heads, 406M parameters                                                                                    |
|                   |                                                            |                                                                                                                                       |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/bart>`_)                                                       |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``facebook/bart-base``                                     | | 12-layer, 768-hidden, 12-heads, 139M parameters                                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``facebook/bart-large-mnli``                               | | Adds a 2-layer classification head with 1 million parameters                                                                        |
|                   |                                                            | | bart-large architecture with a classification head, fine-tuned on MNLI                                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``facebook/bart-large-cnn``                                | | 24-layer, 1024-hidden, 16-heads, 406M parameters (same as ``facebook/bart-large``)                                                  |
|                   |                                                            | | bart-large architecture fine-tuned on the CNN/Daily Mail summarization task                                                         |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| DialoGPT          | ``microsoft/DialoGPT-small``                               | | 12-layer, 768-hidden, 12-heads, 124M parameters                                                                                     |
|                   |                                                            | | Trained on English text: 147M conversation-like exchanges extracted from Reddit.                                                    |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``microsoft/DialoGPT-medium``                              | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | Trained on English text: 147M conversation-like exchanges extracted from Reddit.                                                    |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``microsoft/DialoGPT-large``                               | | 36-layer, 1280-hidden, 20-heads, 774M parameters                                                                                    |
|                   |                                                            | | Trained on English text: 147M conversation-like exchanges extracted from Reddit.                                                    |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Reformer          | ``google/reformer-enwik8``                                 | | 12-layer, 1024-hidden, 8-heads, 149M parameters                                                                                     |
|                   |                                                            | | Trained on English Wikipedia data - enwik8.                                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``google/reformer-crime-and-punishment``                   | | 6-layer, 256-hidden, 2-heads, 3M parameters                                                                                         |
|                   |                                                            | | Trained on English text: the novel Crime and Punishment by Fyodor Dostoyevsky.                                                      |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| MarianMT          | ``Helsinki-NLP/opus-mt-{src}-{tgt}``                       | | 12-layer, 512-hidden, 8-heads, ~74M parameters. Machine translation models; parameter counts vary depending on vocab size.          |
|                   |                                                            | | (see `model list <https://huggingface.co/Helsinki-NLP>`_)                                                                           |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Pegasus           | ``google/pegasus-{dataset}``                               | | 16-layer, 1024-hidden, 16-heads, ~568M parameters, 2.2 GB for summary. `model list <https://huggingface.co/models?search=pegasus>`__|
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Longformer        | ``allenai/longformer-base-4096``                           | | 12-layer, 768-hidden, 12-heads, ~149M parameters                                                                                    |
|                   |                                                            | | Starting from RoBERTa-base checkpoint, trained on documents of max length 4,096                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``allenai/longformer-large-4096``                          | | 24-layer, 1024-hidden, 16-heads, ~435M parameters                                                                                   |
|                   |                                                            | | Starting from RoBERTa-large checkpoint, trained on documents of max length 4,096                                                    |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| MBart             | ``facebook/mbart-large-cc25``                              | | 24-layer, 1024-hidden, 16-heads, 610M parameters                                                                                    |
|                   |                                                            | | mBART (bart-large architecture) model trained on monolingual corpora of 25 languages                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``facebook/mbart-large-en-ro``                             | | 24-layer, 1024-hidden, 16-heads, 610M parameters                                                                                    |
|                   |                                                            | | mbart-large-cc25 model fine-tuned on WMT English-Romanian translation.                                                              |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Lxmert            | ``lxmert-base-uncased``                                    | | 9-language layers, 9-relationship layers, and 12-cross-modality layers                                                              |
|                   |                                                            | | 768-hidden, 12-heads (for each layer), ~228M parameters                                                                             |
|                   |                                                            | | Trained on over 9 million image-text pairs from COCO, VisualGenome, GQA, and VQA                                                    |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
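
Any shortcut name in the table above can be passed to ``from_pretrained``. As a minimal sketch (the checkpoint name below is just one entry from the table; the example assumes the library is installed and the weights can be downloaded):

.. code-block:: python

    from transformers import AutoModel, AutoTokenizer

    # Any shortcut name from the table above can be substituted here;
    # "camembert-base" is used purely as an example.
    tokenizer = AutoTokenizer.from_pretrained("camembert-base")
    model = AutoModel.from_pretrained("camembert-base")

    # Encode a sentence and run a forward pass.
    inputs = tokenizer("J'aime le camembert !", return_tensors="pt")
    outputs = model(**inputs)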