Pretrained models
================================================

Here is the full list of the currently provided pretrained models together with a short presentation of each model.
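
For example, any shortcut name from the table can be passed to the ``from_pretrained`` method, which downloads and caches the matching weights and configuration. A minimal sketch using the ``Auto`` classes (in practice, instantiate the model class that matches your task):

.. code-block:: python

    from transformers import AutoModel, AutoTokenizer

    # The shortcut name selects both the architecture and the pretrained weights
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")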


+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Architecture      | Shortcut name                                              | Details of the model                                                                                                                  |
+===================+============================================================+=======================================================================================================================================+
| BERT              | ``bert-base-uncased``                                      | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on lower-cased English text.                                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased``                                     | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on lower-cased English text.                                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-cased``                                        | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased English text.                                                                                                      |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased``                                       | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on cased English text.                                                                                                      |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-multilingual-uncased``                         | | (Original, not recommended) 12-layer, 768-hidden, 12-heads, 110M parameters.                                                        |
|                   |                                                            | | Trained on lower-cased text in the top 102 languages with the largest Wikipedias                                                    |
|                   |                                                            | (see `details <https://github.com/google-research/bert/blob/master/multilingual.md>`__).                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-multilingual-cased``                           | | (New, **recommended**) 12-layer, 768-hidden, 12-heads, 110M parameters.                                                             |
|                   |                                                            | | Trained on cased text in the top 104 languages with the largest Wikipedias                                                          |
|                   |                                                            | (see `details <https://github.com/google-research/bert/blob/master/multilingual.md>`__).                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-chinese``                                      | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased Chinese Simplified and Traditional text.                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-cased``                                 | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased German text by Deepset.ai                                                                                          |
|                   |                                                            | (see `details on deepset.ai website <https://deepset.ai/german-bert>`__).                                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased-whole-word-masking``                  | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on lower-cased English text using Whole-Word-Masking                                                                        |
|                   |                                                            | (see `details <https://github.com/google-research/bert/#bert>`__).                                                                    |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased-whole-word-masking``                    | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on cased English text using Whole-Word-Masking                                                                              |
|                   |                                                            | (see `details <https://github.com/google-research/bert/#bert>`__).                                                                    |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased-whole-word-masking-finetuned-squad``  | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | The ``bert-large-uncased-whole-word-masking`` model fine-tuned on SQuAD                                                             |
|                   |                                                            | (see details of fine-tuning in the `example section <https://github.com/huggingface/transformers/tree/master/examples>`__).           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased-whole-word-masking-finetuned-squad``    | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | The ``bert-large-cased-whole-word-masking`` model fine-tuned on SQuAD                                                               |
|                   |                                                            | (see `details of fine-tuning in the example section <https://huggingface.co/transformers/examples.html>`__)                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-cased-finetuned-mrpc``                         | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | The ``bert-base-cased`` model fine-tuned on MRPC                                                                                    |
|                   |                                                            | (see `details of fine-tuning in the example section <https://huggingface.co/transformers/examples.html>`__)                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-dbmdz-cased``                           | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased German text by DBMDZ                                                                                               |
|                   |                                                            | (see `details on dbmdz repository <https://github.com/dbmdz/german-bert>`__).                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-dbmdz-uncased``                         | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on uncased German text by DBMDZ                                                                                             |
|                   |                                                            | (see `details on dbmdz repository <https://github.com/dbmdz/german-bert>`__).                                                         |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| GPT               | ``openai-gpt``                                             | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | OpenAI GPT English model                                                                                                            |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| GPT-2             | ``gpt2``                                                   | | 12-layer, 768-hidden, 12-heads, 117M parameters.                                                                                    |
|                   |                                                            | | OpenAI GPT-2 English model                                                                                                          |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-medium``                                            | | 24-layer, 1024-hidden, 16-heads, 345M parameters.                                                                                   |
|                   |                                                            | | OpenAI's Medium-sized GPT-2 English model                                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-large``                                             | | 36-layer, 1280-hidden, 20-heads, 774M parameters.                                                                                   |
|                   |                                                            | | OpenAI's Large-sized GPT-2 English model                                                                                            |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-xl``                                                | | 48-layer, 1600-hidden, 25-heads, 1558M parameters.                                                                                  |
|                   |                                                            | | OpenAI's XL-sized GPT-2 English model                                                                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Transformer-XL    | ``transfo-xl-wt103``                                       | | 18-layer, 1024-hidden, 16-heads, 257M parameters.                                                                                   |
|                   |                                                            | | English model trained on WikiText-103                                                                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| XLNet             | ``xlnet-base-cased``                                       | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | XLNet English model                                                                                                                 |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlnet-large-cased``                                      | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | XLNet Large English model                                                                                                           |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| XLM               | ``xlm-mlm-en-2048``                                        | | 12-layer, 2048-hidden, 16-heads                                                                                                     |
|                   |                                                            | | XLM English model                                                                                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-ende-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-German model trained on the concatenation of English and German Wikipedia                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-enfr-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-French model trained on the concatenation of English and French Wikipedia                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-enro-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-Romanian Multi-language model                                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-xnli15-1024``                                    | | 12-layer, 1024-hidden, 8-heads                                                                                                      |
|                   |                                                            | | XLM Model pre-trained with MLM on the `15 XNLI languages <https://github.com/facebookresearch/XNLI>`__.                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-tlm-xnli15-1024``                                | | 12-layer, 1024-hidden, 8-heads                                                                                                      |
|                   |                                                            | | XLM Model pre-trained with MLM + TLM on the `15 XNLI languages <https://github.com/facebookresearch/XNLI>`__.                       |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-clm-enfr-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-French model trained with CLM (Causal Language Modeling) on the concatenation of English and French Wikipedia           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-clm-ende-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-German model trained with CLM (Causal Language Modeling) on the concatenation of English and German Wikipedia           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-17-1280``                                        | | 16-layer, 1280-hidden, 16-heads                                                                                                     |
|                   |                                                            | | XLM model trained with MLM (Masked Language Modeling) on 17 languages.                                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-100-1280``                                       | | 16-layer, 1280-hidden, 16-heads                                                                                                     |
|                   |                                                            | | XLM model trained with MLM (Masked Language Modeling) on 100 languages.                                                             |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| RoBERTa           | ``roberta-base``                                           | | 12-layer, 768-hidden, 12-heads, 125M parameters                                                                                     |
|                   |                                                            | | RoBERTa using the BERT-base architecture                                                                                            |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large``                                          | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | RoBERTa using the BERT-large architecture                                                                                           |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large-mnli``                                     | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | ``roberta-large`` fine-tuned on `MNLI <http://www.nyu.edu/projects/bowman/multinli/>`__.                                            |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-base-openai-detector``                           | | 12-layer, 768-hidden, 12-heads, 125M parameters                                                                                     |
|                   |                                                            | | ``roberta-base`` fine-tuned by OpenAI on the outputs of the 1.5B-parameter GPT-2 model.                                             |
|                   |                                                            | (see `details <https://github.com/openai/gpt-2-output-dataset/tree/master/detector>`__)                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large-openai-detector``                          | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | ``roberta-large`` fine-tuned by OpenAI on the outputs of the 1.5B-parameter GPT-2 model.                                            |
|                   |                                                            | (see `details <https://github.com/openai/gpt-2-output-dataset/tree/master/detector>`__)                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| DistilBERT        | ``distilbert-base-uncased``                                | | 6-layer, 768-hidden, 12-heads, 66M parameters                                                                                       |
|                   |                                                            | | The DistilBERT model distilled from the BERT model ``bert-base-uncased`` checkpoint                                                 |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-uncased-distilled-squad``                | | 6-layer, 768-hidden, 12-heads, 66M parameters                                                                                       |
|                   |                                                            | | The DistilBERT model distilled from the BERT model ``bert-base-uncased`` checkpoint, with an additional linear layer.               |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilgpt2``                                             | | 6-layer, 768-hidden, 12-heads, 82M parameters                                                                                       |
|                   |                                                            | | The DistilGPT2 model distilled from the GPT-2 model ``gpt2`` checkpoint.                                                            |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilroberta-base``                                     | | 6-layer, 768-hidden, 12-heads, 82M parameters                                                                                       |
|                   |                                                            | | The DistilRoBERTa model distilled from the RoBERTa model ``roberta-base`` checkpoint.                                               |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| CTRL              | ``ctrl``                                                   | | 48-layer, 1280-hidden, 16-heads, 1.6B parameters                                                                                    |
|                   |                                                            | | Salesforce's Large-sized CTRL English model                                                                                         |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| CamemBERT         | ``camembert-base``                                         | | 12-layer, 768-hidden, 12-heads, 110M parameters                                                                                     |
|                   |                                                            | | CamemBERT using the BERT-base architecture                                                                                          |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/camembert>`__)                                                 |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| ALBERT            | ``albert-base-v1``                                         | | 12 repeating layers, 128 embedding, 768-hidden, 12-heads, 11M parameters                                                            |
|                   |                                                            | | ALBERT base model                                                                                                                   |
|                   |                                                            | (see `details <https://github.com/google-research/google-research/tree/master/albert>`__)                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-large-v1``                                        | | 24 repeating layers, 128 embedding, 1024-hidden, 16-heads, 17M parameters                                                           |
|                   |                                                            | | ALBERT large model                                                                                                                  |
|                   |                                                            | (see `details <https://github.com/google-research/google-research/tree/master/albert>`__)                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xlarge-v1``                                       | | 24 repeating layers, 128 embedding, 2048-hidden, 16-heads, 58M parameters                                                           |
|                   |                                                            | | ALBERT xlarge model                                                                                                                 |
|                   |                                                            | (see `details <https://github.com/google-research/google-research/tree/master/albert>`__)                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xxlarge-v1``                                      | | 12 repeating layers, 128 embedding, 4096-hidden, 64-heads, 223M parameters                                                          |
|                   |                                                            | | ALBERT xxlarge model                                                                                                                |
|                   |                                                            | (see `details <https://github.com/google-research/google-research/tree/master/albert>`__)                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-base-v2``                                         | | 12 repeating layers, 128 embedding, 768-hidden, 12-heads, 11M parameters                                                            |
|                   |                                                            | | ALBERT base model with no dropout, additional training data and longer training                                                     |
|                   |                                                            | (see `details <https://github.com/google-research/google-research/tree/master/albert>`__)                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-large-v2``                                        | | 24 repeating layers, 128 embedding, 1024-hidden, 16-heads, 17M parameters                                                           |
|                   |                                                            | | ALBERT large model with no dropout, additional training data and longer training                                                    |
|                   |                                                            | (see `details <https://github.com/google-research/google-research/tree/master/albert>`__)                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xlarge-v2``                                       | | 24 repeating layers, 128 embedding, 2048-hidden, 16-heads, 58M parameters                                                           |
|                   |                                                            | | ALBERT xlarge model with no dropout, additional training data and longer training                                                   |
|                   |                                                            | (see `details <https://github.com/google-research/google-research/tree/master/albert>`__)                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xxlarge-v2``                                      | | 12 repeating layers, 128 embedding, 4096-hidden, 64-heads, 223M parameters                                                          |
|                   |                                                            | | ALBERT xxlarge model with no dropout, additional training data and longer training                                                  |
|                   |                                                            | (see `details <https://github.com/google-research/google-research/tree/master/albert>`__)                                             |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
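
Several entries in the table are task-specific fine-tuned checkpoints (on SQuAD, MRPC or MNLI) rather than plain pretrained language models. A minimal sketch of loading one of them together with its matching task head, here assuming the SQuAD question-answering setup:

.. code-block:: python

    from transformers import BertForQuestionAnswering, BertTokenizer

    # Fine-tuned checkpoints ship with the weights of their task head, so the
    # model below is ready for SQuAD-style extractive question answering
    name = "bert-large-uncased-whole-word-masking-finetuned-squad"
    tokenizer = BertTokenizer.from_pretrained(name)
    model = BertForQuestionAnswering.from_pretrained(name)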
