Pretrained models
================================================

Here is the full list of the currently provided pretrained models together with a short presentation of each model.

For a list that includes community-uploaded models, refer to `https://huggingface.co/models <https://huggingface.co/models>`__.
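
Each shortcut name in the table below can be passed directly to the library's ``from_pretrained`` methods, which download and cache the corresponding weights and vocabulary. As a minimal sketch (assuming the ``AutoModel`` and ``AutoTokenizer`` classes and a PyTorch backend), loading and running one of these checkpoints looks like this:

.. code-block:: python

    from transformers import AutoModel, AutoTokenizer

    # Any shortcut name from the table works here, e.g. "bert-base-uncased".
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    # Encode a sentence and run it through the model to get its hidden states.
    input_ids = tokenizer.encode("Hello, world!", return_tensors="pt")
    outputs = model(input_ids)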

+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Architecture      | Shortcut name                                              | Details of the model                                                                                                                  |
+===================+============================================================+=======================================================================================================================================+
| BERT              | ``bert-base-uncased``                                      | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on lower-cased English text.                                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased``                                     | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on lower-cased English text.                                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-cased``                                        | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased English text.                                                                                                      |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased``                                       | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on cased English text.                                                                                                      |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-multilingual-uncased``                         | | (Original, not recommended) 12-layer, 768-hidden, 12-heads, 110M parameters.                                                        |
|                   |                                                            | | Trained on lower-cased text in the top 102 languages with the largest Wikipedias                                                    |
|                   |                                                            | (see `details <https://github.com/google-research/bert/blob/master/multilingual.md>`__).                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-multilingual-cased``                           | | (New, **recommended**) 12-layer, 768-hidden, 12-heads, 110M parameters.                                                             |
|                   |                                                            | | Trained on cased text in the top 104 languages with the largest Wikipedias                                                          |
|                   |                                                            | (see `details <https://github.com/google-research/bert/blob/master/multilingual.md>`__).                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-chinese``                                      | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased Chinese Simplified and Traditional text.                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-cased``                                 | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased German text by Deepset.ai                                                                                          |
|                   |                                                            | (see `details on deepset.ai website <https://deepset.ai/german-bert>`__).                                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased-whole-word-masking``                  | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on lower-cased English text using Whole-Word-Masking                                                                        |
|                   |                                                            | (see `details <https://github.com/google-research/bert/#bert>`__).                                                                    |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased-whole-word-masking``                    | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on cased English text using Whole-Word-Masking                                                                              |
|                   |                                                            | (see `details <https://github.com/google-research/bert/#bert>`__).                                                                    |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased-whole-word-masking-finetuned-squad``  | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | The ``bert-large-uncased-whole-word-masking`` model fine-tuned on SQuAD                                                             |
|                   |                                                            | (see details of fine-tuning in the `example section <https://github.com/huggingface/transformers/tree/master/examples>`__).           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased-whole-word-masking-finetuned-squad``    | | 24-layer, 1024-hidden, 16-heads, 340M parameters                                                                                    |
|                   |                                                            | | The ``bert-large-cased-whole-word-masking`` model fine-tuned on SQuAD                                                               |
|                   |                                                            | (see `details of fine-tuning in the example section <https://huggingface.co/transformers/examples.html>`__)                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-cased-finetuned-mrpc``                         | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | The ``bert-base-cased`` model fine-tuned on MRPC                                                                                    |
|                   |                                                            | (see `details of fine-tuning in the example section <https://huggingface.co/transformers/examples.html>`__)                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-dbmdz-cased``                           | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased German text by DBMDZ                                                                                               |
|                   |                                                            | (see `details on dbmdz repository <https://github.com/dbmdz/german-bert>`__).                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-dbmdz-uncased``                         | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on uncased German text by DBMDZ                                                                                             |
|                   |                                                            | (see `details on dbmdz repository <https://github.com/dbmdz/german-bert>`__).                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-japanese``                                     | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on Japanese text. Text is tokenized with MeCab and WordPiece.                                                               |
|                   |                                                            | | `MeCab <https://taku910.github.io/mecab/>`__ is required for tokenization.                                                          |
|                   |                                                            | (see `details on cl-tohoku repository <https://github.com/cl-tohoku/bert-japanese>`__).                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-japanese-whole-word-masking``                  | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on Japanese text using Whole-Word-Masking. Text is tokenized with MeCab and WordPiece.                                      |
|                   |                                                            | | `MeCab <https://taku910.github.io/mecab/>`__ is required for tokenization.                                                          |
|                   |                                                            | (see `details on cl-tohoku repository <https://github.com/cl-tohoku/bert-japanese>`__).                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-japanese-char``                                | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on Japanese text. Text is tokenized into characters.                                                                        |
|                   |                                                            | (see `details on cl-tohoku repository <https://github.com/cl-tohoku/bert-japanese>`__).                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-japanese-char-whole-word-masking``             | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on Japanese text using Whole-Word-Masking. Text is tokenized into characters.                                               |
|                   |                                                            | (see `details on cl-tohoku repository <https://github.com/cl-tohoku/bert-japanese>`__).                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-finnish-cased-v1``                             | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased Finnish text.                                                                                                      |
|                   |                                                            | (see `details on turkunlp.org <http://turkunlp.org/FinBERT/>`__).                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-finnish-uncased-v1``                           | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on uncased Finnish text.                                                                                                    |
|                   |                                                            | (see `details on turkunlp.org <http://turkunlp.org/FinBERT/>`__).                                                                     |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| GPT               | ``openai-gpt``                                             | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | OpenAI GPT English model                                                                                                            |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| GPT-2             | ``gpt2``                                                   | | 12-layer, 768-hidden, 12-heads, 117M parameters.                                                                                    |
|                   |                                                            | | OpenAI GPT-2 English model                                                                                                          |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-medium``                                            | | 24-layer, 1024-hidden, 16-heads, 345M parameters.                                                                                   |
|                   |                                                            | | OpenAI's Medium-sized GPT-2 English model                                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-large``                                             | | 36-layer, 1280-hidden, 20-heads, 774M parameters.                                                                                   |
|                   |                                                            | | OpenAI's Large-sized GPT-2 English model                                                                                            |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-xl``                                                | | 48-layer, 1600-hidden, 25-heads, 1558M parameters.                                                                                  |
|                   |                                                            | | OpenAI's XL-sized GPT-2 English model                                                                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Transformer-XL    | ``transfo-xl-wt103``                                       | | 18-layer, 1024-hidden, 16-heads, 257M parameters.                                                                                   |
|                   |                                                            | | English model trained on wikitext-103                                                                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| XLNet             | ``xlnet-base-cased``                                       | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | XLNet English model                                                                                                                 |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlnet-large-cased``                                      | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | XLNet Large English model                                                                                                           |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| XLM               | ``xlm-mlm-en-2048``                                        | | 12-layer, 2048-hidden, 16-heads                                                                                                     |
|                   |                                                            | | XLM English model                                                                                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-ende-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-German model trained on the concatenation of English and German wikipedia                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-enfr-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-French model trained on the concatenation of English and French wikipedia                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-enro-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-Romanian Multi-language model                                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-xnli15-1024``                                    | | 12-layer, 1024-hidden, 8-heads                                                                                                      |
|                   |                                                            | | XLM Model pre-trained with MLM on the `15 XNLI languages <https://github.com/facebookresearch/XNLI>`__.                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-tlm-xnli15-1024``                                | | 12-layer, 1024-hidden, 8-heads                                                                                                      |
|                   |                                                            | | XLM Model pre-trained with MLM + TLM on the `15 XNLI languages <https://github.com/facebookresearch/XNLI>`__.                       |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-clm-enfr-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-French model trained with CLM (Causal Language Modeling) on the concatenation of English and French wikipedia           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-clm-ende-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-German model trained with CLM (Causal Language Modeling) on the concatenation of English and German wikipedia           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-17-1280``                                        | | 16-layer, 1280-hidden, 16-heads                                                                                                     |
|                   |                                                            | | XLM model trained with MLM (Masked Language Modeling) on 17 languages.                                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-100-1280``                                       | | 16-layer, 1280-hidden, 16-heads                                                                                                     |
|                   |                                                            | | XLM model trained with MLM (Masked Language Modeling) on 100 languages.                                                             |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| RoBERTa           | ``roberta-base``                                           | | 12-layer, 768-hidden, 12-heads, 125M parameters                                                                                     |
|                   |                                                            | | RoBERTa using the BERT-base architecture                                                                                            |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large``                                          | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | RoBERTa using the BERT-large architecture                                                                                           |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large-mnli``                                     | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | ``roberta-large`` fine-tuned on `MNLI <http://www.nyu.edu/projects/bowman/multinli/>`__.                                            |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilroberta-base``                                     | | 6-layer, 768-hidden, 12-heads, 82M parameters                                                                                       |
|                   |                                                            | | The DistilRoBERTa model distilled from the RoBERTa model `roberta-base` checkpoint.                                                 |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-base-openai-detector``                           | | 12-layer, 768-hidden, 12-heads, 125M parameters                                                                                     |
|                   |                                                            | | ``roberta-base`` fine-tuned by OpenAI on the outputs of the 1.5B-parameter GPT-2 model.                                             |
|                   |                                                            | (see `details <https://github.com/openai/gpt-2-output-dataset/tree/master/detector>`__)                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large-openai-detector``                          | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | ``roberta-large`` fine-tuned by OpenAI on the outputs of the 1.5B-parameter GPT-2 model.                                            |
|                   |                                                            | (see `details <https://github.com/openai/gpt-2-output-dataset/tree/master/detector>`__)                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| DistilBERT        | ``distilbert-base-uncased``                                | | 6-layer, 768-hidden, 12-heads, 66M parameters                                                                                       |
|                   |                                                            | | The DistilBERT model distilled from the BERT model `bert-base-uncased` checkpoint                                                   |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-uncased-distilled-squad``                | | 6-layer, 768-hidden, 12-heads, 66M parameters                                                                                       |
|                   |                                                            | | The DistilBERT model distilled from the BERT model `bert-base-uncased` checkpoint, with an additional linear layer.                 |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilgpt2``                                             | | 6-layer, 768-hidden, 12-heads, 82M parameters                                                                                       |
|                   |                                                            | | The DistilGPT2 model distilled from the GPT2 model `gpt2` checkpoint.                                                               |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-german-cased``                           | | 6-layer, 768-hidden, 12-heads, 66M parameters                                                                                       |
|                   |                                                            | | The German DistilBERT model distilled from the German DBMDZ BERT model `bert-base-german-dbmdz-cased` checkpoint.                   |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-multilingual-cased``                     | | 6-layer, 768-hidden, 12-heads, 134M parameters                                                                                      |
|                   |                                                            | | The multilingual DistilBERT model distilled from the Multilingual BERT model `bert-base-multilingual-cased` checkpoint.             |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| CTRL              | ``ctrl``                                                   | | 48-layer, 1280-hidden, 16-heads, 1.6B parameters                                                                                    |
|                   |                                                            | | Salesforce's Large-sized CTRL English model                                                                                         |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| CamemBERT         | ``camembert-base``                                         | | 12-layer, 768-hidden, 12-heads, 110M parameters                                                                                     |
|                   |                                                            | | CamemBERT using the BERT-base architecture                                                                                          |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/camembert>`__)                                                 |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| ALBERT            | ``albert-base-v1``                                         | | 12 repeating layers, 128 embedding, 768-hidden, 12-heads, 11M parameters                                                            |
|                   |                                                            | | ALBERT base model                                                                                                                   |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-large-v1``                                        | | 24 repeating layers, 128 embedding, 1024-hidden, 16-heads, 17M parameters                                                           |
|                   |                                                            | | ALBERT large model                                                                                                                  |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xlarge-v1``                                       | | 24 repeating layers, 128 embedding, 2048-hidden, 16-heads, 58M parameters                                                           |
|                   |                                                            | | ALBERT xlarge model                                                                                                                 |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xxlarge-v1``                                      | | 12 repeating layers, 128 embedding, 4096-hidden, 64-heads, 223M parameters                                                          |
|                   |                                                            | | ALBERT xxlarge model                                                                                                                |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-base-v2``                                         | | 12 repeating layers, 128 embedding, 768-hidden, 12-heads, 11M parameters                                                            |
|                   |                                                            | | ALBERT base model with no dropout, additional training data and longer training                                                     |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-large-v2``                                        | | 24 repeating layers, 128 embedding, 1024-hidden, 16-heads, 17M parameters                                                           |
|                   |                                                            | | ALBERT large model with no dropout, additional training data and longer training                                                    |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xlarge-v2``                                       | | 24 repeating layers, 128 embedding, 2048-hidden, 16-heads, 58M parameters                                                           |
|                   |                                                            | | ALBERT xlarge model with no dropout, additional training data and longer training                                                   |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xxlarge-v2``                                      | | 12 repeating layers, 128 embedding, 4096-hidden, 64-heads, 223M parameters                                                          |
|                   |                                                            | | ALBERT xxlarge model with no dropout, additional training data and longer training                                                  |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| T5                | ``t5-small``                                               | | ~60M parameters with 6-layers, 512-hidden-state, 2048 feed-forward hidden-state, 8-heads,                                           |
|                   |                                                            | | Trained on English text: the Colossal Clean Crawled Corpus (C4)                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``t5-base``                                                | | ~220M parameters with 12-layers, 768-hidden-state, 3072 feed-forward hidden-state, 12-heads,                                        |
|                   |                                                            | | Trained on English text: the Colossal Clean Crawled Corpus (C4)                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``t5-large``                                               | | ~770M parameters with 24-layers, 1024-hidden-state, 4096 feed-forward hidden-state, 16-heads,                                       |
|                   |                                                            | | Trained on English text: the Colossal Clean Crawled Corpus (C4)                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``t5-3B``                                                  | | ~2.8B parameters with 24-layers, 1024-hidden-state, 16384 feed-forward hidden-state, 32-heads,                                      |
|                   |                                                            | | Trained on English text: the Colossal Clean Crawled Corpus (C4)                                                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``t5-11B``                                                 | | ~11B parameters with 24-layers, 1024-hidden-state, 65536 feed-forward hidden-state, 128-heads,                                      |
|                   |                                                            | | Trained on English text: the Colossal Clean Crawled Corpus (C4)                                                                     |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| XLM-RoBERTa       | ``xlm-roberta-base``                                       | | ~270M parameters with 12-layers, 768-hidden-state, 3072 feed-forward hidden-state, 12-heads,                                        |
|                   |                                                            | | Trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages                                                          |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-roberta-large``                                      | | ~550M parameters with 24-layers, 1024-hidden-state, 4096 feed-forward hidden-state, 16-heads,                                       |
|                   |                                                            | | Trained on 2.5 TB of newly created clean CommonCrawl data in 100 languages                                                          |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
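
The fine-tuned checkpoints above (for example ``bert-large-uncased-whole-word-masking-finetuned-squad``) can be used for inference without any further training. A minimal sketch, assuming a version of the library that ships the ``pipeline`` helper:

.. code-block:: python

    from transformers import pipeline

    # Question answering with a SQuAD fine-tuned checkpoint from the table above.
    qa = pipeline(
        "question-answering",
        model="bert-large-uncased-whole-word-masking-finetuned-squad",
    )
    result = qa(
        question="What was the checkpoint fine-tuned on?",
        context="The bert-large-uncased-whole-word-masking model was fine-tuned on SQuAD.",
    )
    print(result["answer"], result["score"])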


.. <https://huggingface.co/transformers/examples.html>`__