"ts/nni_manager/vscode:/vscode.git/clone" did not exist on "14d2966b9e91ae16dcc39de8f41017a75cec8ff9"
Pretrained models
================================================

Here is the full list of the currently provided pretrained models, together with a short description of each model.
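
Each shortcut name in the table below can be passed directly to ``from_pretrained``; the matching weights, configuration and vocabulary are downloaded and cached automatically on first use. A minimal sketch, assuming PyTorch is installed (``bert-base-uncased`` is just an arbitrary pick from the table):

.. code-block:: python

    from transformers import AutoModel, AutoTokenizer

    # Any shortcut name from the table works here; the corresponding files
    # are downloaded and cached locally on the first call.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    # Encode a sentence and run it through the model (PyTorch tensors).
    input_ids = tokenizer.encode("Hello, world!", return_tensors="pt")
    outputs = model(input_ids)  # tuple whose first element is the last hidden state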


+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Architecture      | Shortcut name                                              | Details of the model                                                                                                                  |
+===================+============================================================+=======================================================================================================================================+
| BERT              | ``bert-base-uncased``                                      | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on lower-cased English text.                                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased``                                     | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on lower-cased English text.                                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-cased``                                        | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased English text.                                                                                                      |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased``                                       | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on cased English text.                                                                                                      |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-multilingual-uncased``                         | | (Original, not recommended) 12-layer, 768-hidden, 12-heads, 110M parameters.                                                        |
|                   |                                                            | | Trained on lower-cased text in the top 102 languages with the largest Wikipedias                                                    |
|                   |                                                            | (see `details <https://github.com/google-research/bert/blob/master/multilingual.md>`__).                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-multilingual-cased``                           | | (New, **recommended**) 12-layer, 768-hidden, 12-heads, 110M parameters.                                                             |
|                   |                                                            | | Trained on cased text in the top 104 languages with the largest Wikipedias                                                          |
|                   |                                                            | (see `details <https://github.com/google-research/bert/blob/master/multilingual.md>`__).                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-chinese``                                      | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased Chinese Simplified and Traditional text.                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-cased``                                 | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased German text by Deepset.ai                                                                                          |
|                   |                                                            | (see `details on deepset.ai website <https://deepset.ai/german-bert>`__).                                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased-whole-word-masking``                  | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on lower-cased English text using Whole-Word-Masking                                                                        |
|                   |                                                            | (see `details <https://github.com/google-research/bert/#bert>`__).                                                                    |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased-whole-word-masking``                    | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on cased English text using Whole-Word-Masking                                                                              |
|                   |                                                            | (see `details <https://github.com/google-research/bert/#bert>`__).                                                                    |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased-whole-word-masking-finetuned-squad``  | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | The ``bert-large-uncased-whole-word-masking`` model fine-tuned on SQuAD                                                             |
|                   |                                                            | (see details of fine-tuning in the `example section <https://github.com/huggingface/transformers/tree/master/examples>`__).           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased-whole-word-masking-finetuned-squad``    | | 24-layer, 1024-hidden, 16-heads, 340M parameters                                                                                    |
|                   |                                                            | | The ``bert-large-cased-whole-word-masking`` model fine-tuned on SQuAD                                                               |
|                   |                                                            | (see `details of fine-tuning in the example section <https://huggingface.co/transformers/examples.html>`__)                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-cased-finetuned-mrpc``                         | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | The ``bert-base-cased`` model fine-tuned on MRPC                                                                                    |
|                   |                                                            | (see `details of fine-tuning in the example section <https://huggingface.co/transformers/examples.html>`__)                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-dbmdz-cased``                           | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased German text by DBMDZ                                                                                               |
|                   |                                                            | (see `details on dbmdz repository <https://github.com/dbmdz/german-bert>`__).                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-dbmdz-uncased``                         | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on uncased German text by DBMDZ                                                                                             |
|                   |                                                            | (see `details on dbmdz repository <https://github.com/dbmdz/german-bert>`__).                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-japanese``                                     | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on Japanese text. Text is tokenized with MeCab and WordPiece.                                                               |
|                   |                                                            | | `MeCab <https://taku910.github.io/mecab/>`__ is required for tokenization.                                                          |
|                   |                                                            | (see `details on cl-tohoku repository <https://github.com/cl-tohoku/bert-japanese>`__).                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-japanese-whole-word-masking``                  | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on Japanese text using Whole-Word-Masking. Text is tokenized with MeCab and WordPiece.                                      |
|                   |                                                            | | `MeCab <https://taku910.github.io/mecab/>`__ is required for tokenization.                                                          |
|                   |                                                            | (see `details on cl-tohoku repository <https://github.com/cl-tohoku/bert-japanese>`__).                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-japanese-char``                                | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on Japanese text. Text is tokenized into characters.                                                                        |
|                   |                                                            | (see `details on cl-tohoku repository <https://github.com/cl-tohoku/bert-japanese>`__).                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-japanese-char-whole-word-masking``             | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on Japanese text using Whole-Word-Masking. Text is tokenized into characters.                                               |
|                   |                                                            | (see `details on cl-tohoku repository <https://github.com/cl-tohoku/bert-japanese>`__).                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| GPT               | ``openai-gpt``                                             | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | OpenAI GPT English model                                                                                                            |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| GPT-2             | ``gpt2``                                                   | | 12-layer, 768-hidden, 12-heads, 117M parameters.                                                                                    |
|                   |                                                            | | OpenAI GPT-2 English model                                                                                                          |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-medium``                                            | | 24-layer, 1024-hidden, 16-heads, 345M parameters.                                                                                   |
|                   |                                                            | | OpenAI's Medium-sized GPT-2 English model                                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-large``                                             | | 36-layer, 1280-hidden, 20-heads, 774M parameters.                                                                                   |
|                   |                                                            | | OpenAI's Large-sized GPT-2 English model                                                                                            |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-xl``                                                | | 48-layer, 1600-hidden, 25-heads, 1558M parameters.                                                                                  |
|                   |                                                            | | OpenAI's XL-sized GPT-2 English model                                                                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Transformer-XL    | ``transfo-xl-wt103``                                       | | 18-layer, 1024-hidden, 16-heads, 257M parameters.                                                                                   |
|                   |                                                            | | English model trained on wikitext-103                                                                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| XLNet             | ``xlnet-base-cased``                                       | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | XLNet English model                                                                                                                 |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlnet-large-cased``                                      | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | XLNet Large English model                                                                                                           |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| XLM               | ``xlm-mlm-en-2048``                                        | | 12-layer, 2048-hidden, 16-heads                                                                                                     |
|                   |                                                            | | XLM English model                                                                                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-ende-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-German model trained on the concatenation of English and German wikipedia                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-enfr-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-French model trained on the concatenation of English and French wikipedia                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-enro-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-Romanian Multi-language model                                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-xnli15-1024``                                    | | 12-layer, 1024-hidden, 8-heads                                                                                                      |
|                   |                                                            | | XLM Model pre-trained with MLM on the `15 XNLI languages <https://github.com/facebookresearch/XNLI>`__.                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-tlm-xnli15-1024``                                | | 12-layer, 1024-hidden, 8-heads                                                                                                      |
|                   |                                                            | | XLM Model pre-trained with MLM + TLM on the `15 XNLI languages <https://github.com/facebookresearch/XNLI>`__.                       |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-clm-enfr-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-French model trained with CLM (Causal Language Modeling) on the concatenation of English and French wikipedia           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-clm-ende-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-German model trained with CLM (Causal Language Modeling) on the concatenation of English and German wikipedia           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-17-1280``                                        | | 16-layer, 1280-hidden, 16-heads                                                                                                     |
|                   |                                                            | | XLM model trained with MLM (Masked Language Modeling) on 17 languages.                                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-100-1280``                                       | | 16-layer, 1280-hidden, 16-heads                                                                                                     |
|                   |                                                            | | XLM model trained with MLM (Masked Language Modeling) on 100 languages.                                                             |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| RoBERTa           | ``roberta-base``                                           | | 12-layer, 768-hidden, 12-heads, 125M parameters                                                                                     |
|                   |                                                            | | RoBERTa using the BERT-base architecture                                                                                            |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large``                                          | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | RoBERTa using the BERT-large architecture                                                                                           |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large-mnli``                                     | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | ``roberta-large`` fine-tuned on `MNLI <http://www.nyu.edu/projects/bowman/multinli/>`__.                                            |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-base-openai-detector``                           | | 12-layer, 768-hidden, 12-heads, 125M parameters                                                                                     |
|                   |                                                            | | ``roberta-base`` fine-tuned by OpenAI on the outputs of the 1.5B-parameter GPT-2 model.                                             |
|                   |                                                            | (see `details <https://github.com/openai/gpt-2-output-dataset/tree/master/detector>`__)                                               |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large-openai-detector``                          | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | ``roberta-large`` fine-tuned by OpenAI on the outputs of the 1.5B-parameter GPT-2 model.                                            |
|                   |                                                            | (see `details <https://github.com/openai/gpt-2-output-dataset/tree/master/detector>`__)                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| DistilBERT        | ``distilbert-base-uncased``                                | | 6-layer, 768-hidden, 12-heads, 66M parameters                                                                                       |
|                   |                                                            | | The DistilBERT model distilled from the BERT model `bert-base-uncased` checkpoint                                                   |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-uncased-distilled-squad``                | | 6-layer, 768-hidden, 12-heads, 66M parameters                                                                                       |
|                   |                                                            | | The DistilBERT model distilled from the BERT model `bert-base-uncased` checkpoint, with an additional linear layer.                 |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilgpt2``                                             | | 6-layer, 768-hidden, 12-heads, 82M parameters                                                                                       |
|                   |                                                            | | The DistilGPT2 model distilled from the GPT2 model `gpt2` checkpoint.                                                               |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilroberta-base``                                     | | 6-layer, 768-hidden, 12-heads, 82M parameters                                                                                       |
|                   |                                                            | | The DistilRoBERTa model distilled from the RoBERTa model `roberta-base` checkpoint.                                                 |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-german-cased``                           | | 6-layer, 768-hidden, 12-heads, 66M parameters                                                                                       |
|                   |                                                            | | The German DistilBERT model distilled from the German DBMDZ BERT model `bert-base-german-dbmdz-cased` checkpoint.                   |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-multilingual-cased``                     | | 6-layer, 768-hidden, 12-heads, 134M parameters                                                                                      |
|                   |                                                            | | The multilingual DistilBERT model distilled from the Multilingual BERT model `bert-base-multilingual-cased` checkpoint.             |
|                   |                                                            | (see `details <https://github.com/huggingface/transformers/tree/master/examples/distillation>`__)                                     |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| CTRL              | ``ctrl``                                                   | | 48-layer, 1280-hidden, 16-heads, 1.6B parameters                                                                                    |
|                   |                                                            | | Salesforce's Large-sized CTRL English model                                                                                         |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| CamemBERT         | ``camembert-base``                                         | | 12-layer, 768-hidden, 12-heads, 110M parameters                                                                                     |
|                   |                                                            | | CamemBERT using the BERT-base architecture                                                                                          |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/camembert>`__)                                                 |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| ALBERT            | ``albert-base-v1``                                         | | 12 repeating layers, 128 embedding, 768-hidden, 12-heads, 11M parameters                                                            |
|                   |                                                            | | ALBERT base model                                                                                                                   |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-large-v1``                                        | | 24 repeating layers, 128 embedding, 1024-hidden, 16-heads, 17M parameters                                                           |
|                   |                                                            | | ALBERT large model                                                                                                                  |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xlarge-v1``                                       | | 24 repeating layers, 128 embedding, 2048-hidden, 16-heads, 58M parameters                                                           |
|                   |                                                            | | ALBERT xlarge model                                                                                                                 |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xxlarge-v1``                                      | | 12 repeating layers, 128 embedding, 4096-hidden, 64-heads, 223M parameters                                                          |
|                   |                                                            | | ALBERT xxlarge model                                                                                                                |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-base-v2``                                         | | 12 repeating layers, 128 embedding, 768-hidden, 12-heads, 11M parameters                                                            |
|                   |                                                            | | ALBERT base model with no dropout, additional training data and longer training                                                     |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-large-v2``                                        | | 24 repeating layers, 128 embedding, 1024-hidden, 16-heads, 17M parameters                                                           |
|                   |                                                            | | ALBERT large model with no dropout, additional training data and longer training                                                    |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xlarge-v2``                                       | | 24 repeating layers, 128 embedding, 2048-hidden, 16-heads, 58M parameters                                                           |
|                   |                                                            | | ALBERT xlarge model with no dropout, additional training data and longer training                                                   |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``albert-xxlarge-v2``                                      | | 12 repeating layers, 128 embedding, 4096-hidden, 64-heads, 223M parameters                                                          |
|                   |                                                            | | ALBERT xxlarge model with no dropout, additional training data and longer training                                                  |
|                   |                                                            | (see `details <https://github.com/google-research/ALBERT>`__)                                                                         |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| T5                | ``t5-small``                                               | | 6-layer, 512-hidden, 8-heads, ~60M parameters                                                                                       |
|                   |                                                            | | T5 small model trained on the Colossal Clean Crawled Corpus (C4)                                                                    |
|                   |                                                            | (see `details <https://github.com/google-research/text-to-text-transfer-transformer>`__)                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``t5-base``                                                | | 12-layer, 768-hidden, 12-heads, ~220M parameters                                                                                    |
|                   |                                                            | | T5 base model trained on the Colossal Clean Crawled Corpus (C4)                                                                     |
|                   |                                                            | (see `details <https://github.com/google-research/text-to-text-transfer-transformer>`__)                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``t5-large``                                               | | 24-layer, 1024-hidden, 16-heads, ~770M parameters                                                                                   |
|                   |                                                            | | T5 large model trained on the Colossal Clean Crawled Corpus (C4)                                                                    |
|                   |                                                            | (see `details <https://github.com/google-research/text-to-text-transfer-transformer>`__)                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``t5-3b``                                                  | | 24-layer, 1024-hidden, 32-heads, ~2.8B parameters                                                                                   |
|                   |                                                            | | T5 3B model trained on the Colossal Clean Crawled Corpus (C4)                                                                       |
|                   |                                                            | (see `details <https://github.com/google-research/text-to-text-transfer-transformer>`__)                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``t5-11b``                                                 | | 24-layer, 1024-hidden, 128-heads, ~11B parameters                                                                                   |
|                   |                                                            | | T5 11B model trained on the Colossal Clean Crawled Corpus (C4)                                                                      |
|                   |                                                            | (see `details <https://github.com/google-research/text-to-text-transfer-transformer>`__)                                              |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
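
The fine-tuned checkpoints listed above (for instance ``bert-large-uncased-whole-word-masking-finetuned-squad`` or ``roberta-large-mnli``) are normally loaded with the model class that carries the matching task head rather than the bare encoder. A hedged sketch, again assuming PyTorch is installed:

.. code-block:: python

    from transformers import AutoModelForQuestionAnswering, AutoTokenizer

    # Illustrative only: the SQuAD fine-tuned checkpoint from the table,
    # loaded together with its question-answering head.
    name = "bert-large-uncased-whole-word-masking-finetuned-squad"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForQuestionAnswering.from_pretrained(name)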

