Pretrained models
================================================

Here is the full list of the currently provided pretrained models together with a short presentation of each model.
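
Each shortcut name below can be passed to the ``from_pretrained()`` method of the matching model and tokenizer classes, which downloads and caches the corresponding weights and vocabulary. A minimal sketch, assuming ``pytorch-transformers`` and PyTorch are installed, using ``bert-base-uncased`` as an example (the other shortcut names are used the same way with their own architecture classes, e.g. ``GPT2Model`` for ``gpt2``):

.. code-block:: python

    import torch
    from pytorch_transformers import BertTokenizer, BertModel

    # Download (and cache) the vocabulary and weights registered under the shortcut name
    tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
    model = BertModel.from_pretrained('bert-base-uncased')

    # Encode an example sentence and retrieve the last layer's hidden states
    input_ids = torch.tensor([tokenizer.encode("Hello, my dog is cute")])
    with torch.no_grad():
        last_hidden_states = model(input_ids)[0]  # (batch_size, sequence_length, hidden_size)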


+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Architecture      | Shortcut name                                              | Details of the model                                                                                                                  |
+===================+============================================================+=======================================================================================================================================+
| BERT              | ``bert-base-uncased``                                      | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on lower-cased English text.                                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased``                                     | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on lower-cased English text.                                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-cased``                                        | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased English text.                                                                                                      |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased``                                       | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on cased English text.                                                                                                      |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-multilingual-uncased``                         | | (Original, not recommended) 12-layer, 768-hidden, 12-heads, 110M parameters.                                                        |
|                   |                                                            | | Trained on lower-cased text in the top 102 languages with the largest Wikipedias                                                    |
|                   |                                                            | (see `details <https://github.com/google-research/bert/blob/master/multilingual.md>`__).                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-multilingual-cased``                           | | (New, **recommended**) 12-layer, 768-hidden, 12-heads, 110M parameters.                                                             |
|                   |                                                            | | Trained on cased text in the top 104 languages with the largest Wikipedias                                                          |
|                   |                                                            | (see `details <https://github.com/google-research/bert/blob/master/multilingual.md>`__).                                              |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-chinese``                                      | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased Chinese Simplified and Traditional text.                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-cased``                                 | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | Trained on cased German text by Deepset.ai                                                                                          |
|                   |                                                            | (see `details on deepset.ai website <https://deepset.ai/german-bert>`__).                                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased-whole-word-masking``                  | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on lower-cased English text using Whole-Word-Masking                                                                        |
|                   |                                                            | (see `details <https://github.com/google-research/bert/#bert>`__).                                                                    |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased-whole-word-masking``                    | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | Trained on cased English text using Whole-Word-Masking                                                                              |
|                   |                                                            | (see `details <https://github.com/google-research/bert/#bert>`__).                                                                    |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased-whole-word-masking-finetuned-squad``  | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | The ``bert-large-uncased-whole-word-masking`` model fine-tuned on SQuAD                                                             |
|                   |                                                            | (see details of fine-tuning in the `example section <https://github.com/huggingface/pytorch-transformers/tree/master/examples>`__).   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased-whole-word-masking-finetuned-squad``    | | 24-layer, 1024-hidden, 16-heads, 340M parameters                                                                                    |
|                   |                                                            | | The ``bert-large-cased-whole-word-masking`` model fine-tuned on SQuAD                                                               |
|                   |                                                            | (see `details of fine-tuning in the example section <https://huggingface.co/pytorch-transformers/examples.html>`__)                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-cased-finetuned-mrpc``                         | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | The ``bert-base-cased`` model fine-tuned on MRPC                                                                                    |
|                   |                                                            | (see `details of fine-tuning in the example section <https://huggingface.co/pytorch-transformers/examples.html>`__)                   |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| GPT               | ``openai-gpt``                                             | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | OpenAI GPT English model                                                                                                            |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| GPT-2             | ``gpt2``                                                   | | 12-layer, 768-hidden, 12-heads, 117M parameters.                                                                                    |
|                   |                                                            | | OpenAI GPT-2 English model                                                                                                          |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-medium``                                            | | 24-layer, 1024-hidden, 16-heads, 345M parameters.                                                                                   |
|                   |                                                            | | OpenAI's Medium-sized GPT-2 English model                                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-large``                                             | | 36-layer, 1280-hidden, 20-heads, 774M parameters.                                                                                   |
|                   |                                                            | | OpenAI's Large-sized GPT-2 English model                                                                                            |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| Transformer-XL    | ``transfo-xl-wt103``                                       | | 18-layer, 1024-hidden, 16-heads, 257M parameters.                                                                                   |
|                   |                                                            | | English model trained on wikitext-103                                                                                               |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| XLNet             | ``xlnet-base-cased``                                       | | 12-layer, 768-hidden, 12-heads, 110M parameters.                                                                                    |
|                   |                                                            | | XLNet English model                                                                                                                 |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlnet-large-cased``                                      | | 24-layer, 1024-hidden, 16-heads, 340M parameters.                                                                                   |
|                   |                                                            | | XLNet Large English model                                                                                                           |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| XLM               | ``xlm-mlm-en-2048``                                        | | 12-layer, 2048-hidden, 16-heads                                                                                                     |
|                   |                                                            | | XLM English model                                                                                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-ende-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-German Multi-language model                                                                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-enfr-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-French Multi-language model                                                                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-enro-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-Romanian Multi-language model                                                                                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-xnli15-1024``                                    | | 12-layer, 1024-hidden, 8-heads                                                                                                      |
|                   |                                                            | | XLM Model pre-trained with MLM on the `15 XNLI languages <https://github.com/facebookresearch/XNLI>`__.                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-tlm-xnli15-1024``                                | | 12-layer, 1024-hidden, 8-heads                                                                                                      |
|                   |                                                            | | XLM Model pre-trained with MLM + TLM on the `15 XNLI languages <https://github.com/facebookresearch/XNLI>`__.                       |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-clm-enfr-1024``                                      | | 12-layer, 1024-hidden, 8-heads                                                                                                      |
|                   |                                                            | | XLM English model trained with CLM (Causal Language Modeling)                                                                       |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-clm-ende-1024``                                      | | 6-layer, 1024-hidden, 8-heads                                                                                                       |
|                   |                                                            | | XLM English-German Multi-language model trained with CLM (Causal Language Modeling)                                                 |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| RoBERTa           | ``roberta-base``                                           | | 12-layer, 768-hidden, 12-heads, 125M parameters                                                                                     |
|                   |                                                            | | RoBERTa using the BERT-base architecture                                                                                            |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large``                                          | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | RoBERTa using the BERT-large architecture                                                                                           |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``roberta-large-mnli``                                     | | 24-layer, 1024-hidden, 16-heads, 355M parameters                                                                                    |
|                   |                                                            | | ``roberta-large`` fine-tuned on `MNLI <http://www.nyu.edu/projects/bowman/multinli/>`__.                                            |
|                   |                                                            | (see `details <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`__)                                                   |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| DistilBERT        | ``distilbert-base-uncased``                                | | 6-layer, 768-hidden, 12-heads, 66M parameters                                                                                       |
|                   |                                                            | | The DistilBERT model distilled from the BERT model `bert-base-uncased` checkpoint                                                   |
|                   |                                                            | (see `details <https://medium.com/@victorsanh/8cf3380435b5>`__)                                                                       |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
|                   | ``distilbert-base-uncased-distilled-squad``                | | 6-layer, 768-hidden, 12-heads, 66M parameters                                                                                       |
|                   |                                                            | | The DistilBERT model distilled from the BERT model `bert-base-uncased` checkpoint, with an additional linear layer.                 |
|                   |                                                            | (see `details <https://medium.com/@victorsanh/8cf3380435b5>`__)                                                                       |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------+

.. <https://huggingface.co/pytorch-transformers/examples.html>`__