"git@developer.sourcefind.cn:sugon_wxj/megatron-lm.git" did not exist on "4c92ca82c5c6f7157246abdaa83a0d65aab19630"
pretrained_models.rst 18.8 KB
Newer Older
thomwolf's avatar
thomwolf committed
1
2
3
4
5
6
Pretrained models
================================================

Here is the full list of the currently provided pretrained models, together with a short presentation of each model. A minimal usage sketch showing how these shortcut names are loaded follows the table.


+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
| Architecture      | Shortcut name                                              | Details of the model                                                                                                      |
+===================+============================================================+===========================================================================================================================+
| BERT              | ``bert-base-uncased``                                      | 12-layer, 768-hidden, 12-heads, 110M parameters                                                                           |
|                   |                                                            | Trained on lower-cased English text                                                                                       |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased``                                     | 24-layer, 1024-hidden, 16-heads, 340M parameters                                                                          |
|                   |                                                            | Trained on lower-cased English text                                                                                       |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-cased``                                        | 12-layer, 768-hidden, 12-heads, 110M parameters                                                                           |
|                   |                                                            | Trained on cased English text                                                                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased``                                       | 24-layer, 1024-hidden, 16-heads, 340M parameters                                                                          |
|                   |                                                            | Trained on cased English text                                                                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-multilingual-uncased``                         | (Original, not recommended) 12-layer, 768-hidden, 12-heads, 110M parameters                                               |
|                   |                                                            | Trained on lower-cased text in the top 102 languages with the largest Wikipedias                                          |
|                   |                                                            | (see `details <https://github.com/google-research/bert/blob/master/multilingual.md>`__)                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-multilingual-cased``                           | (New, **recommended**) 12-layer, 768-hidden, 12-heads, 110M parameters                                                    |
|                   |                                                            | Trained on cased text in the top 104 languages with the largest Wikipedias                                                |
|                   |                                                            | (see `details <https://github.com/google-research/bert/blob/master/multilingual.md>`__)                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-chinese``                                      | 12-layer, 768-hidden, 12-heads, 110M parameters                                                                           |
|                   |                                                            | Trained on cased Chinese Simplified and Traditional text                                                                  |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-german-cased``                                 | 12-layer, 768-hidden, 12-heads, 110M parameters                                                                           |
|                   |                                                            | Trained on cased German text by Deepset.ai                                                                                |
|                   |                                                            | (see `details on deepset.ai website <https://deepset.ai/german-bert>`__)                                                  |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased-whole-word-masking``                  | 24-layer, 1024-hidden, 16-heads, 340M parameters                                                                          |
|                   |                                                            | Trained on lower-cased English text using Whole-Word-Masking                                                              |
|                   |                                                            | (see `details <https://github.com/google-research/bert/#bert>`__)                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased-whole-word-masking``                    | 24-layer, 1024-hidden, 16-heads, 340M parameters                                                                          |
|                   |                                                            | Trained on cased English text using Whole-Word-Masking                                                                    |
|                   |                                                            | (see `details <https://github.com/google-research/bert/#bert>`__)                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-uncased-whole-word-masking-finetuned-squad``  | 24-layer, 1024-hidden, 16-heads, 340M parameters                                                                          |
|                   |                                                            | The ``bert-large-uncased-whole-word-masking`` model fine-tuned on SQuAD (see details of fine-tuning in the                |
|                   |                                                            | `example section <https://github.com/huggingface/pytorch-transformers/tree/master/examples>`__)                           |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-large-cased-whole-word-masking-finetuned-squad``    | 24-layer, 1024-hidden, 16-heads, 340M parameters                                                                          |
|                   |                                                            | The ``bert-large-cased-whole-word-masking`` model fine-tuned on SQuAD                                                     |
|                   |                                                            | (see `details of fine-tuning in the example section <https://huggingface.co/pytorch-transformers/examples.html>`__)       |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``bert-base-cased-finetuned-mrpc``                         | 12-layer, 768-hidden, 12-heads, 110M parameters                                                                           |
|                   |                                                            | The ``bert-base-cased`` model fine-tuned on MRPC                                                                          |
|                   |                                                            | (see `details of fine-tuning in the example section <https://huggingface.co/pytorch-transformers/examples.html>`__)       |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
| GPT               | ``openai-gpt``                                             | 12-layer, 768-hidden, 12-heads, 110M parameters                                                                           |
|                   |                                                            | OpenAI GPT English model                                                                                                  |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
| GPT-2             | ``gpt2``                                                   | 12-layer, 768-hidden, 12-heads, 117M parameters                                                                           |
|                   |                                                            | OpenAI GPT-2 English model                                                                                                |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``gpt2-medium``                                            | 24-layer, 1024-hidden, 16-heads, 345M parameters                                                                          |
|                   |                                                            | OpenAI's Medium-sized GPT-2 English model                                                                                 |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
| Transformer-XL    | ``transfo-xl-wt103``                                       | 18-layer, 1024-hidden, 16-heads, 257M parameters                                                                          |
|                   |                                                            | English model trained on wikitext-103                                                                                     |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
| XLNet             | ``xlnet-base-cased``                                       | 12-layer, 768-hidden, 12-heads, 110M parameters                                                                           |
|                   |                                                            | XLNet English model                                                                                                       |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlnet-large-cased``                                      | 24-layer, 1024-hidden, 16-heads, 340M parameters                                                                          |
|                   |                                                            | XLNet Large English model                                                                                                 |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
| XLM               | ``xlm-mlm-en-2048``                                        | 12-layer, 1024-hidden, 8-heads                                                                                            |
|                   |                                                            | XLM English model                                                                                                         |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-ende-1024``                                      | 12-layer, 1024-hidden, 8-heads                                                                                            |
|                   |                                                            | XLM English-German Multi-language model                                                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-enfr-1024``                                      | 12-layer, 1024-hidden, 8-heads                                                                                            |
|                   |                                                            | XLM English-French Multi-language model                                                                                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-enro-1024``                                      | 12-layer, 1024-hidden, 8-heads                                                                                            |
|                   |                                                            | XLM English-Romanian Multi-language model                                                                                 |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-xnli15-1024``                                    | 12-layer, 1024-hidden, 8-heads                                                                                            |
|                   |                                                            | XLM Model pre-trained with MLM on the `15 XNLI languages <https://github.com/facebookresearch/XNLI>`__.                   |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-mlm-tlm-xnli15-1024``                                | 12-layer, 1024-hidden, 8-heads                                                                                            |
|                   |                                                            | XLM Model pre-trained with MLM + TLM on the `15 XNLI languages <https://github.com/facebookresearch/XNLI>`__.             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-clm-enfr-1024``                                      | 12-layer, 1024-hidden, 8-heads                                                                                            |
|                   |                                                            | XLM English model trained with CLM (Causal Language Modeling)                                                             |
|                   +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
|                   | ``xlm-clm-ende-1024``                                      | 12-layer, 1024-hidden, 8-heads                                                                                            |
|                   |                                                            | XLM English-German Multi-language model trained with CLM (Causal Language Modeling)                                       |
+-------------------+------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
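
Any shortcut name from the table above can be passed to the library's ``from_pretrained()`` methods, which download and cache the corresponding weights and vocabulary files. The snippet below is a minimal sketch using ``bert-base-uncased``; substitute any other shortcut name together with its matching model and tokenizer classes (e.g. ``GPT2Model``/``GPT2Tokenizer`` for the ``gpt2`` checkpoints).

.. code-block:: python

    import torch
    from pytorch_transformers import BertModel, BertTokenizer

    # Download (on first use) and cache the vocabulary and weights
    # associated with the shortcut name.
    tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
    model = BertModel.from_pretrained('bert-base-uncased')
    model.eval()

    # Encode a sentence and run it through the model; for ``bert-base-uncased``
    # the hidden size is 768, matching the table above.
    input_ids = torch.tensor([tokenizer.encode("Hello, world!")])
    with torch.no_grad():
        last_hidden_states = model(input_ids)[0]  # (1, sequence_length, 768)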
