chenpangpang / transformers · Commits
"git@developer.sourcefind.cn:chenpangpang/transformers.git" did not exist on "9ecd83dace3961eaa161405814b00ea595c86451"
Commit 9d381e7b authored Jul 17, 2019 by LysandreJik

Fixed incorrect links in the PretrainedModel

parent 117ed929
Showing 1 changed file with 4 additions and 4 deletions
docs/source/pretrained_models.rst (+4 -4)
@@ -43,8 +43,8 @@ Here is the full list of the currently provided pretrained models together with
 | | | (see `details <https://github.com/google-research/bert/#bert>`__) |
 | +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
 | | ``bert-large-uncased-whole-word-masking-finetuned-squad`` | 24-layer, 1024-hidden, 16-heads, 340M parameters |
-| | | The ``bert-large-uncased-whole-word-masking`` model fine-tuned on SQuAD |
-| | | (see details of fine-tuning in the `example section`__) |
+| | | The ``bert-large-uncased-whole-word-masking`` model fine-tuned on SQuAD (see details of fine-tuning in the |
+| | | `example section <https://github.com/huggingface/pytorch-transformers/tree/master/examples>`__) |
 | +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
 | | ``bert-large-cased-whole-word-masking-finetuned-squad`` | 24-layer, 1024-hidden, 16-heads, 340M parameters |
 | | | The ``bert-large-cased-whole-word-masking`` model fine-tuned on SQuAD |
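The shortcut strings in this table are the names passed to ``from_pretrained`` to resolve a downloadable checkpoint. As a minimal sketch (not part of this commit; it assumes pytorch-transformers as released in July 2019 and a working download), the SQuAD fine-tuned checkpoint referenced in this hunk can be loaded and its configuration compared against the 24-layer, 1024-hidden, 16-heads entry:

```python
# Minimal sketch (not from this commit): load the checkpoint listed above by
# its shortcut name and check the configuration against the table entry.
from pytorch_transformers import BertForQuestionAnswering, BertTokenizer

name = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = BertTokenizer.from_pretrained(name)          # vocab is downloaded and cached
model = BertForQuestionAnswering.from_pretrained(name)   # weights are downloaded and cached

config = model.config
print(config.num_hidden_layers)    # expected: 24 (24-layer)
print(config.hidden_size)          # expected: 1024 (1024-hidden)
print(config.num_attention_heads)  # expected: 16 (16-heads)
```

The cased variant in the next row is loaded the same way, using its own shortcut name.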
@@ -85,10 +85,10 @@ Here is the full list of the currently provided pretrained models together with
 | | | XLM English-Romanian Multi-language model |
 | +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
 | | ``xlm-mlm-xnli15-1024`` | 12-layer, 1024-hidden, 8-heads |
-| | | XLM Model pre-trained with MLM on the `15 XNLI languages<https://github.com/facebookresearch/XNLI>`__. |
+| | | XLM Model pre-trained with MLM on the `15 XNLI languages <https://github.com/facebookresearch/XNLI>`__. |
 | +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
 | | ``xlm-mlm-tlm-xnli15-1024`` | 12-layer, 1024-hidden, 8-heads |
-| | | XLM Model pre-trained with MLM + TLM on the `15 XNLI languages<https://github.com/facebookresearch/XNLI>`__. |
+| | | XLM Model pre-trained with MLM + TLM on the `15 XNLI languages <https://github.com/facebookresearch/XNLI>`__. |
 | +------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------+
 | | ``xlm-clm-enfr-1024`` | 12-layer, 1024-hidden, 8-heads |
 | | | XLM English model trained with CLM (Causal Language Modeling) |
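The XLM rows whose XNLI links are corrected in this hunk can be exercised the same way. Another minimal sketch (again not part of the commit; the configuration attribute names below follow the XLM-style config and are an assumption of this example):

```python
# Minimal sketch (not from this commit): load one of the XLM checkpoints above
# by its shortcut name; the config fields mirror the 12-layer, 1024-hidden,
# 8-heads description in the table (attribute names assumed from the XLM config).
from pytorch_transformers import XLMModel, XLMTokenizer

name = "xlm-mlm-xnli15-1024"
tokenizer = XLMTokenizer.from_pretrained(name)
model = XLMModel.from_pretrained(name)

config = model.config
print(config.n_layers)  # expected: 12
print(config.emb_dim)   # expected: 1024
print(config.n_heads)   # expected: 8
```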