chenpangpang / transformers · Commit 3e1bc27e

Pytorch RoBERTa

Authored Jan 20, 2020 by Lysandre; committed by Lysandre Debut on Jan 23, 2020
Parent: f44ff574
Showing 3 changed files, with 234 additions and 280 deletions:

docs/source/model_doc/roberta.rst      +18   -8
src/transformers/modeling_bert.py       +1   -1
src/transformers/modeling_roberta.py  +215 -271
docs/source/model_doc/roberta.rst
RoBERTa
----------------------------------------------------
The RoBERTa model was proposed in `RoBERTa: A Robustly Optimized BERT Pretraining Approach`_
by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer,
and Veselin Stoyanov. It is based on Google's BERT model released in 2018.

It builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining
objective and training with much larger mini-batches and learning rates.

This implementation is the same as ``BertModel`` with a tiny tweak to the embeddings, as well as a setup for the
RoBERTa pretrained models.
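
For orientation, a minimal usage sketch of the PyTorch classes documented below. This example is not part of the
diff; the ``roberta-base`` checkpoint name is an assumption, not taken from this commit:

import torch
from transformers import RobertaModel, RobertaTokenizer

# Load a pretrained tokenizer and model (checkpoint name assumed).
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base")
model.eval()

# Encode a sentence and run a forward pass; the output is a tuple whose first
# element is the final hidden states.
input_ids = tokenizer.encode("Hello world", return_tensors="pt")
with torch.no_grad():
    outputs = model(input_ids)
last_hidden_state = outputs[0]  # (batch_size, sequence_length, hidden_size)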
-``RobertaConfig``
+RobertaConfig
~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.RobertaConfig
    :members:

-``RobertaTokenizer``
+RobertaTokenizer
~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.RobertaTokenizer
    :members:

-``RobertaModel``
+RobertaModel
~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.RobertaModel
    :members:

-``RobertaForMaskedLM``
+RobertaForMaskedLM
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.RobertaForMaskedLM
    :members:

-``RobertaForSequenceClassification``
+RobertaForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.RobertaForSequenceClassification
...
@@ -42,21 +52,21 @@ RobertaForTokenClassification
.. autoclass:: transformers.RobertaForTokenClassification
    :members:

-``TFRobertaModel``
+TFRobertaModel
~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.TFRobertaModel
    :members:

-``TFRobertaForMaskedLM``
+TFRobertaForMaskedLM
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.TFRobertaForMaskedLM
    :members:

-``TFRobertaForSequenceClassification``
+TFRobertaForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.TFRobertaForSequenceClassification
...
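
As a companion to the classification heads listed above, a hedged sketch (not from this commit) of
``RobertaForSequenceClassification`` in a fine-tuning-style forward pass; the label value is an illustrative
placeholder:

import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base")

input_ids = tokenizer.encode("A sentence to classify", return_tensors="pt")
labels = torch.tensor([1])  # hypothetical class label for a batch of one

# When labels are supplied, the output tuple starts with the classification loss.
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]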
src/transformers/modeling_bert.py
...
@@ -592,7 +592,7 @@ BERT_INPUTS_DOCSTRING = r"""
            Mask to nullify selected heads of the self-attention modules.
            Mask values selected in ``[0, 1]``:
            :obj:`1` indicates the head is **not masked**, :obj:`0` indicates the head is **masked**.
-        input_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`, defaults to :obj:`None`):
+        inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`, defaults to :obj:`None`):
            Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation.
            This is useful if you want more control over how to convert `input_ids` indices into associated vectors
            than the model's internal embedding lookup matrix.
...
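
The renamed ``inputs_embeds`` argument lets callers bypass the model's embedding lookup entirely. A minimal
sketch of that path, assuming a standard BERT checkpoint (illustrative, not part of this commit):

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

input_ids = tokenizer.encode("Hello world", return_tensors="pt")

# Compute embeddings manually, then pass them via `inputs_embeds` instead of
# `input_ids`, skipping the internal embedding lookup matrix.
embeds = model.get_input_embeddings()(input_ids)
with torch.no_grad():
    outputs = model(inputs_embeds=embeds)
last_hidden_state = outputs[0]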
src/transformers/modeling_roberta.py
This diff is collapsed.