"examples/text-classification/run_glue.py" did not exist on "e70cdf083ddb8bfe298d43e6d70d698a3a2f56d3"
roberta.rst 3.41 KB
Newer Older
LysandreJik's avatar
Doc  
LysandreJik committed
1
2
3
RoBERTa
----------------------------------------------------

The RoBERTa model was proposed in `RoBERTa: A Robustly Optimized BERT Pretraining Approach <https://arxiv.org/abs/1907.11692>`_
by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer
and Veselin Stoyanov. It is based on Google's BERT model released in 2018.

It builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining
objective and training with much larger mini-batches and learning rates.

The abstract from the paper is the following:

*Language model pretraining has led to significant performance gains but careful comparison between different
approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes,
and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication
study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and
training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of
every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These
results highlight the importance of previously overlooked design choices, and raise questions about the source
of recently reported improvements. We release our models and code.*

Tips:

- This implementation is the same as :class:`~transformers.BertModel` with a tiny tweak to the embeddings as well as
  a setup for the RoBERTa pretrained models.
- RoBERTa has the same architecture as BERT, but uses a byte-level BPE as a tokenizer (same as GPT-2) and uses a
  different pre-training scheme.
- RoBERTa doesn't have `token_type_ids`, so you don't need to indicate which token belongs to which segment. Just
  separate your segments with the separation token `tokenizer.sep_token` (or `</s>`), as in the sketch after this
  list.
- `Camembert <./camembert.html>`__ is a wrapper around RoBERTa. Refer to that page for usage examples.
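
A minimal sketch of the two tokenizer points above, assuming the `roberta-base` checkpoint can be downloaded. When
given a pair of segments, the tokenizer inserts the separator tokens itself, so there are no `token_type_ids` to
build:

.. code-block:: python

    from transformers import RobertaTokenizer

    tokenizer = RobertaTokenizer.from_pretrained('roberta-base')

    # Encoding a pair of segments: the tokenizer joins them with </s>
    # separators on its own; no token_type_ids input is needed.
    input_ids = tokenizer.encode("Segment one.", "Segment two.")
    print(tokenizer.decode(input_ids))
    # roughly: <s>Segment one.</s></s>Segment two.</s>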

The original code can be found `here <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`_.


RobertaConfig
~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RobertaConfig
    :members:
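
As a hedged illustration, a randomly initialised model can be built from a custom configuration; the values below
mirror the `roberta-base` hyperparameters and are shown for illustration only:

.. code-block:: python

    from transformers import RobertaConfig, RobertaModel

    # Build an untrained RoBERTa from a custom configuration.
    config = RobertaConfig(
        vocab_size=50265,          # byte-level BPE vocabulary
        hidden_size=768,
        num_hidden_layers=12,
        num_attention_heads=12,
    )
    model = RobertaModel(config)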


RobertaTokenizer
~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RobertaTokenizer
    :members: build_inputs_with_special_tokens, get_special_tokens_mask,
        create_token_type_ids_from_sequences, save_vocabulary


RobertaModel
~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RobertaModel
    :members:
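
A minimal usage sketch, assuming the `roberta-base` weights (PyTorch):

.. code-block:: python

    import torch
    from transformers import RobertaModel, RobertaTokenizer

    tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
    model = RobertaModel.from_pretrained('roberta-base')

    input_ids = torch.tensor([tokenizer.encode("Hello, RoBERTa!")])
    outputs = model(input_ids)
    last_hidden_states = outputs[0]  # (batch_size, sequence_length, hidden_size)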


RobertaForMaskedLM
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RobertaForMaskedLM
    :members:
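
A hedged sketch of masked-token prediction; note that RoBERTa's mask token is `<mask>`:

.. code-block:: python

    import torch
    from transformers import RobertaForMaskedLM, RobertaTokenizer

    tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
    model = RobertaForMaskedLM.from_pretrained('roberta-base')

    input_ids = torch.tensor([tokenizer.encode("The capital of France is <mask>.")])

    with torch.no_grad():
        prediction_scores = model(input_ids)[0]

    # Pick the highest-scoring token at the masked position.
    masked_index = (input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    predicted_id = prediction_scores[0, masked_index].argmax().item()
    print(tokenizer.decode([predicted_id]))  # e.g. " Paris"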


RobertaForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RobertaForSequenceClassification
    :members:
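
A hedged sketch for sequence classification; the classification head is newly initialised on top of the pretrained
encoder, so its outputs are meaningless until fine-tuned, and `num_labels=2` is an illustrative choice:

.. code-block:: python

    import torch
    from transformers import RobertaForSequenceClassification, RobertaTokenizer

    tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
    model = RobertaForSequenceClassification.from_pretrained('roberta-base', num_labels=2)

    input_ids = torch.tensor([tokenizer.encode("A sentence to classify.")])
    labels = torch.tensor([1])

    # With labels provided, the first two outputs are the loss and the logits.
    loss, logits = model(input_ids, labels=labels)[:2]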


RobertaForTokenClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RobertaForTokenClassification
    :members:
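
The token classification head follows the same pattern but produces one score per token; a minimal sketch with an
illustrative `num_labels`:

.. code-block:: python

    import torch
    from transformers import RobertaForTokenClassification, RobertaTokenizer

    tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
    model = RobertaForTokenClassification.from_pretrained('roberta-base', num_labels=9)

    input_ids = torch.tensor([tokenizer.encode("Hugging Face is based in New York City.")])
    logits = model(input_ids)[0]  # (batch_size, sequence_length, num_labels)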

TFRobertaModel
~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFRobertaModel
    :members:
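
The TensorFlow classes mirror the PyTorch API; a minimal sketch assuming TensorFlow 2 and the `roberta-base`
weights:

.. code-block:: python

    import tensorflow as tf
    from transformers import TFRobertaModel, RobertaTokenizer

    tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
    model = TFRobertaModel.from_pretrained('roberta-base')

    input_ids = tf.constant([tokenizer.encode("Hello, RoBERTa!")])
    outputs = model(input_ids)
    last_hidden_states = outputs[0]  # (batch_size, sequence_length, hidden_size)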


TFRobertaForMaskedLM
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFRobertaForMaskedLM
    :members:


TFRobertaForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFRobertaForSequenceClassification
    :members:


TFRobertaForTokenClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFRobertaForTokenClassification
    :members: