RoBERTa
----------------------------------------------------

The RoBERTa model was proposed in `RoBERTa: A Robustly Optimized BERT Pretraining Approach <https://arxiv.org/abs/1907.11692>`_
by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer
and Veselin Stoyanov. It is based on Google's BERT model released in 2018.

It builds on BERT and modifies key hyperparameters, removing the next-sentence pretraining
objective and training with much larger mini-batches and learning rates.

The abstract from the paper is the following:

*Language model pretraining has led to significant performance gains but careful comparison between different
approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes,
and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication
study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and
training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of
every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These
results highlight the importance of previously overlooked design choices, and raise questions about the source
of recently reported improvements. We release our models and code.*

Tips:

- This implementation is the same as :class:`~transformers.BertModel` with a minor tweak to the embeddings as
  well as a setup for RoBERTa pretrained models.
- RoBERTa has the same architecture as BERT but uses a byte-level BPE as a tokenizer (same as GPT-2) and a
  different pretraining scheme.
- RoBERTa doesn't have ``token_type_ids``, so you don't need to indicate which token belongs to which
  segment. Just separate your segments with the separation token ``tokenizer.sep_token`` (or ``</s>``),
  as shown in the sketch after this list.
- `CamemBERT <./camembert.html>`__ is a wrapper around RoBERTa. Refer to that page for usage examples.
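
A minimal sketch of the ``token_type_ids`` point above (the ``roberta-base`` checkpoint name and the
example sentences are illustrative only):

.. code-block:: python

    from transformers import RobertaTokenizer

    tokenizer = RobertaTokenizer.from_pretrained('roberta-base')

    # Encoding a pair of segments produces no token_type_ids; the two
    # segments are simply joined with separator tokens, following the
    # pattern <s> A </s></s> B </s>.
    input_ids = tokenizer.encode("This is segment one.",
                                 "This is segment two.",
                                 add_special_tokens=True)
    print(tokenizer.decode(input_ids))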

The original code can be found `here <https://github.com/pytorch/fairseq/tree/master/examples/roberta>`_.


RobertaConfig
~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RobertaConfig
    :members:
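
For illustration, a short sketch of the two usual ways to obtain a configuration (the ``roberta-base``
checkpoint name is only an example):

.. code-block:: python

    from transformers import RobertaConfig, RobertaModel

    # A default configuration defines the architecture only; a model
    # built from it has randomly initialized weights.
    config = RobertaConfig()
    model = RobertaModel(config)

    # Alternatively, load the configuration that ships with a
    # pretrained checkpoint.
    config = RobertaConfig.from_pretrained('roberta-base')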


RobertaTokenizer
~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RobertaTokenizer
    :members: build_inputs_with_special_tokens, get_special_tokens_mask,
        create_token_type_ids_from_sequences, save_vocabulary
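
As a sketch of how ``build_inputs_with_special_tokens`` behaves (checkpoint name and sentences are
illustrative):

.. code-block:: python

    from transformers import RobertaTokenizer

    tokenizer = RobertaTokenizer.from_pretrained('roberta-base')

    # Token ids of each segment, without special tokens.
    ids_a = tokenizer.encode("Hello world", add_special_tokens=False)
    ids_b = tokenizer.encode("How are you?", add_special_tokens=False)

    # The two segments are wrapped with RoBERTa's special tokens,
    # following the pattern <s> A </s></s> B </s>.
    pair_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)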


RobertaTokenizerFast
~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RobertaTokenizerFast
    :members: build_inputs_with_special_tokens


RobertaModel
~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RobertaModel
    :members:
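
A minimal feature-extraction sketch (the sentence and checkpoint name are illustrative; the model returns a
tuple whose first element is the last hidden state):

.. code-block:: python

    import torch
    from transformers import RobertaTokenizer, RobertaModel

    tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
    model = RobertaModel.from_pretrained('roberta-base')
    model.eval()

    input_ids = torch.tensor([tokenizer.encode("Hello world",
                                               add_special_tokens=True)])
    with torch.no_grad():
        outputs = model(input_ids)

    # Hidden states of the last layer, of shape
    # (batch_size, sequence_length, hidden_size).
    last_hidden_state = outputs[0]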


RobertaForMaskedLM
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RobertaForMaskedLM
    :members:
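
A minimal masked-language-modeling sketch (sentence and checkpoint name are illustrative):

.. code-block:: python

    import torch
    from transformers import RobertaTokenizer, RobertaForMaskedLM

    tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
    model = RobertaForMaskedLM.from_pretrained('roberta-base')
    model.eval()

    # Mask one token with the tokenizer's mask token ("<mask>").
    text = "The capital of France is %s." % tokenizer.mask_token
    input_ids = torch.tensor([tokenizer.encode(text, add_special_tokens=True)])

    with torch.no_grad():
        outputs = model(input_ids)
    prediction_scores = outputs[0]  # (batch_size, sequence_length, vocab_size)

    # Most probable token at the masked position.
    mask_position = input_ids[0].tolist().index(tokenizer.mask_token_id)
    predicted_id = prediction_scores[0, mask_position].argmax().item()
    print(tokenizer.decode([predicted_id]))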


RobertaForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RobertaForSequenceClassification
    :members:
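
A minimal fine-tuning-style sketch (label and sentence are illustrative; the classification head on top of a
base checkpoint is newly initialized and must be fine-tuned before its predictions are meaningful):

.. code-block:: python

    import torch
    from transformers import RobertaTokenizer, RobertaForSequenceClassification

    tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
    model = RobertaForSequenceClassification.from_pretrained('roberta-base')

    input_ids = torch.tensor([tokenizer.encode("Hello, my dog is cute",
                                               add_special_tokens=True)])
    labels = torch.tensor([1])  # batch size 1

    # When labels are provided, the first two elements of the output
    # tuple are the loss and the classification logits.
    outputs = model(input_ids, labels=labels)
    loss, logits = outputs[:2]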


RobertaForMultipleChoice
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RobertaForMultipleChoice
    :members:


RobertaForTokenClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RobertaForTokenClassification
    :members:

TFRobertaModel
~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFRobertaModel
    :members:
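
The TensorFlow model mirrors the PyTorch usage; a minimal sketch (sentence and checkpoint name are
illustrative):

.. code-block:: python

    import tensorflow as tf
    from transformers import RobertaTokenizer, TFRobertaModel

    tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
    model = TFRobertaModel.from_pretrained('roberta-base')

    input_ids = tf.constant([tokenizer.encode("Hello world",
                                              add_special_tokens=True)])
    outputs = model(input_ids)

    # Hidden states of the last layer, of shape
    # (batch_size, sequence_length, hidden_size).
    last_hidden_state = outputs[0]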


TFRobertaForMaskedLM
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFRobertaForMaskedLM
    :members:


TFRobertaForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFRobertaForSequenceClassification
    :members:


TFRobertaForTokenClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFRobertaForTokenClassification
    :members: