<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# RoBERTa-PreLayerNorm

## Overview

The RoBERTa-PreLayerNorm model was proposed in [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli.
It is equivalent to using the `--encoder-normalize-before` flag in [fairseq](https://fairseq.readthedocs.io/).

The abstract from the paper is the following:

*fairseq is an open-source sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. The toolkit is based on PyTorch and supports distributed training across multiple GPUs and machines. We also support fast mixed-precision training and inference on modern GPUs.*

This model was contributed by [andreasmadsen](https://huggingface.co/andreasmadsen).
The original code can be found [here](https://github.com/princeton-nlp/DinkyTrain).
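
As a quick sanity check, the pretrained checkpoints can be used through the standard `fill-mask` pipeline. A minimal sketch, assuming the `andreasmadsen/efficient_mlm_m0.40` checkpoint released alongside the DinkyTrain code:

```python
from transformers import pipeline

# Checkpoint name assumed here: one of the masked-LM checkpoints
# from the DinkyTrain release.
unmasker = pipeline("fill-mask", model="andreasmadsen/efficient_mlm_m0.40")

# RoBERTa-style tokenizers use `<mask>` as the mask token.
print(unmasker("Paris is the <mask> of France."))
```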

## Usage tips

- The implementation is the same as [RoBERTa](roberta), except that instead of _Add and Norm_ it uses _Norm and Add_, i.e. the layer normalization is applied before the sublayer rather than after the residual addition (see the sketch below this list). _Add_ and _Norm_ refer to the addition and layer normalization described in [Attention Is All You Need](https://arxiv.org/abs/1706.03762).
- This is identical to using the `--encoder-normalize-before` flag in [fairseq](https://fairseq.readthedocs.io/).
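
To make the distinction concrete, here is a minimal PyTorch sketch of the two residual-block orderings. This is an illustration only, not the library's actual implementation; `sublayer` stands in for either the self-attention or the feed-forward block:

```python
import torch
from torch import nn

def add_and_norm(x, sublayer, norm):
    # Post-LayerNorm (RoBERTa, original Transformer): add the residual first, then normalize.
    return norm(x + sublayer(x))

def norm_and_add(x, sublayer, norm):
    # Pre-LayerNorm (RoBERTa-PreLayerNorm): normalize first, then add the residual.
    return x + sublayer(norm(x))

hidden_states = torch.randn(1, 4, 16)  # (batch, sequence, hidden)
sublayer = nn.Linear(16, 16)           # stand-in for attention/FFN
norm = nn.LayerNorm(16)

print(add_and_norm(hidden_states, sublayer, norm).shape)  # torch.Size([1, 4, 16])
print(norm_and_add(hidden_states, sublayer, norm).shape)  # torch.Size([1, 4, 16])
```

Pre-LayerNorm keeps the residual path free of normalization, which tends to make deep Transformers easier to train.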

## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)

## RobertaPreLayerNormConfig

[[autodoc]] RobertaPreLayerNormConfig
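
A minimal sketch of instantiating a model from a fresh configuration (randomly initialized weights, default hyperparameters):

```python
from transformers import RobertaPreLayerNormConfig, RobertaPreLayerNormModel

# Initializing a RoBERTa-PreLayerNorm configuration with default values
configuration = RobertaPreLayerNormConfig()

# Initializing a (randomly weighted) model from the configuration
model = RobertaPreLayerNormModel(configuration)

# Accessing the model configuration
configuration = model.config
```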

<frameworkcontent>
<pt>

## RobertaPreLayerNormModel

[[autodoc]] RobertaPreLayerNormModel
    - forward

## RobertaPreLayerNormForCausalLM

[[autodoc]] RobertaPreLayerNormForCausalLM
    - forward

## RobertaPreLayerNormForMaskedLM

[[autodoc]] RobertaPreLayerNormForMaskedLM
    - forward

## RobertaPreLayerNormForSequenceClassification

[[autodoc]] RobertaPreLayerNormForSequenceClassification
    - forward

## RobertaPreLayerNormForMultipleChoice

[[autodoc]] RobertaPreLayerNormForMultipleChoice
    - forward

## RobertaPreLayerNormForTokenClassification

[[autodoc]] RobertaPreLayerNormForTokenClassification
    - forward

## RobertaPreLayerNormForQuestionAnswering

[[autodoc]] RobertaPreLayerNormForQuestionAnswering
    - forward

</pt>
<tf>

## TFRobertaPreLayerNormModel

[[autodoc]] TFRobertaPreLayerNormModel
    - call

## TFRobertaPreLayerNormForCausalLM

[[autodoc]] TFRobertaPreLayerNormForCausalLM
    - call

## TFRobertaPreLayerNormForMaskedLM

[[autodoc]] TFRobertaPreLayerNormForMaskedLM
    - call

## TFRobertaPreLayerNormForSequenceClassification

[[autodoc]] TFRobertaPreLayerNormForSequenceClassification
    - call

## TFRobertaPreLayerNormForMultipleChoice

[[autodoc]] TFRobertaPreLayerNormForMultipleChoice
    - call

## TFRobertaPreLayerNormForTokenClassification

[[autodoc]] TFRobertaPreLayerNormForTokenClassification
    - call

## TFRobertaPreLayerNormForQuestionAnswering

[[autodoc]] TFRobertaPreLayerNormForQuestionAnswering
    - call

</tf>
<jax>

## FlaxRobertaPreLayerNormModel

[[autodoc]] FlaxRobertaPreLayerNormModel
    - __call__

## FlaxRobertaPreLayerNormForCausalLM

[[autodoc]] FlaxRobertaPreLayerNormForCausalLM
    - __call__

## FlaxRobertaPreLayerNormForMaskedLM

[[autodoc]] FlaxRobertaPreLayerNormForMaskedLM
    - __call__

## FlaxRobertaPreLayerNormForSequenceClassification

[[autodoc]] FlaxRobertaPreLayerNormForSequenceClassification
    - __call__

## FlaxRobertaPreLayerNormForMultipleChoice

[[autodoc]] FlaxRobertaPreLayerNormForMultipleChoice
    - __call__

## FlaxRobertaPreLayerNormForTokenClassification

[[autodoc]] FlaxRobertaPreLayerNormForTokenClassification
    - __call__

## FlaxRobertaPreLayerNormForQuestionAnswering

[[autodoc]] FlaxRobertaPreLayerNormForQuestionAnswering
    - __call__

</jax>
</frameworkcontent>