"test/srt/vscode:/vscode.git/clone" did not exist on "0ac61146947ad5bb202ce08a81431eb0daf43aef"
longformer.rst 10.2 KB
Newer Older
Sylvain Gugger's avatar
Sylvain Gugger committed
1
2
3
4
5
6
7
8
9
10
11
12
.. 
    Copyright 2020 The HuggingFace Team. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
    the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
    an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
    specific language governing permissions and limitations under the License.

Longformer
-----------------------------------------------------------------------------------------------------------------------

**DISCLAIMER:** This model is still a work in progress; if you see something strange, file a `GitHub Issue
<https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title>`__.

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Longformer model was presented in `Longformer: The Long-Document Transformer
<https://arxiv.org/pdf/2004.05150.pdf>`__ by Iz Beltagy, Matthew E. Peters, and Arman Cohan.

The abstract from the paper is the following:

*Transformer-based models are unable to process long sequences due to their self-attention operation, which scales
quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention
mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or
longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local
windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we
evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In
contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our
pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on
WikiHop and TriviaQA.*

Tips:

- Since the Longformer is based on RoBERTa, it doesn't have :obj:`token_type_ids`. You don't need to indicate which
  token belongs to which segment. Just separate your segments with the separation token :obj:`tokenizer.sep_token` (or
  :obj:`</s>`), as in the sketch below.
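
A minimal sketch of this input format (``allenai/longformer-base-4096`` is the published checkpoint; the segment texts
are made up):

.. code-block::

    from transformers import LongformerTokenizer

    tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')

    # No token_type_ids: the two segments are simply joined with the </s> separator token
    text = 'First segment' + tokenizer.sep_token + 'Second segment'
    input_ids = tokenizer.encode(text, return_tensors='pt')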

This model was contributed by `beltagy <https://huggingface.co/beltagy>`__. The authors' code can be found `here
<https://github.com/allenai/longformer>`__.

Longformer Self Attention
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Longformer self attention employs self attention on both a "local" context and a "global" context. Most tokens only
attend "locally" to each other, meaning that each token attends to its :math:`\frac{1}{2} w` previous tokens and
:math:`\frac{1}{2} w` succeeding tokens, with :math:`w` being the window length as defined in
:obj:`config.attention_window`. Note that :obj:`config.attention_window` can be of type :obj:`List` to define a
different :math:`w` for each layer. A selected few tokens attend "globally" to all other tokens, as it is
conventionally done for all tokens in :obj:`BertSelfAttention`.
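
For illustration, a minimal sketch of both variants (:obj:`attention_window` and :obj:`num_hidden_layers` are actual
:class:`~transformers.LongformerConfig` parameters; the window sizes below are made up):

.. code-block::

    from transformers import LongformerConfig

    # A single window size shared by all layers...
    config = LongformerConfig(attention_window=512)

    # ...or one window size per layer (here: 12 layers, with smaller windows in the lower layers)
    config = LongformerConfig(
        attention_window=[32, 32, 64, 64, 128, 128, 256, 256, 512, 512, 512, 512],
        num_hidden_layers=12,
    )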

Note that "locally" and "globally" attending tokens are projected by different query, key and value matrices. Also note
that every "locally" attending token not only attends to tokens within its window :math:`w`, but also to all "globally"
attending tokens so that global attention is *symmetric*.

The user can define which tokens attend "locally" and which tokens attend "globally" by appropriately setting the
tensor :obj:`global_attention_mask` at run time. All Longformer models employ the following logic for
:obj:`global_attention_mask`:

- 0: the token attends "locally",
- 1: the token attends "globally".

For more information, please also refer to the :meth:`~transformers.LongformerModel.forward` method.
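
As a minimal sketch (``allenai/longformer-base-4096`` is the published checkpoint; which tokens receive global
attention is task specific, and the choice below is purely illustrative):

.. code-block::

    import torch
    from transformers import LongformerModel, LongformerTokenizer

    tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
    model = LongformerModel.from_pretrained('allenai/longformer-base-4096')

    input_ids = tokenizer.encode(' '.join(['Hello world!'] * 512), return_tensors='pt')

    # 0 -> the token attends "locally", 1 -> the token attends "globally"
    global_attention_mask = torch.zeros(input_ids.shape, dtype=torch.long)
    global_attention_mask[:, 0] = 1  # e.g. give the first (<s>) token global attention

    outputs = model(input_ids, global_attention_mask=global_attention_mask)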

Using Longformer self attention, the memory and time complexity of the query-key matmul operation, which usually
represents the memory and time bottleneck, can be reduced from :math:`\mathcal{O}(n_s \times n_s)` to
:math:`\mathcal{O}(n_s \times w)`, with :math:`n_s` being the sequence length and :math:`w` being the average window
size. It is assumed that the number of "globally" attending tokens is insignificant as compared to the number of
"locally" attending tokens.

For more information, please refer to the official `paper <https://arxiv.org/pdf/2004.05150.pdf>`__.


Training
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:class:`~transformers.LongformerForMaskedLM` is trained the exact same way :class:`~transformers.RobertaForMaskedLM` is
trained and should be used as follows:

.. code-block::

    from transformers import LongformerForMaskedLM, LongformerTokenizer

    tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
    model = LongformerForMaskedLM.from_pretrained('allenai/longformer-base-4096')

    # Longformer uses the RoBERTa-style <mask> token rather than [MASK]
    input_ids = tokenizer.encode('This is a sentence from <mask> training data', return_tensors='pt')
    mlm_labels = tokenizer.encode('This is a sentence from the training data', return_tensors='pt')

    loss = model(input_ids, labels=mlm_labels).loss


LongformerConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.LongformerConfig
    :members:


LongformerTokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.LongformerTokenizer
    :members: 


LongformerTokenizerFast
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.LongformerTokenizerFast
    :members: 

Longformer specific outputs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.models.longformer.modeling_longformer.LongformerBaseModelOutput
    :members: 

.. autoclass:: transformers.models.longformer.modeling_longformer.LongformerBaseModelOutputWithPooling
    :members: 

.. autoclass:: transformers.models.longformer.modeling_longformer.LongformerMaskedLMOutput
    :members: 

.. autoclass:: transformers.models.longformer.modeling_longformer.LongformerQuestionAnsweringModelOutput
    :members: 

.. autoclass:: transformers.models.longformer.modeling_longformer.LongformerSequenceClassifierOutput
    :members: 

.. autoclass:: transformers.models.longformer.modeling_longformer.LongformerMultipleChoiceModelOutput
    :members: 

.. autoclass:: transformers.models.longformer.modeling_longformer.LongformerTokenClassifierOutput
    :members: 

.. autoclass:: transformers.models.longformer.modeling_tf_longformer.TFLongformerBaseModelOutput
    :members: 

.. autoclass:: transformers.models.longformer.modeling_tf_longformer.TFLongformerBaseModelOutputWithPooling
    :members: 

.. autoclass:: transformers.models.longformer.modeling_tf_longformer.TFLongformerMaskedLMOutput
    :members: 

.. autoclass:: transformers.models.longformer.modeling_tf_longformer.TFLongformerQuestionAnsweringModelOutput
    :members: 

.. autoclass:: transformers.models.longformer.modeling_tf_longformer.TFLongformerSequenceClassifierOutput
    :members: 

.. autoclass:: transformers.models.longformer.modeling_tf_longformer.TFLongformerMultipleChoiceModelOutput
    :members: 

.. autoclass:: transformers.models.longformer.modeling_tf_longformer.TFLongformerTokenClassifierOutput
    :members: 

LongformerModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.LongformerModel
    :members: forward


LongformerForMaskedLM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.LongformerForMaskedLM
    :members: forward


LongformerForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.LongformerForSequenceClassification
    :members: forward


LongformerForMultipleChoice
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.LongformerForMultipleChoice
    :members: forward


LongformerForTokenClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.LongformerForTokenClassification
    :members: forward


LongformerForQuestionAnswering
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.LongformerForQuestionAnswering
    :members: forward


TFLongformerModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFLongformerModel
    :members: call


TFLongformerForMaskedLM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFLongformerForMaskedLM
    :members: call


TFLongformerForQuestionAnswering
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFLongformerForQuestionAnswering
    :members: call


TFLongformerForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFLongformerForSequenceClassification
    :members: call


TFLongformerForTokenClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFLongformerForTokenClassification
    :members: call


TFLongformerForMultipleChoice
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFLongformerForMultipleChoice
    :members: call