.. 
    Copyright 2020 The HuggingFace Team. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
    the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
    an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
    specific language governing permissions and limitations under the License.

Glossary
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

General terms
-----------------------------------------------------------------------------------------------------------------------

- autoencoding models: see MLM
- autoregressive models: see CLM
- CLM: causal language modeling, a pretraining task where the model reads the texts in order and has to predict the
  next word. It's usually done by reading the whole sentence but using a mask inside the model to hide the future
  tokens at a certain timestep.
- MLM: masked language modeling, a pretraining task where the model sees a corrupted version of the texts, usually done
  by masking some tokens randomly, and has to predict the original text.
- multimodal: a task that combines texts with another kind of input (for instance images).
- NLG: natural language generation, all tasks related to generating text (for instance talking with transformers,
  translation)
- NLP: natural language processing, a generic way to say "deal with texts".
- NLU: natural language understanding, all tasks related to understanding what is in a text (for instance classifying
  the whole text, or individual words)
- pretrained model: a model that has been pretrained on some data (for instance all of Wikipedia). Pretraining methods
  involve a self-supervised objective, which can be reading the text and trying to predict the next word (see CLM) or
  masking some words and trying to predict them (see MLM).
- RNN: recurrent neural network, a type of model that uses a loop over a layer to process texts.
- seq2seq or sequence-to-sequence: models that generate a new sequence from an input, like translation models, or
  summarization models (such as :doc:`Bart </model_doc/bart>` or :doc:`T5 </model_doc/t5>`).
- token: a part of a sentence, usually a word, but can also be a subword (non-common words are often split into
  subwords) or a punctuation symbol.

Model inputs
-----------------------------------------------------------------------------------------------------------------------

Every model is different yet bears similarities to the others. Therefore, most models use the same inputs, which are
detailed here alongside usage examples.

.. _input-ids:

Input IDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The input IDs are often the only required parameters to be passed to the model as input. *They are token indices,
numerical representations of tokens building the sequences that will be used as input by the model*.

Each tokenizer works differently but the underlying mechanism remains the same. Here's an example using the BERT
tokenizer, which is a `WordPiece <https://arxiv.org/pdf/1609.08144.pdf>`__ tokenizer:

.. code-block::

    >>> from transformers import BertTokenizer
    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

    >>> sequence = "A Titan RTX has 24GB of VRAM"

The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary.

.. code-block::

    >>> tokenized_sequence = tokenizer.tokenize(sequence)

The tokens are either words or subwords. Here for instance, "VRAM" wasn't in the model vocabulary, so it's been split
into "V", "RA" and "M". To indicate those tokens are not separate words but parts of the same word, a double-hash
prefix is added for "RA" and "M":

.. code-block::

    >>> print(tokenized_sequence)
    ['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']

These tokens can then be converted into IDs which are understandable by the model. This can be done by directly feeding
the sentence to the tokenizer, which leverages the Rust implementation of `huggingface/tokenizers
<https://github.com/huggingface/tokenizers>`__ for peak performance.

.. code-block::

    >>> inputs = tokenizer(sequence)

The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The
token indices are under the key "input_ids":

.. code-block::

    >>> encoded_sequence = inputs["input_ids"]
    >>> print(encoded_sequence)
    [101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]

Note that the tokenizer automatically adds "special tokens" (if the associated model relies on them), which are
special IDs the model expects, such as BERT's classifier (``[CLS]``) and separator (``[SEP]``) tokens.

If we decode the previous sequence of ids,

.. code-block::

    >>> decoded_sequence = tokenizer.decode(encoded_sequence)

we will see

.. code-block::

    >>> print(decoded_sequence)
    [CLS] A Titan RTX has 24GB of VRAM [SEP]

because this is the way a :class:`~transformers.BertModel` is going to expect its inputs.
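
If you want to inspect an encoding without these special tokens, the tokenizer accepts ``add_special_tokens=False`` (a
quick sketch, reusing the tokenizer and sequence from above):

.. code-block::

    >>> inputs_without_special = tokenizer(sequence, add_special_tokens=False)
    >>> print(tokenizer.decode(inputs_without_special["input_ids"]))
    A Titan RTX has 24GB of VRAM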

.. _attention-mask:

Attention mask
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The attention mask is an optional argument used when batching sequences together. This argument indicates to the model
which tokens should be attended to, and which should not.

For example, consider these two sequences:

.. code-block::

    >>> from transformers import BertTokenizer
    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

    >>> sequence_a = "This is a short sequence."
    >>> sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

    >>> encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
    >>> encoded_sequence_b = tokenizer(sequence_b)["input_ids"]

The encoded versions have different lengths:

.. code-block::

    >>> len(encoded_sequence_a), len(encoded_sequence_b)
    (8, 19)

Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length
of the second one, or the second one needs to be truncated down to the length of the first one.

In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask
it to pad like this:

.. code-block::

    >>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)

We can see that 0s have been added on the right of the first sentence to make it the same length as the second one:

.. code-block::

    >>> padded_sequences["input_ids"]
    [[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]

This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating the
position of the padded indices so that the model does not attend to them. For the :class:`~transformers.BertTokenizer`,
:obj:`1` indicates a value that should be attended to, while :obj:`0` indicates a padded value. This attention mask is
in the dictionary returned by the tokenizer under the key "attention_mask":

.. code-block::

    >>> padded_sequences["attention_mask"]
    [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
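
These padded and masked inputs are then ready to be fed to a model. Here is a minimal sketch, assuming PyTorch is
installed and reusing the tokenizer and sequences from above:

.. code-block::

    >>> from transformers import BertModel

    >>> model = BertModel.from_pretrained("bert-base-cased")
    >>> # return_tensors="pt" makes the tokenizer return PyTorch tensors instead of Python lists
    >>> batch = tokenizer([sequence_a, sequence_b], padding=True, return_tensors="pt")
    >>> outputs = model(**batch)  # the attention mask tells the model to ignore the padded positions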

.. _token-type-ids:

Token Type IDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Some models' purpose is to do sequence classification or question answering. These require two different sequences to
be joined in a single "input_ids" entry, which is usually done with the help of special tokens, such as the classifier
(``[CLS]``) and separator (``[SEP]``) tokens. For example, the BERT model builds its two-sequence input as follows:

.. code-block::

    >>> # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]

We can use our tokenizer to automatically generate such an input by passing the two sequences to ``tokenizer`` as two
arguments (and not a list, like before):

.. code-block::

    >>> from transformers import BertTokenizer
    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    >>> sequence_a = "HuggingFace is based in NYC"
    >>> sequence_b = "Where is HuggingFace based?"

    >>> encoded_dict = tokenizer(sequence_a, sequence_b)
    >>> decoded = tokenizer.decode(encoded_dict["input_ids"])

which will return:

.. code-block::

    >>> print(decoded)
    [CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]

This is enough for some models to understand where one sequence ends and where another begins. However, other models,
such as BERT, also use token type IDs (also called segment IDs). They are represented as a binary mask identifying the
two types of sequence in the model.

The tokenizer returns this mask as the "token_type_ids" entry:

.. code-block::

    >>> encoded_dict['token_type_ids']
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]

The first sequence, the "context" used for the question, has all its tokens represented by a :obj:`0`, whereas the
second sequence, corresponding to the "question", has all its tokens represented by a :obj:`1`.

Some models, like :class:`~transformers.XLNetModel`, use an additional token represented by a :obj:`2`.
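
Since the tokenizer returns everything the corresponding model needs, the whole encoding can be passed along in a
single call. A minimal sketch, assuming PyTorch is installed and reusing the tokenizer and sequences from above:

.. code-block::

    >>> from transformers import BertModel

    >>> model = BertModel.from_pretrained("bert-base-cased")
    >>> encoded = tokenizer(sequence_a, sequence_b, return_tensors="pt")
    >>> outputs = model(**encoded)  # input_ids, token_type_ids and attention_mask are all passed along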

.. _position-ids:

Position IDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Contrary to RNNs that have the position of each token embedded within them, transformers are unaware of the position of
each token. Therefore, the position IDs (``position_ids``) are used by the model to identify each token's position in
the list of tokens.

They are an optional parameter. If no ``position_ids`` is passed to the model, the IDs are automatically created as
absolute positional embeddings.

Absolute positional embeddings are selected in the range ``[0, config.max_position_embeddings - 1]``. Some models use
other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings.
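
To make this concrete, here is a minimal sketch that builds the default absolute position IDs by hand and passes them
explicitly, assuming PyTorch and a BERT checkpoint (the model would create the same IDs on its own if they were
omitted):

.. code-block::

    >>> import torch
    >>> from transformers import BertModel, BertTokenizer

    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    >>> model = BertModel.from_pretrained("bert-base-cased")

    >>> inputs = tokenizer("A Titan RTX has 24GB of VRAM", return_tensors="pt")
    >>> # Default absolute positions: 0, 1, ..., seq_length - 1, repeated for each batch element
    >>> position_ids = torch.arange(inputs["input_ids"].shape[1]).unsqueeze(0)
    >>> outputs = model(**inputs, position_ids=position_ids)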

.. _labels:

Labels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The labels are an optional argument which can be passed in order for the model to compute the loss itself. These labels
should be the expected prediction of the model: it will use its standard loss function to compute the loss between its
predictions and the expected value (the label).

These labels are different according to the model head, for example:

- For sequence classification models (e.g., :class:`~transformers.BertForSequenceClassification`), the model expects a
  tensor of dimension :obj:`(batch_size)` with each value of the batch corresponding to the expected label of the
  entire sequence (see the sketch after this list).
- For token classification models (e.g., :class:`~transformers.BertForTokenClassification`), the model expects a tensor
  of dimension :obj:`(batch_size, seq_length)` with each value corresponding to the expected label of each individual
  token.
- For masked language modeling (e.g., :class:`~transformers.BertForMaskedLM`), the model expects a tensor of dimension
  :obj:`(batch_size, seq_length)` with each value corresponding to the expected label of each individual token: the
  labels being the token ID for the masked token, and values to be ignored for the rest (usually -100).
- For sequence to sequence tasks (e.g., :class:`~transformers.BartForConditionalGeneration`,
  :class:`~transformers.MBartForConditionalGeneration`), the model expects a tensor of dimension :obj:`(batch_size,
  tgt_seq_length)` with each value corresponding to the target sequences associated with each input sequence. During
  training, both `BART` and `T5` will make the appropriate `decoder_input_ids` and decoder attention masks internally.
  They usually do not need to be supplied. This does not apply to models leveraging the Encoder-Decoder framework. See
  the documentation of each model for more information on each specific model's labels.
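
As a concrete illustration, here is a minimal sketch of the sequence classification case, assuming PyTorch (the label
value ``1`` is arbitrary):

.. code-block::

    >>> import torch
    >>> from transformers import BertForSequenceClassification, BertTokenizer

    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    >>> model = BertForSequenceClassification.from_pretrained("bert-base-cased")

    >>> inputs = tokenizer("HuggingFace is based in NYC", return_tensors="pt")
    >>> labels = torch.tensor([1])  # one expected label per sequence in the batch
    >>> outputs = model(**inputs, labels=labels)
    >>> loss = outputs.loss  # the model computes the loss against the labels for us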

The base models (e.g., :class:`~transformers.BertModel`) do not accept labels, as these are the base transformer
models, simply outputting features.

.. _decoder-input-ids:

Decoder input IDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This input is specific to encoder-decoder models, and contains the input IDs that will be fed to the decoder. These
inputs should be used for sequence to sequence tasks, such as translation or summarization, and are usually built in a
way specific to each model.

Most encoder-decoder models (BART, T5) create their :obj:`decoder_input_ids` on their own from the :obj:`labels`. In
such models, passing the :obj:`labels` is the preferred way to handle training.
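
For instance, here is a minimal sketch with BART, assuming PyTorch is installed (the source/target pair is purely
illustrative): only the :obj:`labels` are passed, and the model derives the :obj:`decoder_input_ids` from them
internally:

.. code-block::

    >>> from transformers import BartForConditionalGeneration, BartTokenizer

    >>> tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
    >>> model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

    >>> inputs = tokenizer("UN Chief says there is no military solution in Syria", return_tensors="pt")
    >>> labels = tokenizer("No military solution in Syria, says UN Chief", return_tensors="pt")["input_ids"]
    >>> # No decoder_input_ids are passed: the model builds them from the labels
    >>> outputs = model(input_ids=inputs["input_ids"], labels=labels)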

Please check each model's docs to see how they handle these input IDs for sequence to sequence training.

.. _feed-forward-chunking:

Feed Forward Chunking
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In each residual attention block in transformers, the self-attention layer is usually followed by 2 feed forward
layers.
The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model (e.g.,
3072 versus 768 for ``bert-base-uncased``).

For an input of size ``[batch_size, sequence_length]``, the memory required to store the intermediate feed forward
embeddings ``[batch_size, sequence_length, config.intermediate_size]`` can account for a large fraction of the memory
use. The authors of `Reformer: The Efficient Transformer <https://arxiv.org/abs/2001.04451>`_ noticed that since the
computation is independent of the ``sequence_length`` dimension, it is mathematically equivalent to compute the output
embeddings of both feed forward layers ``[batch_size, config.hidden_size]_0, ..., [batch_size, config.hidden_size]_n``
individually and concatenate them afterward to ``[batch_size, sequence_length, config.hidden_size]`` with ``n =
sequence_length``, which trades increased computation time against reduced memory use, but yields a mathematically
**equivalent** result.

For models employing the function :func:`~transformers.apply_chunking_to_forward`, the ``chunk_size`` defines the
number of output embeddings that are computed in parallel and thus defines the trade-off between memory and time
complexity. If ``chunk_size`` is set to 0, no feed forward chunking is done.
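
As a rough sketch of the mechanics, assuming PyTorch and that the function is importable from
``transformers.modeling_utils`` (the linear layer below is a stand-in for the feed forward layers of a real
transformer block):

.. code-block::

    >>> import torch
    >>> from transformers.modeling_utils import apply_chunking_to_forward

    >>> feed_forward = torch.nn.Linear(16, 16)  # stand-in feed forward with hidden size 16

    >>> def forward_chunk(hidden_states):
    ...     return feed_forward(hidden_states)

    >>> hidden_states = torch.randn(2, 100, 16)  # [batch_size, sequence_length, hidden_size]
    >>> # chunk_size=25 splits the sequence dimension (dim 1) into four chunks processed one after the other
    >>> output = apply_chunking_to_forward(forward_chunk, 25, 1, hidden_states)
    >>> output.shape
    torch.Size([2, 100, 16])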