Glossary
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

General terms
-----------------------------------------------------------------------------------------------------------------------

- autoencoding models: see MLM
- autoregressive models: see CLM
- CLM: causal language modeling, a pretraining task where the model reads the texts in order and has to predict the
  next word. It's usually done by reading the whole sentence but using a mask inside the model to hide the future
  tokens at a certain timestep.
- MLM: masked language modeling, a pretraining task where the model sees a corrupted version of the texts, usually done
  by masking some tokens randomly, and has to predict the original text (see the sketch after this list).
- multimodal: a task that combines texts with another kind of input (for instance images).
- NLG: natural language generation, all tasks related to generating text (for instance talking with transformers,
  translation).
- NLP: natural language processing, a generic way to say "deal with texts".
- NLU: natural language understanding, all tasks related to understanding what is in a text (for instance classifying
  the whole text, individual words)
- pretrained model: a model that has been pretrained on some data (for instance all of Wikipedia). Pretraining methods
  involve a self-supervised objective, which can be reading the text and trying to predict the next word (see CLM) or
  masking some words and trying to predict them (see MLM).
- RNN: recurrent neural network, a type of model that uses a loop over a layer to process texts.
- seq2seq or sequence-to-sequence: models that generate a new sequence from an input, like translation models, or
  summarization models (such as :doc:`Bart </model_doc/bart>` or :doc:`T5 </model_doc/t5>`).
- token: a part of a sentence, usually a word, but can also be a subword (less common words are often split into
  subwords) or a punctuation symbol.
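
To make the CLM / MLM distinction above concrete, here is a minimal sketch using the :func:`~transformers.pipeline`
API (the checkpoints are just illustrative choices):

.. code-block::

    >>> from transformers import pipeline

    >>> # MLM: the masked token is predicted from both its left and right context
    >>> fill_mask = pipeline("fill-mask", model="bert-base-uncased")
    >>> fill_mask("Paris is the [MASK] of France.")

    >>> # CLM: the model only looks at the previous tokens to predict the next ones
    >>> generator = pipeline("text-generation", model="gpt2")
    >>> generator("Paris is the capital of")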

Model inputs
-----------------------------------------------------------------------------------------------------------------------

Every model is different yet bears similarities with the others. Therefore most models use the same inputs, which are
detailed here alongside usage examples.

.. _input-ids:

Input IDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The input ids are often the only required parameters to be passed to the model as input. *They are token indices,
numerical representations of tokens building the sequences that will be used as input by the model*.

Each tokenizer works differently but the underlying mechanism remains the same. Here's an example using the BERT
tokenizer, which is a `WordPiece <https://arxiv.org/pdf/1609.08144.pdf>`__ tokenizer:

.. code-block::

    >>> from transformers import BertTokenizer
    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

    >>> sequence = "A Titan RTX has 24GB of VRAM"

The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary.

.. code-block::

    >>> tokenized_sequence = tokenizer.tokenize(sequence)

The tokens are either words or subwords. Here for instance, "VRAM" wasn't in the model vocabulary, so it's been split
into "V", "RA" and "M". To indicate those tokens are not separate words but parts of the same word, a double-hash
prefix is added for "RA" and "M":

.. code-block::

    >>> print(tokenized_sequence)
    ['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']

These tokens can then be converted into IDs which are understandable by the model. This can be done by directly feeding
the sentence to the tokenizer, which leverages the Rust implementation of
`huggingface/tokenizers <https://github.com/huggingface/tokenizers>`__ for peak performance.

.. code-block::

    >>> inputs = tokenizer(sequence)

The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The
token indices are under the key "input_ids":

.. code-block::

    >>> encoded_sequence = inputs["input_ids"]
    >>> print(encoded_sequence)
    [101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]
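
For comparison, the tokens from the previous step can also be mapped to IDs one by one with
:meth:`~transformers.PreTrainedTokenizer.convert_tokens_to_ids` (a minimal sketch; note that this lower-level method
does not add the special tokens described next):

.. code-block::

    >>> print(tokenizer.convert_tokens_to_ids(tokenized_sequence))
    [138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107]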

Note that the tokenizer automatically adds "special tokens" (if the associated model relies on them) which are special
IDs the model sometimes uses.

If we decode the previous sequence of ids,

.. code-block::

    >>> decoded_sequence = tokenizer.decode(encoded_sequence)

we will see

.. code-block::

    >>> print(decoded_sequence)
    [CLS] A Titan RTX has 24GB of VRAM [SEP]

because this is the way a :class:`~transformers.BertModel` is going to expect its inputs.
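
As a minimal sketch (assuming PyTorch is installed), the tokenizer can return tensors directly and the resulting
dictionary can be unpacked straight into the corresponding model:

.. code-block::

    >>> from transformers import BertModel

    >>> model = BertModel.from_pretrained("bert-base-cased")
    >>> pt_inputs = tokenizer(sequence, return_tensors="pt")
    >>> outputs = model(**pt_inputs)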

.. _attention-mask:

Attention mask
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The attention mask is an optional argument used when batching sequences together. This argument indicates to the
model which tokens should be attended to, and which should not.

For example, consider these two sequences:

.. code-block::

    >>> from transformers import BertTokenizer
    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

    >>> sequence_a = "This is a short sequence."
    >>> sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

    >>> encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
    >>> encoded_sequence_b = tokenizer(sequence_b)["input_ids"]

The encoded versions have different lengths:

.. code-block::

    >>> len(encoded_sequence_a), len(encoded_sequence_b)
    (8, 19)

Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length
of the second one, or the second one needs to be truncated down to the length of the first one.

In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask
it to pad like this:

.. code-block::

    >>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)

We can see that 0s have been added on the right of the first sentence to make it the same length as the second one:

.. code-block::

    >>> padded_sequences["input_ids"]
    [[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]

This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating
the position of the padded indices so that the model does not attend to them. For the
:class:`~transformers.BertTokenizer`, :obj:`1` indicates a value that should be attended to, while :obj:`0` indicates
a padded value. This attention mask is in the dictionary returned by the tokenizer under the key "attention_mask":

.. code-block::

    >>> padded_sequences["attention_mask"]
    [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
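
As a minimal sketch (assuming PyTorch is installed), asking the tokenizer for tensors directly returns the padded
``input_ids`` together with the matching ``attention_mask``, ready to be passed to a model:

.. code-block::

    >>> pt_batch = tokenizer([sequence_a, sequence_b], padding=True, return_tensors="pt")
    >>> pt_batch["input_ids"].shape, pt_batch["attention_mask"].shape
    (torch.Size([2, 19]), torch.Size([2, 19]))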

.. _token-type-ids:

Token Type IDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Some models' purpose is to do sequence classification on pairs of sentences or question answering. These require two
different sequences to be joined in a single "input_ids" entry, which is usually performed with the help of special
tokens, such as the classifier (``[CLS]``) and separator (``[SEP]``) tokens. For example, the BERT model builds its two
sequence input as such:

.. code-block::

    >>> # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]

We can use our tokenizer to automatically generate such an input by passing the two sequences to ``tokenizer`` as two
arguments (and not a list, like before) like this:

.. code-block::

    >>> from transformers import BertTokenizer
    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    >>> sequence_a = "HuggingFace is based in NYC"
    >>> sequence_b = "Where is HuggingFace based?"

    >>> encoded_dict = tokenizer(sequence_a, sequence_b)
    >>> decoded = tokenizer.decode(encoded_dict["input_ids"])

which will return:

.. code-block::

    >>> print(decoded)
    [CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]

This is enough for some models to understand where one sequence ends and where another begins. However, other models,
such as BERT, also make use of token type IDs (also called segment IDs). They are represented as a binary mask
identifying the two types of sequence in the model.

The tokenizer returns this mask as the "token_type_ids" entry:

.. code-block::

    >>> encoded_dict["token_type_ids"]
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]

The first sequence, the "context" used for the question, has all its tokens represented by a :obj:`0`, whereas the
second sequence, corresponding to the "question", has all its tokens represented by a :obj:`1`.

Some models, like :class:`~transformers.XLNetModel`, use an additional token represented by a :obj:`2`.
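
As a minimal sketch, you can inspect this behavior with the XLNet tokenizer (the output is not reproduced here):

.. code-block::

    >>> from transformers import XLNetTokenizer

    >>> xlnet_tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
    >>> xlnet_tokenizer(sequence_a, sequence_b)["token_type_ids"]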

.. _position-ids:

Position IDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Contrary to RNNs that have the position of each token embedded within them,
transformers are unaware of the position of each token. Therefore, the position IDs (``position_ids``) are used by the model to identify each token's position in the list of tokens.

They are an optional parameter. If no ``position_ids`` is passed to the model, the IDs are automatically created as absolute
positional embeddings.

Absolute positional embeddings are selected in the range ``[0, config.max_position_embeddings - 1]``. Some models
use other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings.
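
As a minimal sketch (assuming PyTorch is installed), explicit ``position_ids`` replicating the default absolute
positions can be built and passed to a model such as :class:`~transformers.BertModel`:

.. code-block::

    >>> import torch
    >>> from transformers import BertModel, BertTokenizer

    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    >>> model = BertModel.from_pretrained("bert-base-cased")
    >>> inputs = tokenizer("A Titan RTX has 24GB of VRAM", return_tensors="pt")
    >>> # one index per token, from 0 to sequence_length - 1, which is what the model would create by default
    >>> position_ids = torch.arange(inputs["input_ids"].shape[1]).unsqueeze(0)
    >>> outputs = model(**inputs, position_ids=position_ids)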

.. _labels:

Labels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The labels are an optional argument which can be passed in order for the model to compute the loss itself. These labels
should be the expected prediction of the model: it will use its standard loss to compute the loss between its
predictions and the expected value (the label).

These labels are different according to the model head, for example:

- For sequence classification models (e.g., :class:`~transformers.BertForSequenceClassification`), the model expects
  a tensor of dimension :obj:`(batch_size)` with each value of the batch corresponding to the expected label of the
  entire sequence (see the sketch after this list).
- For token classification models (e.g., :class:`~transformers.BertForTokenClassification`), the model expects
  a tensor of dimension :obj:`(batch_size, seq_length)` with each value corresponding to the expected label of each
  individual token.
- For masked language modeling (e.g., :class:`~transformers.BertForMaskedLM`), the model expects
  a tensor of dimension :obj:`(batch_size, seq_length)` with each value corresponding to the expected label of each
  individual token: the labels being the token ID for the masked token, and values to be ignored for the rest (usually
  -100).
- For sequence to sequence tasks (e.g., :class:`~transformers.BartForConditionalGeneration`,
  :class:`~transformers.MBartForConditionalGeneration`), the model expects a tensor of dimension
  :obj:`(batch_size, tgt_seq_length)` with each value corresponding to the target sequence associated with each
  input sequence. During training, both `BART` and `T5` will make the appropriate `decoder_input_ids` and decoder
  attention masks internally. They usually do not need to be supplied. This does not apply to models leveraging the
  Encoder-Decoder framework.

See the documentation of each model for more information on each specific model's labels.
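
To make the sequence classification case above concrete, here is a minimal sketch (assuming PyTorch is installed; the
checkpoint, number of labels and label value are only illustrative choices):

.. code-block::

    >>> import torch
    >>> from transformers import BertForSequenceClassification, BertTokenizer

    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    >>> model = BertForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
    >>> inputs = tokenizer("This is a short sequence.", return_tensors="pt")
    >>> labels = torch.tensor([1])  # one expected class index per sequence in the batch
    >>> outputs = model(**inputs, labels=labels)
    >>> loss = outputs.loss  # the loss computed by the model head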

The base models (e.g., :class:`~transformers.BertModel`) do not accept labels, as these are the base transformer models,
simply outputting features.

.. _decoder-input-ids:

Decoder input IDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This input is specific to encoder-decoder models, and contains the input IDs that will be fed to the decoder.
These inputs should be used for sequence to sequence tasks, such as translation or summarization, and are usually
built in a way specific to each model.

Most encoder-decoder models (BART, T5) create their :obj:`decoder_input_ids` on their own from the :obj:`labels`.
In such models, passing the :obj:`labels` is the preferred way to handle training.

Please check each model's docs to see how they handle these input IDs for sequence to sequence training.
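
As a minimal sketch of this preferred approach (assuming PyTorch is installed; the checkpoint and texts are only
illustrative), passing :obj:`labels` lets the model build the :obj:`decoder_input_ids` internally:

.. code-block::

    >>> from transformers import T5ForConditionalGeneration, T5Tokenizer

    >>> tokenizer = T5Tokenizer.from_pretrained("t5-small")
    >>> model = T5ForConditionalGeneration.from_pretrained("t5-small")
    >>> inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
    >>> labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt")["input_ids"]
    >>> # the decoder_input_ids and decoder attention mask are created from the labels internally
    >>> outputs = model(**inputs, labels=labels)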

.. _feed-forward-chunking:

Feed Forward Chunking
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In each residual attention block in transformers, the self-attention layer is usually followed by 2 feed forward
layers. The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model
(e.g., 3072 versus 768 for ``bert-base-uncased``).

For an input of size ``[batch_size, sequence_length]``, the memory required to store the intermediate feed forward
embeddings ``[batch_size, sequence_length, config.intermediate_size]`` can account for a large fraction of the memory
use. The authors of `Reformer: The Efficient Transformer <https://arxiv.org/abs/2001.04451>`_ noticed that since the
computation is independent of the ``sequence_length`` dimension, it is mathematically equivalent to compute the output
embeddings of both feed forward layers ``[batch_size, config.hidden_size]_0, ..., [batch_size, config.hidden_size]_n``
individually and concat them afterward to ``[batch_size, sequence_length, config.hidden_size]`` with
``n = sequence_length``, which trades increased computation time against reduced memory use, but yields a
mathematically **equivalent** result.

For models employing the function :func:`~.transformers.apply_chunking_to_forward`, the ``chunk_size`` defines the
number of output embeddings that are computed in parallel and thus defines the trade-off between memory and time
complexity.  If ``chunk_size`` is set to 0, no feed forward chunking is done.
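
As a minimal sketch, feed forward chunking can be enabled for a model that supports it through the
``chunk_size_feed_forward`` attribute of its configuration (the value of 64 is only an illustrative choice):

.. code-block::

    >>> from transformers import BertConfig, BertModel

    >>> config = BertConfig.from_pretrained("bert-base-uncased")
    >>> config.chunk_size_feed_forward = 64  # 0, the default, means no feed forward chunking
    >>> model = BertModel(config)  # builds a randomly initialized model from the modified config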