.. 
    Copyright 2020 The HuggingFace Team. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
    the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
    an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
    specific language governing permissions and limitations under the License.

Glossary
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

General terms
-----------------------------------------------------------------------------------------------------------------------

- autoencoding models: see MLM
- autoregressive models: see CLM
- CLM: causal language modeling, a pretraining task where the model reads the texts in order and has to predict the
  next word. It's usually done by reading the whole sentence but using a mask inside the model to hide the future
  tokens at a certain timestep (a short sketch of such a mask follows this list).
- deep learning: machine learning algorithms that use neural networks with several layers.
- MLM: masked language modeling, a pretraining task where the model sees a corrupted version of the texts, usually done
  by masking some tokens randomly, and has to predict the original text.
- multimodal: a task that combines texts with another kind of input (for instance images).
- NLG: natural language generation, all tasks related to generating text (for instance talking with transformers,
  translation).
- NLP: natural language processing, a generic way to say "deal with texts".
- NLU: natural language understanding, all tasks related to understanding what is in a text (for instance classifying
  the whole text, individual words).
- pretrained model: a model that has been pretrained on some data (for instance all of Wikipedia). Pretraining methods
  involve a self-supervised objective, which can be reading the text and trying to predict the next word (see CLM) or
  masking some words and trying to predict them (see MLM).
- RNN: recurrent neural network, a type of model that uses a loop over a layer to process texts.
- self-attention: each element of the input finds out which other elements of the input it should attend to.
- seq2seq or sequence-to-sequence: models that generate a new sequence from an input, like translation models, or
  summarization models (such as :doc:`Bart </model_doc/bart>` or :doc:`T5 </model_doc/t5>`).
- token: a part of a sentence, usually a word, but can also be a subword (non-common words are often split into
  subwords) or a punctuation symbol.
- transformer: self-attention based deep learning model architecture.
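
As a quick illustration of the "mask inside the model" mentioned in the CLM entry above, here is a minimal PyTorch
sketch (purely illustrative, not tied to any particular model in the library) of a causal mask that hides future
tokens:

.. code-block::

    import torch

    # Causal (lower-triangular) mask for a sequence of 5 tokens: position i may
    # only attend to positions <= i, so future tokens are hidden at every timestep.
    seq_length = 5
    causal_mask = torch.tril(torch.ones(seq_length, seq_length, dtype=torch.bool))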

Model inputs
-----------------------------------------------------------------------------------------------------------------------

Every model is different yet bears similarities with the others. Therefore, most models use the same inputs, which are
detailed here alongside usage examples.

.. _input-ids:

Input IDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The input IDs are often the only required parameters to be passed to the model as input. *They are token indices,
numerical representations of tokens building the sequences that will be used as input by the model*.

.. raw:: html

   <iframe width="560" height="315" src="https://www.youtube.com/embed/VFp38yj8h3A" title="YouTube video player"
   frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope;
   picture-in-picture" allowfullscreen></iframe>

Each tokenizer works differently but the underlying mechanism remains the same. Here's an example using the BERT
tokenizer, which is a `WordPiece <https://arxiv.org/pdf/1609.08144.pdf>`__ tokenizer:

.. code-block::

    >>> from transformers import BertTokenizer
    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

    >>> sequence = "A Titan RTX has 24GB of VRAM"

The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary.

.. code-block::

    >>> tokenized_sequence = tokenizer.tokenize(sequence)

The tokens are either words or subwords. Here for instance, "VRAM" wasn't in the model vocabulary, so it's been split
in "V", "RA" and "M". To indicate those tokens are not separate words but parts of the same word, a double-hash prefix
is added for "RA" and "M":

.. code-block::

    >>> print(tokenized_sequence)
    ['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']

These tokens can then be converted into IDs which are understandable by the model. This can be done by directly feeding
the sentence to the tokenizer, which leverages the Rust implementation of `huggingface/tokenizers
<https://github.com/huggingface/tokenizers>`__ for peak performance.

.. code-block::

    >>> inputs = tokenizer(sequence)

The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The
token indices are under the key "input_ids":

.. code-block::

    >>> encoded_sequence = inputs["input_ids"]
    >>> print(encoded_sequence)
    [101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]

Note that the tokenizer automatically adds "special tokens" (if the associated model relies on them) which are special
IDs the model sometimes uses.

If we decode the previous sequence of IDs,

.. code-block::

    >>> decoded_sequence = tokenizer.decode(encoded_sequence)

we will see

.. code-block::

    >>> print(decoded_sequence)
    [CLS] A Titan RTX has 24GB of VRAM [SEP]

because this is the way a :class:`~transformers.BertModel` is going to expect its inputs.

.. _attention-mask:

Attention mask
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The attention mask is an optional argument used when batching sequences together.

.. raw:: html

   <iframe width="560" height="315" src="https://www.youtube.com/embed/M6adb1j2jPI" title="YouTube video player"
   frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope;
   picture-in-picture" allowfullscreen></iframe>

This argument indicates to the model which tokens should be attended to, and which should not.

For example, consider these two sequences:

.. code-block::

    >>> from transformers import BertTokenizer
    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

    >>> sequence_a = "This is a short sequence."
    >>> sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

    >>> encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
    >>> encoded_sequence_b = tokenizer(sequence_b)["input_ids"]

The encoded versions have different lengths:

.. code-block::

    >>> len(encoded_sequence_a), len(encoded_sequence_b)
    (8, 19)

Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length
of the second one, or the second one needs to be truncated down to the length of the first one.

In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask
it to pad like this:

.. code-block::

    >>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)

We can see that 0s have been added on the right of the first sentence to make it the same length as the second one:

.. code-block::

    >>> padded_sequences["input_ids"]
    [[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]

This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating the
position of the padded indices so that the model does not attend to them. For the :class:`~transformers.BertTokenizer`,
:obj:`1` indicates a value that should be attended to, while :obj:`0` indicates a padded value. This attention mask is
in the dictionary returned by the tokenizer under the key "attention_mask":

.. code-block::

    >>> padded_sequences["attention_mask"]
    [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
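
If tensors are needed directly, the tokenizer can be asked to return them and the resulting encoding can be passed to
the model; a minimal sketch (PyTorch and a BERT checkpoint assumed purely for illustration):

.. code-block::

    from transformers import BertModel, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    model = BertModel.from_pretrained("bert-base-cased")

    sequence_a = "This is a short sequence."
    sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

    # padding=True pads the shorter sequence, return_tensors="pt" returns PyTorch tensors
    padded = tokenizer([sequence_a, sequence_b], padding=True, return_tensors="pt")

    # Passing the attention mask along with the input IDs tells the model to
    # ignore the padded positions
    outputs = model(input_ids=padded["input_ids"], attention_mask=padded["attention_mask"])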

.. _token-type-ids:

Token Type IDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Some models' purpose is to classify pairs of sentences or to do question answering.

.. raw:: html

   <iframe width="560" height="315" src="https://www.youtube.com/embed/0u3ioSwev3s" title="YouTube video player"
   frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope;
   picture-in-picture" allowfullscreen></iframe>

These require two different sequences to be joined in a single "input_ids" entry, which usually is performed with the
help of special tokens, such as the classifier (``[CLS]``) and separator (``[SEP]``) tokens. For example, the BERT
model builds its two sequence input as such:

.. code-block::

    >>> # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]

We can use our tokenizer to automatically generate such a sentence by passing the two sequences to ``tokenizer`` as two
arguments (and not a list, like before):

.. code-block::

    >>> from transformers import BertTokenizer
    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    >>> sequence_a = "HuggingFace is based in NYC"
    >>> sequence_b = "Where is HuggingFace based?"

    >>> encoded_dict = tokenizer(sequence_a, sequence_b)
    >>> decoded = tokenizer.decode(encoded_dict["input_ids"])

which will return:

.. code-block::

    >>> print(decoded)
    [CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]

This is enough for some models to understand where one sequence ends and where another begins. However, other models,
such as BERT, also deploy token type IDs (also called segment IDs). They are represented as a binary mask identifying
the two types of sequence in the model.

The tokenizer returns this mask as the "token_type_ids" entry:

.. code-block::

    >>> encoded_dict['token_type_ids']
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]

The first sequence, the "context" used for the question, has all its tokens represented by a :obj:`0`, whereas the
second sequence, corresponding to the "question", has all its tokens represented by a :obj:`1`.
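
The whole encoding, including the token type IDs, can be handed to the model directly; a short sketch, reusing the BERT
checkpoint from above:

.. code-block::

    from transformers import BertModel, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    model = BertModel.from_pretrained("bert-base-cased")

    # Encoding a pair of sequences yields input_ids, token_type_ids and attention_mask
    encoded = tokenizer("HuggingFace is based in NYC", "Where is HuggingFace based?", return_tensors="pt")

    # The token type IDs are forwarded with the rest of the encoding
    outputs = model(**encoded)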

Some models, like :class:`~transformers.XLNetModel`, use an additional token represented by a :obj:`2`.

.. _position-ids:

Position IDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Contrary to RNNs, which have the position of each token embedded within them, transformers are unaware of the position of
each token. Therefore, the position IDs (``position_ids``) are used by the model to identify each token's position in
the list of tokens.

They are an optional parameter. If no ``position_ids`` are passed to the model, the IDs are automatically created as
absolute positional embeddings.

Absolute positional embeddings are selected in the range ``[0, config.max_position_embeddings - 1]``. Some models use
other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings.
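
Explicit ``position_ids`` are rarely needed, but they can be built and passed by hand; a minimal sketch (BERT assumed,
reproducing the absolute positions the model would create by default):

.. code-block::

    import torch
    from transformers import BertModel, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    model = BertModel.from_pretrained("bert-base-cased")

    inputs = tokenizer("A Titan RTX has 24GB of VRAM", return_tensors="pt")

    # Absolute positions 0, 1, ..., seq_length - 1, with a batch dimension added
    position_ids = torch.arange(inputs["input_ids"].shape[1]).unsqueeze(0)

    outputs = model(**inputs, position_ids=position_ids)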

.. _labels:

Labels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The labels are an optional argument which can be passed in order for the model to compute the loss itself. These labels
should be the expected predictions of the model: it will use its standard loss function to compute the loss between its
predictions and the expected values (the labels).

These labels differ according to the model head, for example (a minimal usage sketch follows this list):

- For sequence classification models (e.g., :class:`~transformers.BertForSequenceClassification`), the model expects a
  tensor of dimension :obj:`(batch_size)` with each value of the batch corresponding to the expected label of the
  entire sequence.
- For token classification models (e.g., :class:`~transformers.BertForTokenClassification`), the model expects a tensor
  of dimension :obj:`(batch_size, seq_length)` with each value corresponding to the expected label of each individual
  token.
- For masked language modeling (e.g., :class:`~transformers.BertForMaskedLM`), the model expects a tensor of dimension
  :obj:`(batch_size, seq_length)` with each value corresponding to the expected label of each individual token: the
  labels being the token ID for the masked token, and values to be ignored for the rest (usually -100).
- For sequence to sequence tasks (e.g., :class:`~transformers.BartForConditionalGeneration`,
  :class:`~transformers.MBartForConditionalGeneration`), the model expects a tensor of dimension :obj:`(batch_size,
  tgt_seq_length)` with each value corresponding to the target sequences associated with each input sequence. During
  training, both ``BART`` and ``T5`` will make the appropriate ``decoder_input_ids`` and decoder attention masks internally.
  They usually do not need to be supplied. This does not apply to models leveraging the Encoder-Decoder framework. See
  the documentation of each model for more information on each specific model's labels.
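
As a minimal sketch of the first case above, labels for sequence classification could be passed like this (the
checkpoint, ``num_labels`` and the label value are chosen purely for illustration):

.. code-block::

    import torch
    from transformers import BertForSequenceClassification, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    model = BertForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

    inputs = tokenizer("A Titan RTX has 24GB of VRAM", return_tensors="pt")

    # One label per sequence in the batch; here a single sequence with label 1
    labels = torch.tensor([1])

    # When labels are passed, the output contains the loss alongside the logits
    outputs = model(**inputs, labels=labels)
    loss = outputs.loss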

The base models (e.g., :class:`~transformers.BertModel`) do not accept labels, as these are the base transformer
models, simply outputting features.

.. _decoder-input-ids:

Decoder input IDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This input is specific to encoder-decoder models, and contains the input IDs that will be fed to the decoder. These
inputs should be used for sequence to sequence tasks, such as translation or summarization, and are usually built in a
way specific to each model.

Most encoder-decoder models (BART, T5) create their :obj:`decoder_input_ids` on their own from the :obj:`labels`. In
such models, passing the :obj:`labels` is the preferred way to handle training.
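
For instance, a short sketch with T5 (checkpoint and sentences chosen for illustration), where passing :obj:`labels` is
enough and the :obj:`decoder_input_ids` are built internally:

.. code-block::

    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
    labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

    # No decoder_input_ids are passed: the model shifts the labels to the right
    # internally to create them, then returns the loss
    outputs = model(input_ids=input_ids, labels=labels)
    loss = outputs.loss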

Please check each model's docs to see how they handle these input IDs for sequence to sequence training.

.. _feed-forward-chunking:

Feed Forward Chunking
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In each residual attention block in transformers, the self-attention layer is usually followed by 2 feed forward layers.
The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model (e.g.,
3072 vs. 768 for ``bert-base-uncased``).

For an input of size ``[batch_size, sequence_length]``, the memory required to store the intermediate feed forward
embeddings ``[batch_size, sequence_length, config.intermediate_size]`` can account for a large fraction of the memory
use. The authors of `Reformer: The Efficient Transformer <https://arxiv.org/abs/2001.04451>`_ noticed that since the
computation is independent of the ``sequence_length`` dimension, it is mathematically equivalent to compute the output
embeddings of both feed forward layers ``[batch_size, config.hidden_size]_0, ..., [batch_size, config.hidden_size]_n``
individually and concat them afterward to ``[batch_size, sequence_length, config.hidden_size]`` with ``n =
sequence_length``, which trades increased computation time against reduced memory use, but yields a mathematically
**equivalent** result.

For models employing the function :func:`~.transformers.apply_chunking_to_forward`, the ``chunk_size`` defines the
number of output embeddings that are computed in parallel and thus defines the trade-off between memory and time
complexity. If ``chunk_size`` is set to 0, no feed forward chunking is done.
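
To make the trade-off concrete, here is a framework-agnostic PyTorch sketch of the idea (not the library's actual
implementation) that chunks the feed forward computation along the sequence dimension:

.. code-block::

    import torch
    import torch.nn as nn

    hidden_size, intermediate_size = 768, 3072
    feed_forward = nn.Sequential(
        nn.Linear(hidden_size, intermediate_size),
        nn.GELU(),
        nn.Linear(intermediate_size, hidden_size),
    )

    hidden_states = torch.randn(2, 512, hidden_size)  # [batch_size, sequence_length, hidden_size]

    # Unchunked: materializes a [2, 512, 3072] intermediate tensor
    full_output = feed_forward(hidden_states)

    # Chunked: process chunk_size positions at a time and concatenate afterward;
    # the largest intermediate tensor is only [2, chunk_size, 3072]
    chunk_size = 64
    chunks = [feed_forward(chunk) for chunk in hidden_states.split(chunk_size, dim=1)]
    chunked_output = torch.cat(chunks, dim=1)

    # Mathematically equivalent, up to floating point rounding
    assert torch.allclose(full_output, chunked_output, atol=1e-6)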