.. 
    Copyright 2020 The HuggingFace Team. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
    the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
    an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
    specific language governing permissions and limitations under the License.

Glossary
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

General terms
-----------------------------------------------------------------------------------------------------------------------

- autoencoding models: see MLM
- autoregressive models: see CLM
- CLM: causal language modeling, a pretraining task where the model reads the texts in order and has to predict the
  next word. It's usually done by reading the whole sentence but using a mask inside the model to hide the future
  tokens at a certain timestep.
- deep learning: machine learning algorithms which use neural networks with several layers.
- MLM: masked language modeling, a pretraining task where the model sees a corrupted version of the texts, usually done
  by masking some tokens randomly, and has to predict the original text.
- multimodal: a task that combines texts with another kind of input (for instance images).
- NLG: natural language generation, all tasks related to generating text (for instance chatting with transformers,
  translation).
- NLP: natural language processing, a generic way to say "deal with texts".
- NLU: natural language understanding, all tasks related to understanding what is in a text (for instance classifying
  the whole text, individual words).
- pretrained model: a model that has been pretrained on some data (for instance all of Wikipedia). Pretraining methods
  involve a self-supervised objective, which can be reading the text and trying to predict the next word (see CLM) or
  masking some words and trying to predict them (see MLM).
- RNN: recurrent neural network, a type of model that uses a loop over a layer to process texts.
- self-attention: each element of the input finds out which other elements of the input it should attend to.
- seq2seq or sequence-to-sequence: models that generate a new sequence from an input, like translation models, or
  summarization models (such as :doc:`Bart </model_doc/bart>` or :doc:`T5 </model_doc/t5>`).
- token: a part of a sentence, usually a word, but can also be a subword (non-common words are often split in subwords)
  or a punctuation symbol.
- transformer: self-attention based deep learning model architecture.

Model inputs
-----------------------------------------------------------------------------------------------------------------------

Every model is different yet bears similarities with the others. Therefore most models use the same inputs, which are
detailed here alongside usage examples.

.. _input-ids:

Input IDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The input ids are often the only required parameters to be passed to the model as input. *They are token indices,
numerical representations of tokens building the sequences that will be used as input by the model*.

Each tokenizer works differently but the underlying mechanism remains the same. Here's an example using the BERT
tokenizer, which is a `WordPiece <https://arxiv.org/pdf/1609.08144.pdf>`__ tokenizer:

.. code-block::

    >>> from transformers import BertTokenizer
    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

    >>> sequence = "A Titan RTX has 24GB of VRAM"

The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary.

.. code-block::

    >>> tokenized_sequence = tokenizer.tokenize(sequence)

The tokens are either words or subwords. Here for instance, "VRAM" wasn't in the model vocabulary, so it's been split
into "V", "RA" and "M". To indicate those tokens are not separate words but parts of the same word, a double-hash
prefix is added for "RA" and "M":

.. code-block::

    >>> print(tokenized_sequence)
    ['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']

These tokens can then be converted into IDs which are understandable by the model. This can be done by directly feeding
the sentence to the tokenizer, which leverages the Rust implementation of `huggingface/tokenizers
<https://github.com/huggingface/tokenizers>`__ for peak performance.

.. code-block::

    >>> inputs = tokenizer(sequence)

The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The
token indices are under the key "input_ids":

.. code-block::

    >>> encoded_sequence = inputs["input_ids"]
    >>> print(encoded_sequence)
    [101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]

Note that the tokenizer automatically adds "special tokens" (if the associated model relies on them) which are special
IDs the model sometimes uses.

If we decode the previous sequence of ids,

.. code-block::

    >>> decoded_sequence = tokenizer.decode(encoded_sequence)

we will see

.. code-block::

    >>> print(decoded_sequence)
    [CLS] A Titan RTX has 24GB of VRAM [SEP]

because this is the way a :class:`~transformers.BertModel` is going to expect its inputs.

.. _attention-mask:

Attention mask
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The attention mask is an optional argument used when batching sequences together. This argument indicates to the model
which tokens should be attended to, and which should not.

For example, consider these two sequences:

.. code-block::

    >>> from transformers import BertTokenizer
    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

    >>> sequence_a = "This is a short sequence."
    >>> sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

    >>> encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
    >>> encoded_sequence_b = tokenizer(sequence_b)["input_ids"]

The encoded versions have different lengths:

.. code-block::

    >>> len(encoded_sequence_a), len(encoded_sequence_b)
    (8, 19)

Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length
of the second one, or the second one needs to be truncated down to the length of the first one.

In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask
it to pad like this:

.. code-block::

    >>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)

We can see that 0s have been added on the right of the first sentence to make it the same length as the second one:

.. code-block::

    >>> padded_sequences["input_ids"]
    [[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]

This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating the
position of the padded indices so that the model does not attend to them. For the :class:`~transformers.BertTokenizer`,
:obj:`1` indicates a value that should be attended to, while :obj:`0` indicates a padded value. This attention mask is
in the dictionary returned by the tokenizer under the key "attention_mask":

.. code-block::

    >>> padded_sequences["attention_mask"]
    [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
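
The tokenizer can also return framework-specific tensors directly through the ``return_tensors`` argument, so the
padded IDs and the attention mask are ready to be fed to the model. A minimal sketch, assuming PyTorch is installed:

.. code-block::

    >>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True, return_tensors="pt")
    >>> padded_sequences["input_ids"].shape
    torch.Size([2, 19])
    >>> padded_sequences["attention_mask"].shape
    torch.Size([2, 19])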

.. _token-type-ids:

Token Type IDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Some models' purpose is to do sequence classification or question answering. These require two different sequences to
be joined in a single "input_ids" entry, which usually is performed with the help of special tokens, such as the
classifier (``[CLS]``) and separator (``[SEP]``) tokens. For example, the BERT model builds its two sequence input as
such:

.. code-block::

    >>> # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]

We can use our tokenizer to automatically generate such a sentence by passing the two sequences to ``tokenizer`` as two
arguments (and not a list, like before) like this:

.. code-block::

    >>> from transformers import BertTokenizer
    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    >>> sequence_a = "HuggingFace is based in NYC"
    >>> sequence_b = "Where is HuggingFace based?"

    >>> encoded_dict = tokenizer(sequence_a, sequence_b)
    >>> decoded = tokenizer.decode(encoded_dict["input_ids"])

which will return:

.. code-block::

    >>> print(decoded)
    [CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]

This is enough for some models to understand where one sequence ends and where another begins. However, other models,
such as BERT, also deploy token type IDs (also called segment IDs). They are represented as a binary mask identifying
the two types of sequence in the model.

The tokenizer returns this mask as the "token_type_ids" entry:

.. code-block::

    >>> encoded_dict['token_type_ids']
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]

The first sequence, the "context" used for the question, has all its tokens represented by a :obj:`0`, whereas the
second sequence, corresponding to the "question", has all its tokens represented by a :obj:`1`.

Some models, like :class:`~transformers.XLNetModel`, use an additional token represented by a :obj:`2`.

.. _position-ids:

Position IDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Contrary to RNNs that have the position of each token embedded within them, transformers are unaware of the position of
each token. Therefore, the position IDs (``position_ids``) are used by the model to identify each token's position in
the list of tokens.

They are an optional parameter. If no ``position_ids`` are passed to the model, the IDs are automatically created as
absolute positional embeddings.

Absolute positional embeddings are selected in the range ``[0, config.max_position_embeddings - 1]``. Some models use
other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings.
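
As a minimal sketch of what the model builds internally when no ``position_ids`` are supplied (absolute positions,
assuming PyTorch):

.. code-block::

    >>> import torch
    >>> seq_length = 6
    >>> # one position index per token, repeated for every sequence in the batch
    >>> position_ids = torch.arange(seq_length).unsqueeze(0)
    >>> position_ids
    tensor([[0, 1, 2, 3, 4, 5]])

A tensor shaped like this can also be passed explicitly through the ``position_ids`` argument of the model when a
different positional layout is needed.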

.. _labels:

Labels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The labels are an optional argument which can be passed in order for the model to compute the loss itself. These labels
should be the expected prediction of the model: it will use the standard loss in order to compute the loss between its
predictions and the expected value (the label).

These labels are different according to the model head, for example:

- For sequence classification models (e.g., :class:`~transformers.BertForSequenceClassification`), the model expects a
  tensor of dimension :obj:`(batch_size)` with each value of the batch corresponding to the expected label of the
  entire sequence.
- For token classification models (e.g., :class:`~transformers.BertForTokenClassification`), the model expects a tensor
  of dimension :obj:`(batch_size, seq_length)` with each value corresponding to the expected label of each individual
  token.
- For masked language modeling (e.g., :class:`~transformers.BertForMaskedLM`), the model expects a tensor of dimension
  :obj:`(batch_size, seq_length)` with each value corresponding to the expected label of each individual token: the
  labels being the token ID for the masked token, and values to be ignored for the rest (usually -100).
- For sequence to sequence tasks (e.g., :class:`~transformers.BartForConditionalGeneration`,
  :class:`~transformers.MBartForConditionalGeneration`), the model expects a tensor of dimension :obj:`(batch_size,
  tgt_seq_length)` with each value corresponding to the target sequences associated with each input sequence. During
  training, both BART and T5 will make the appropriate :obj:`decoder_input_ids` and decoder attention masks internally.
  They usually do not need to be supplied. This does not apply to models leveraging the Encoder-Decoder framework. See
  the documentation of each model for more information on each specific model's labels.

The base models (e.g., :class:`~transformers.BertModel`) do not accept labels, as these are the base transformer
models, simply outputting features.
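
For example, a sequence classification head computes its loss directly when labels are passed along with the other
inputs. A minimal sketch, assuming PyTorch and reusing ``bert-base-cased`` purely as an illustration:

.. code-block::

    >>> import torch
    >>> from transformers import BertForSequenceClassification, BertTokenizer
    >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    >>> model = BertForSequenceClassification.from_pretrained("bert-base-cased")

    >>> inputs = tokenizer("A Titan RTX has 24GB of VRAM", return_tensors="pt")
    >>> labels = torch.tensor([1])  # one expected class index per sequence in the batch
    >>> outputs = model(**inputs, labels=labels)
    >>> loss = outputs.loss  # the loss computed by the model itself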

.. _decoder-input-ids:

Decoder input IDs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This input is specific to encoder-decoder models, and contains the input IDs that will be fed to the decoder. These
inputs should be used for sequence to sequence tasks, such as translation or summarization, and are usually built in a
way specific to each model.

Most encoder-decoder models (BART, T5) create their :obj:`decoder_input_ids` on their own from the :obj:`labels`. In
such models, passing the :obj:`labels` is the preferred way to handle training.
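
As a minimal sketch (the checkpoint name is only an illustration), passing a summarization target as :obj:`labels` is
enough for such a model to build its own :obj:`decoder_input_ids`:

.. code-block::

    >>> from transformers import BartTokenizer, BartForConditionalGeneration
    >>> tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
    >>> model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

    >>> inputs = tokenizer("UN Chief Says There Is No Military Solution in Syria", return_tensors="pt")
    >>> targets = tokenizer("UN Chief says there is no military solution", return_tensors="pt")
    >>> # the decoder_input_ids are created internally by shifting the labels to the right
    >>> outputs = model(input_ids=inputs["input_ids"], labels=targets["input_ids"])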

Please check each model's docs to see how they handle these input IDs for sequence to sequence training.

.. _feed-forward-chunking:

Feed Forward Chunking
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In each residual attention block in transformers, the self-attention layer is usually followed by 2 feed forward layers.
The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model (e.g.,
3072 versus 768 for ``bert-base-uncased``).

For an input of size ``[batch_size, sequence_length]``, the memory required to store the intermediate feed forward
embeddings ``[batch_size, sequence_length, config.intermediate_size]`` can account for a large fraction of the memory
use. The authors of `Reformer: The Efficient Transformer <https://arxiv.org/abs/2001.04451>`_ noticed that since the
computation is independent of the ``sequence_length`` dimension, it is mathematically equivalent to compute the output
embeddings of both feed forward layers ``[batch_size, config.hidden_size]_0, ..., [batch_size, config.hidden_size]_n``
individually and concat them afterward to ``[batch_size, sequence_length, config.hidden_size]`` with ``n =
sequence_length``, which trades increased computation time against reduced memory use, but yields a mathematically
**equivalent** result.

For models employing the function :func:`~.transformers.apply_chunking_to_forward`, the ``chunk_size`` defines the
number of output embeddings that are computed in parallel and thus defines the trade-off between memory and time
complexity. If ``chunk_size`` is set to 0, no feed forward chunking is done.
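
As a rough sketch of the idea in plain PyTorch (not the library's internal implementation), chunking the sequence
dimension of the feed forward computation trades time for memory while producing the same result:

.. code-block::

    >>> import torch
    >>> hidden_size, intermediate_size = 4, 16
    >>> dense_in = torch.nn.Linear(hidden_size, intermediate_size)
    >>> dense_out = torch.nn.Linear(intermediate_size, hidden_size)
    >>> hidden_states = torch.rand(2, 8, hidden_size)  # [batch_size, sequence_length, hidden_size]

    >>> def feed_forward(x):
    ...     return dense_out(torch.nn.functional.gelu(dense_in(x)))

    >>> # full computation: the intermediate tensor is [batch_size, sequence_length, intermediate_size]
    >>> full_output = feed_forward(hidden_states)

    >>> # chunked computation: only [batch_size, chunk_size, intermediate_size] is materialized at a time
    >>> chunked_output = torch.cat([feed_forward(chunk) for chunk in hidden_states.split(2, dim=1)], dim=1)

    >>> torch.allclose(full_output, chunked_output, atol=1e-6)
    True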