Glossary
^^^^^^^^

General terms
-------------

- autoencoding models: see MLM
- autoregressive models: see CLM
- CLM: causal language modeling, a pretraining task where the model reads the texts in order and has to predict the
  next word. It's usually done by reading the whole sentence but using a mask inside the model to hide the future 
  tokens at a certain timestep.
- MLM: masked language modeling, a pretraining task where the model sees a corrupted version of the texts, usually done
  by masking some tokens randomly, and has to predict the original text (see the short sketch after this list).
- multimodal: a task that combines texts with other kinds of inputs (for instance images).
- NLG: natural language generation, all tasks related to generating text (for instance talking with transformers,
  translation).
- NLP: natural language processing, a generic way to say "deal with texts".
- NLU: natural language understanding, all tasks related to understanding what is in a text (for instance classifying
  the whole text or individual words).
- pretrained model: a model that has been pretrained on some data (for instance all of Wikipedia). Pretraining methods
  involve a self-supervised objective, which can be reading the text and trying to predict the next word (see CLM) or 
  masking some words and trying to predict them (see MLM).
- RNN: recurrent neural network, a type of model that uses a loop over a layer to process texts.
- seq2seq or sequence-to-sequence: models that generate a new sequence from an input, like translation models, or
  summarization models (such as :doc:`Bart </model_doc/bart>` or :doc:`T5 </model_doc/t5>`).
- token: a part of a sentence, usually a word, but can also be a subword (uncommon words are often split into subwords)
  or a punctuation symbol.
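
As a quick illustration of the CLM / MLM distinction above, here is a minimal, hedged sketch using the
:func:`~transformers.pipeline` API. The checkpoints ``bert-base-cased`` (an autoencoding/MLM model) and ``gpt2`` (an
autoregressive/CLM model) are only illustrative choices:

::

    from transformers import pipeline

    # Masked language modeling: the model predicts the token that was masked out.
    fill_mask = pipeline("fill-mask", model="bert-base-cased")
    print(fill_mask("Paris is the [MASK] of France.")[0])

    # Causal language modeling: the model continues the prompt from left to right.
    generator = pipeline("text-generation", model="gpt2")
    print(generator("Paris is the capital of")[0]["generated_text"])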

Model inputs
------------

Every model is different yet bears similarities with the others. Therefore most models use the same inputs, which are
detailed here alongside usage examples.

.. _input-ids:

Input IDs
~~~~~~~~~

The input ids are often the only required parameters to be passed to the model as input. *They are token indices,
numerical representations of tokens building the sequences that will be used as input by the model*.

Each tokenizer works differently but the underlying mechanism remains the same. Here's an example using the BERT
tokenizer, which is a `WordPiece <https://arxiv.org/pdf/1609.08144.pdf>`__ tokenizer:

::

    from transformers import BertTokenizer
    tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

    sequence = "A Titan RTX has 24GB of VRAM"

The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary.

::

    tokenized_sequence = tokenizer.tokenize(sequence)
    print(tokenized_sequence)

The tokens are either words or subwords. Here for instance, "VRAM" wasn't in the model vocabulary, so it's been split
into "V", "RA" and "M". To indicate those tokens are not separate words but parts of the same word, a double-hash
prefix is added for "RA" and "M":

::

    ['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']

These tokens can then be converted into IDs which are understandable by the model. This can be done by directly feeding
the sentence to the tokenizer, which leverages the Rust implementation of
`huggingface/tokenizers <https://github.com/huggingface/tokenizers>`__ for peak performance.

::

    encoded_sequence = tokenizer(sequence)["input_ids"]
    print(encoded_sequence)

The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The
token indices are under the key "input_ids":

::

    [101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]

Note that the tokenizer automatically adds "special tokens" (if the associated model relies on them), which are special
IDs the model sometimes uses. If we decode the previous sequence of ids,

::

    tokenizer.decode(encoded_sequence)

we will see 

::

    '[CLS] A Titan RTX has 24GB of VRAM [SEP]'

because this is the way a :class:`~transformers.BertModel` is going to expect its inputs.
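
If you want to inspect the token IDs without these special tokens, one option (reusing the ``tokenizer`` and
``sequence`` defined above) is to pass ``add_special_tokens=False``:

::

    # Encode without the model-specific special tokens such as [CLS] and [SEP].
    encoded_sequence = tokenizer(sequence, add_special_tokens=False)["input_ids"]
    tokenizer.decode(encoded_sequence)

This decodes back to the sentence without the ``[CLS]`` and ``[SEP]`` markers.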

.. _attention-mask:

Attention mask
~~~~~~~~~~~~~~

The attention mask is an optional argument used when batching sequences together. This argument indicates to the
model which tokens should be attended to, and which should not.

For example, consider these two sequences:

::

    from transformers import BertTokenizer
    tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

    sequence_a = "This is a short sequence."
    sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

    encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
    encoded_sequence_b = tokenizer(sequence_b)["input_ids"]
    
    len(encoded_sequence_a), len(encoded_sequence_b)

The encoded versions have different lengths:

::

    (8, 19)

Therefore, they can't be put together in the same tensor as-is. The first sequence needs to be padded up to the length
of the second one, or the second one needs to be truncated down to the length of the first one.

In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask
it to pad like this:

::

    padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)
    padded_sequences["input_ids"]

We can see that 0s have been added on the right of the first sentence to make it the same length as the second one:

::

    [[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
     [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]

This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating
the position of the padded indices so that the model does not attend to them. For the
:class:`~transformers.BertTokenizer`, :obj:`1` indicates a value that should be attended to, while :obj:`0` indicates
a padded value. This attention mask is in the dictionary returned by the tokenizer under the key "attention_mask":

::

    padded_sequences["attention_mask"]

will give back

::

    [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
     [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
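
To show how this is typically used end to end, here is a hedged sketch (assuming PyTorch and the ``bert-base-cased``
checkpoint, and reusing ``tokenizer``, ``sequence_a`` and ``sequence_b`` from above): the tokenizer can return the
padded batch and its attention mask directly as tensors, which are then both passed to the model.

::

    from transformers import BertModel

    model = BertModel.from_pretrained("bert-base-cased")

    # return_tensors="pt" makes the tokenizer return PyTorch tensors directly.
    batch = tokenizer([sequence_a, sequence_b], padding=True, return_tensors="pt")

    # The attention mask tells the model to ignore the padded positions of the
    # shorter sequence.
    outputs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])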

.. _token-type-ids:

Token Type IDs
~~~~~~~~~~~~~~

Some models' purpose is to do sequence classification or question answering. These require two different sequences to
be encoded in the same input IDs. They are usually separated by special tokens, such as the classifier (``[CLS]``) and
separator (``[SEP]``) tokens. For example, the BERT model builds its two-sequence input as follows:

::

    # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]

We can use our tokenizer to automatically generate such a sentence by passing the two sequences as two arguments (and
not a list like before) like this:

::

    from transformers import BertTokenizer
    tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    sequence_a = "HuggingFace is based in NYC"
    sequence_b = "Where is HuggingFace based?"

    encoded_dict = tokenizer(sequence_a, sequence_b)
    tokenizer.decode(encoded_dict["input_ids"])

which will return:

::

    "[CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]"

This is enough for some models to understand where one sequence ends and where another begins. However, other models
such as BERT have an additional mechanism: the token type IDs (also called segment IDs). They are a binary mask
identifying the different sequences in the model input.

The tokenizer returns this mask in the dictionary under the key "token_type_ids":

::

    encoded_dict['token_type_ids']

will return

::

    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]

The first sequence, the "context" used for the question, has all its tokens represented by :obj:`0`, whereas the
question has all its tokens represented by :obj:`1`. Some models, like :class:`~transformers.XLNetModel`, use an
additional token represented by a :obj:`2`.
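
As a small, hedged sketch of how this mask is consumed (assuming PyTorch and the ``bert-base-cased`` checkpoint, and
reusing ``tokenizer``, ``sequence_a`` and ``sequence_b`` from above), the whole dictionary returned by the tokenizer,
including ``token_type_ids``, can be forwarded to the model in a single call:

::

    from transformers import BertModel

    model = BertModel.from_pretrained("bert-base-cased")

    # return_tensors="pt" turns input_ids, token_type_ids and attention_mask into tensors.
    inputs = tokenizer(sequence_a, sequence_b, return_tensors="pt")

    # The token type IDs tell the model which tokens belong to the first sequence
    # and which belong to the second.
    outputs = model(**inputs)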

.. _position-ids:

Position IDs
~~~~~~~~~~~~

The position IDs are used by the model to identify which token is at which position. Contrary to RNNs, which keep
track of token positions implicitly by processing the sequence in order, transformers are unaware of the position of
each token. The position IDs are created for this purpose.

They are an optional parameter. If no position IDs are passed to the model, they are automatically created as absolute
positional embeddings.

Absolute positional embeddings are selected in the range ``[0, config.max_position_embeddings - 1]``. Some models
use other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings.
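
As a hedged illustration (assuming PyTorch and a BERT-style model), the default behaviour can be reproduced by
building the absolute position IDs explicitly, a tensor that simply enumerates the token positions, and passing it to
the model:

::

    import torch
    from transformers import BertTokenizer, BertModel

    tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
    model = BertModel.from_pretrained("bert-base-cased")

    inputs = tokenizer("A Titan RTX has 24GB of VRAM", return_tensors="pt")
    sequence_length = inputs["input_ids"].shape[1]

    # Absolute position IDs: 0, 1, ..., sequence_length - 1, which is what the
    # model would create by default if none were passed.
    position_ids = torch.arange(sequence_length).unsqueeze(0)

    outputs = model(**inputs, position_ids=position_ids)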

.. _feed-forward-chunking:

Feed Forward Chunking
~~~~~~~~~~~~~~~~~~~~~

In transformers, two feed forward layers usually follow the self attention layer in each residual attention block.
The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model (e.g.,
3072 versus 768 for ``bert-base-uncased``).

For an input of size ``[batch_size, sequence_length]``, the memory required to store the intermediate feed forward
embeddings ``[batch_size, sequence_length, config.intermediate_size]`` can account for a large fraction of the memory
use. The authors of `Reformer: The Efficient Transformer <https://arxiv.org/abs/2001.04451>`_ noticed that since the
computation is independent of the ``sequence_length`` dimension, it is mathematically equivalent to compute the output
embeddings of both feed forward layers ``[batch_size, config.hidden_size]_0, ..., [batch_size, config.hidden_size]_n``
individually and concatenate them afterwards to ``[batch_size, sequence_length, config.hidden_size]`` with
``n = sequence_length``. This trades increased computation time for reduced memory use, but yields a mathematically
**equivalent** result.
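
Here is a minimal, self-contained sketch of the idea in plain PyTorch (not the library's actual implementation): the
feed forward output is computed chunk by chunk along the sequence dimension and concatenated, which matches the result
of a single pass over the full sequence while only materializing a small intermediate tensor at a time.

::

    import torch
    import torch.nn as nn

    hidden_size, intermediate_size = 768, 3072
    batch_size, sequence_length, chunk_size = 2, 16, 4

    dense_in = nn.Linear(hidden_size, intermediate_size)
    dense_out = nn.Linear(intermediate_size, hidden_size)

    def feed_forward(hidden_states):
        return dense_out(torch.relu(dense_in(hidden_states)))

    hidden_states = torch.rand(batch_size, sequence_length, hidden_size)

    # Full pass: materializes a [batch_size, sequence_length, intermediate_size] tensor.
    full_output = feed_forward(hidden_states)

    # Chunked pass: only [batch_size, chunk_size, intermediate_size] exists at a time.
    chunks = [feed_forward(chunk) for chunk in hidden_states.split(chunk_size, dim=1)]
    chunked_output = torch.cat(chunks, dim=1)

    # Both computations give the same result.
    assert torch.allclose(full_output, chunked_output, atol=1e-6)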

For models employing the function :func:`~.transformers.apply_chunking_to_forward`, the ``chunk_size`` defines the
number of output embeddings that are computed in parallel and thus defines the trade-off between memory and time
complexity.  If ``chunk_size`` is set to 0, no feed forward chunking is done.