<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Glossary

This glossary defines general machine learning and 🤗 Transformers terms to help you better understand the
documentation.

## A

### Attention mask

The attention mask is an optional argument used when batching sequences together.

<Youtube id="M6adb1j2jPI"/>

This argument indicates to the model which tokens should be attended to, and which should not.

For example, consider these two sequences:

```python
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

>>> sequence_a = "This is a short sequence."
>>> sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

>>> encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
>>> encoded_sequence_b = tokenizer(sequence_b)["input_ids"]
```

The encoded versions have different lengths:

```python
>>> len(encoded_sequence_a), len(encoded_sequence_b)
(8, 19)
```

Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length
of the second one, or the second one needs to be truncated down to the length of the first one.

In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask
it to pad like this:

```python
>>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)
```

We can see that 0s have been added on the right of the first sentence to make it the same length as the second one:

```python
>>> padded_sequences["input_ids"]
[[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]
```

This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating the
position of the padded indices so that the model does not attend to them. For the [`BertTokenizer`], `1` indicates a
value that should be attended to, while `0` indicates a padded value. This attention mask is in the dictionary returned
by the tokenizer under the key "attention_mask":

```python
>>> padded_sequences["attention_mask"]
[[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
```
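
The same call can also return the padded batch and its attention mask directly as framework tensors; a minimal sketch assuming PyTorch is installed:

```python
>>> padded_batch = tokenizer([sequence_a, sequence_b], padding=True, return_tensors="pt")

>>> # both tensors share the padded length of the longest sequence in the batch
>>> padded_batch["input_ids"].shape, padded_batch["attention_mask"].shape
(torch.Size([2, 19]), torch.Size([2, 19]))
```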

### autoencoding models 

see [MLM](#mlm)

### autoregressive models

see [CLM](#clm)

## C

### CLM

Causal language modeling, a pretraining task where the model reads the text in order and has to predict the next word.
It's usually done by reading the whole sentence but using a mask inside the model to hide the future tokens at a
certain timestep.
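
As a rough sketch of the idea (not the exact mask construction used inside any particular model), a lower-triangular matrix lets each position attend only to itself and the positions before it:

```python
>>> import torch

>>> seq_length = 5
>>> # row i has ones up to column i: position i may attend to positions 0..i, never to future positions
>>> causal_mask = torch.tril(torch.ones(seq_length, seq_length))
>>> causal_mask
tensor([[1., 0., 0., 0., 0.],
        [1., 1., 0., 0., 0.],
        [1., 1., 1., 0., 0.],
        [1., 1., 1., 1., 0.],
        [1., 1., 1., 1., 1.]])
```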

## D

### Decoder input IDs

This input is specific to encoder-decoder models, and contains the input IDs that will be fed to the decoder. These
inputs should be used for sequence to sequence tasks, such as translation or summarization, and are usually built in a
way specific to each model.

Most encoder-decoder models (BART, T5) create their `decoder_input_ids` on their own from the `labels`. In such models,
passing the `labels` is the preferred way to handle training.

Please check each model's docs to see how they handle these input IDs for sequence to sequence training.
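
For instance, with T5 (a minimal sketch; it assumes the `t5-small` checkpoint and its SentencePiece tokenizer dependency are available), passing the `labels` is enough and the model derives the `decoder_input_ids` from them internally:

```python
>>> from transformers import T5ForConditionalGeneration, T5Tokenizer

>>> tokenizer = T5Tokenizer.from_pretrained("t5-small")
>>> model = T5ForConditionalGeneration.from_pretrained("t5-small")

>>> inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
>>> labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

>>> # no decoder_input_ids are passed: the model builds them from the labels and returns a loss
>>> loss = model(input_ids=inputs.input_ids, labels=labels).loss
```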

### deep learning

Machine learning algorithms which use neural networks with several layers.

## F

### Feed Forward Chunking

In each residual attention block in transformers, the self-attention layer is usually followed by 2 feed forward layers.
The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model (e.g., 3072
vs. 768 for `bert-base-uncased`).

For an input of size `[batch_size, sequence_length]`, the memory required to store the intermediate feed forward
embeddings `[batch_size, sequence_length, config.intermediate_size]` can account for a large fraction of the memory
use. The authors of [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) noticed that since the
computation is independent of the `sequence_length` dimension, it is mathematically equivalent to compute the output
embeddings of both feed forward layers `[batch_size, config.hidden_size]_0, ..., [batch_size, config.hidden_size]_n`
individually and concat them afterward to `[batch_size, sequence_length, config.hidden_size]` with `n =
sequence_length`, which trades increased computation time against reduced memory use, but yields a mathematically
**equivalent** result.

For models employing the function [`apply_chunking_to_forward`], the `chunk_size` defines the number of output
embeddings that are computed in parallel and thus defines the trade-off between memory and time complexity. If
`chunk_size` is set to 0, no feed forward chunking is done.
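
A minimal sketch of the equivalence with a toy feed forward layer (not the library implementation; the sizes are made up for the example). Processing the sequence dimension in chunks gives the same result as the full computation while only materializing a smaller intermediate tensor at a time:

```python
>>> import torch
>>> from torch import nn

>>> batch_size, sequence_length, hidden_size, intermediate_size = 2, 16, 8, 32
>>> feed_forward = nn.Sequential(
...     nn.Linear(hidden_size, intermediate_size), nn.GELU(), nn.Linear(intermediate_size, hidden_size)
... )
>>> hidden_states = torch.randn(batch_size, sequence_length, hidden_size)

>>> # full computation materializes [batch_size, sequence_length, intermediate_size] at once
>>> full_output = feed_forward(hidden_states)

>>> # chunked computation processes slices of the sequence dimension and concatenates the results
>>> chunked_output = torch.cat([feed_forward(chunk) for chunk in hidden_states.chunk(4, dim=1)], dim=1)

>>> torch.allclose(full_output, chunked_output, atol=1e-6)
True
```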

## I

### Input IDs

The input ids are often the only required parameters to be passed to the model as input. *They are token indices,
numerical representations of tokens building the sequences that will be used as input by the model*.

<Youtube id="VFp38yj8h3A"/>

Each tokenizer works differently but the underlying mechanism remains the same. Here's an example using the BERT
tokenizer, which is a [WordPiece](https://arxiv.org/pdf/1609.08144.pdf) tokenizer:

```python
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

>>> sequence = "A Titan RTX has 24GB of VRAM"
```

The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary.

```python
>>> tokenized_sequence = tokenizer.tokenize(sequence)
```

The tokens are either words or subwords. Here for instance, "VRAM" wasn't in the model vocabulary, so it's been split
into "V", "RA" and "M". To indicate those tokens are not separate words but parts of the same word, a double-hash prefix
is added for "RA" and "M":

```python
>>> print(tokenized_sequence)
['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']
```

These tokens can then be converted into IDs which are understandable by the model. This can be done by directly feeding
the sentence to the tokenizer, which leverages the Rust implementation of [🤗
Tokenizers](https://github.com/huggingface/tokenizers) for peak performance.

```python
>>> inputs = tokenizer(sequence)
```

The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The
token indices are under the key "input_ids":

```python
>>> encoded_sequence = inputs["input_ids"]
>>> print(encoded_sequence)
[101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102]
```

Note that the tokenizer automatically adds "special tokens" (if the associated model relies on them), which are special
IDs the model expects.

If we decode the previous sequence of ids,

```python
>>> decoded_sequence = tokenizer.decode(encoded_sequence)
```

we will see

```python
>>> print(decoded_sequence)
[CLS] A Titan RTX has 24GB of VRAM [SEP]
```

because this is the way a [`BertModel`] is going to expect its inputs.

## L

### Labels

The labels are an optional argument which can be passed in order for the model to compute the loss itself. These labels
should be the expected predictions of the model: it will use its standard loss function to compute the loss between its
predictions and the expected values (the labels).

These labels differ depending on the model head, for example:

- For sequence classification models (e.g., [`BertForSequenceClassification`]), the model expects a tensor of dimension
  `(batch_size)` with each value of the batch corresponding to the expected label of the entire sequence.
- For token classification models (e.g., [`BertForTokenClassification`]), the model expects a tensor of dimension
  `(batch_size, seq_length)` with each value corresponding to the expected label of each individual token.
- For masked language modeling (e.g., [`BertForMaskedLM`]), the model expects a tensor of dimension `(batch_size,
  seq_length)` with each value corresponding to the expected label of each individual token: the labels being the token
  ID for the masked token, and values to be ignored for the rest (usually -100).
- For sequence to sequence tasks (e.g., [`BartForConditionalGeneration`], [`MBartForConditionalGeneration`]), the model
  expects a tensor of dimension `(batch_size, tgt_seq_length)` with each value corresponding to the target sequences
  associated with each input sequence. During training, both *BART* and *T5* will make the appropriate
  *decoder_input_ids* and decoder attention masks internally. They usually do not need to be supplied. This does not
  apply to models leveraging the Encoder-Decoder framework. See the documentation of each model for more information on
  each specific model's labels.

The base models (e.g., [`BertModel`]) do not accept labels, as these are the base transformer models, simply outputting
features.
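
For example, passing `labels` to a model with a sequence classification head makes it return the loss directly. A minimal sketch (the classification head loaded on top of `bert-base-cased` here is randomly initialized, so the loss value itself is meaningless):

```python
>>> import torch
>>> from transformers import BertForSequenceClassification, BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
>>> model = BertForSequenceClassification.from_pretrained("bert-base-cased")

>>> inputs = tokenizer("This is a short sequence.", return_tensors="pt")
>>> labels = torch.tensor([1])  # one expected label per sequence in the batch

>>> outputs = model(**inputs, labels=labels)
>>> outputs.loss.shape  # the loss is a scalar
torch.Size([])
```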

## M

### MLM

Masked language modeling, a pretraining task where the model sees a corrupted version of the text, usually done by
masking some tokens randomly, and has to predict the original text.

### multimodal

A task that combines texts with another kind of input (for instance, images).

## N

### NLG

Natural language generation, all tasks related to generating text (for instance, chatting with transformers, translation).

### NLP

Natural language processing, a generic way to say "deal with texts".

### NLU

Natural language understanding, all tasks related to understanding what is in a text (for instance classifying the
whole text or individual words).

## P

### Position IDs

Contrary to RNNs that have the position of each token embedded within them, transformers are unaware of the position of
each token. Therefore, the position IDs (`position_ids`) are used by the model to identify each token's position in the
list of tokens.

They are an optional parameter. If no `position_ids` are passed to the model, the IDs are automatically created as
absolute positional embeddings.

Absolute positional embeddings are selected in the range `[0, config.max_position_embeddings - 1]`. Some models use
other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings.
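
If you want to pass them explicitly anyway, a minimal sketch (the model would build the same default range itself if `position_ids` were omitted):

```python
>>> import torch
>>> from transformers import BertModel, BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
>>> model = BertModel.from_pretrained("bert-base-cased")

>>> inputs = tokenizer("A Titan RTX has 24GB of VRAM", return_tensors="pt")
>>> seq_length = inputs["input_ids"].shape[1]

>>> # absolute positions 0, 1, ..., seq_length - 1 for the single sequence in the batch
>>> position_ids = torch.arange(seq_length).unsqueeze(0)
>>> outputs = model(**inputs, position_ids=position_ids)
```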


### pretrained model

A model that has been pretrained on some data (for instance all of Wikipedia). Pretraining methods involve a
self-supervised objective, which can be reading the text and trying to predict the next word (see CLM) or masking some
words and trying to predict them (see MLM).

## R

### RNN

Recurrent neural network, a type of model that uses a loop over a layer to process texts.

## S

### self-attention

Each element of the input finds out which other elements of the input it should attend to.
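
At its core this is the scaled dot-product attention computation. A bare-bones sketch with random tensors, ignoring multiple heads, masking and the learned projections:

```python
>>> import math
>>> import torch

>>> seq_length, hidden_size = 5, 8
>>> query = torch.randn(seq_length, hidden_size)
>>> key = torch.randn(seq_length, hidden_size)
>>> value = torch.randn(seq_length, hidden_size)

>>> # each position gets a weight distribution over all positions of the input
>>> scores = query @ key.transpose(0, 1) / math.sqrt(hidden_size)
>>> weights = scores.softmax(dim=-1)
>>> attended = weights @ value  # shape [seq_length, hidden_size]
```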

### seq2seq or sequence-to-sequence

Models that generate a new sequence from an input, like translation models, or summarization models (such as
[Bart](model_doc/bart) or [T5](model_doc/t5)).
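
A minimal sketch of how such a model is typically used (it assumes the `facebook/bart-large-cnn` summarization checkpoint; other seq2seq checkpoints work the same way through `generate`):

```python
>>> from transformers import BartForConditionalGeneration, BartTokenizer

>>> tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
>>> model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

>>> article = "The tower is 324 metres tall, about the same height as an 81-storey building."
>>> inputs = tokenizer(article, return_tensors="pt")

>>> # the decoder generates a new sequence token by token from the encoded input
>>> summary_ids = model.generate(inputs["input_ids"], max_length=20)
>>> summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```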

## T

### token

A part of a sentence, usually a word, but can also be a subword (uncommon words are often split into subwords) or a
punctuation symbol.

### Token Type IDs

Some models' purpose is to do classification on pairs of sentences or question answering.

<Youtube id="0u3ioSwev3s"/>

These require two different sequences to be joined in a single "input_ids" entry, which is usually performed with the
help of special tokens, such as the classifier (`[CLS]`) and separator (`[SEP]`) tokens. For example, the BERT model
builds its two sequence input as such:

```python
>>> # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]
```

We can use our tokenizer to automatically generate such a sentence by passing the two sequences to `tokenizer` as two
arguments (and not a list, like before) like this:

```python
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
>>> sequence_a = "HuggingFace is based in NYC"
>>> sequence_b = "Where is HuggingFace based?"

>>> encoded_dict = tokenizer(sequence_a, sequence_b)
>>> decoded = tokenizer.decode(encoded_dict["input_ids"])
```

which will return:

```python
>>> print(decoded)
[CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]
```

This is enough for some models to understand where one sequence ends and where another begins. However, other models,
such as BERT, also deploy token type IDs (also called segment IDs). They are represented as a binary mask identifying
the two types of sequence in the model.

The tokenizer returns this mask as the "token_type_ids" entry:

```python
>>> encoded_dict["token_type_ids"]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

The first sequence, the "context" used for the question, has all its tokens represented by a `0`, whereas the second
sequence, corresponding to the "question", has all its tokens represented by a `1`.

Some models, like [`XLNetModel`], use an additional token represented by a `2`.

### transformer

Self-attention-based deep learning model architecture.