<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Quick tour

[[open-in-colab]]

Get up and running with 🤗 Transformers! Start using the [`pipeline`] for rapid inference, and quickly load a pretrained model and tokenizer with an [AutoClass](./model_doc/auto) to solve your text, vision or audio task.

<Tip>

All code examples presented in the documentation have a toggle on the top left for PyTorch and TensorFlow. If a sample doesn't have a toggle, the code is expected to work for both backends without any changes.

</Tip>

## Pipeline

[`pipeline`] is the easiest way to use a pretrained model for a given task.

<Youtube id="tiZFewofSLM"/>

The [`pipeline`] supports many common tasks out-of-the-box, each identified by a task name (see the sketch after the lists below):

**Text**:
* Sentiment analysis: classify the polarity of a given text.
* Text generation (in English): generate text from a given input.
* Named entity recognition (NER): label each word with the entity it represents (person, date, location, etc.).
* Question answering: extract the answer from the context, given some context and a question.
* Fill-mask: fill in the blank given a text with masked words.
* Summarization: generate a summary of a long sequence of text or document.
* Translation: translate text into another language.
* Feature extraction: create a tensor representation of the text.

**Image**:
* Image classification: classify an image.
* Image segmentation: classify every pixel in an image.
* Object detection: detect objects within an image.

**Audio**:
* Audio classification: assign a label to a given segment of audio.
* Automatic speech recognition (ASR): transcribe audio data into text.

<Tip>

For more details about the [`pipeline`] and associated tasks, refer to the documentation [here](./main_classes/pipelines).

</Tip>
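
As a quick sketch of how those task names are used (the identifiers below are standard pipeline task names, and each one downloads a default model chosen by the library):

```py
>>> from transformers import pipeline

>>> generator = pipeline("text-generation")  # loads a default text generation model
>>> summarizer = pipeline("summarization")  # loads a default summarization model
```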

### Pipeline usage

In the following example, you will use the [`pipeline`] for sentiment analysis.

Install the following dependencies if you haven't already:

```bash
pip install torch
===PT-TF-SPLIT===
pip install tensorflow
```

Import [`pipeline`] and specify the task you want to complete:

```py
>>> from transformers import pipeline

>>> classifier = pipeline("sentiment-analysis")
```

The pipeline downloads and caches a default [pretrained model](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) and tokenizer for sentiment analysis. Now you can use the `classifier` on your target text:

```py
>>> classifier("We are very happy to show you the 🤗 Transformers library.")
[{'label': 'POSITIVE', 'score': 0.9998}]
```

For more than one sentence, pass a list of sentences to the [`pipeline`], which returns a list of dictionaries:

```py
>>> results = classifier(["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."])
>>> for result in results:
...     print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
label: POSITIVE, with score: 0.9998
label: NEGATIVE, with score: 0.5309
```

The [`pipeline`] can also iterate over an entire dataset. Start by installing the [🤗 Datasets](https://huggingface.co/docs/datasets/) library:

```bash
pip install datasets
```

Create a [`pipeline`] with the task you want to solve and the model you want to use. Set the `device` parameter to `0` to place the tensors on a CUDA device:

```py
>>> from transformers import pipeline

>>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
```

Next, load a dataset you'd like to iterate over (see the 🤗 Datasets [Quick Start](https://huggingface.co/docs/datasets/quickstart.html) for more details). For example, let's load the [SUPERB](https://huggingface.co/datasets/superb) dataset:

```py
>>> import datasets

>>> dataset = datasets.load_dataset("superb", name="asr", split="test")  # doctest: +IGNORE_RESULT
```

You can pass a whole dataset to the pipeline:

```py
>>> files = dataset["file"]
>>> speech_recognizer(files[:4])
[{'text': 'HE HOPED THERE WOULD BE STEW FOR DINNER TURNIPS AND CARROTS AND BRUISED POTATOES AND FAT MUTTON PIECES TO BE LADLED OUT IN THICK PEPPERED FLOWER FAT AND SAUCE'},
 {'text': 'STUFFERED INTO YOU HIS BELLY COUNSELLED HIM'},
 {'text': 'AFTER EARLY NIGHTFALL THE YELLOW LAMPS WOULD LIGHT UP HERE AND THERE THE SQUALID QUARTER OF THE BROTHELS'},
 {'text': 'HO BERTIE ANY GOOD IN YOUR MIND'}]
```

For a larger dataset where the inputs are big (as in speech or vision), pass a generator instead of a list so you don't load all the inputs in memory at once. See the [pipeline documentation](main_classes/pipeline) for more information.
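
A minimal sketch of that pattern, reusing the `dataset` and `speech_recognizer` from above (the `data` generator is a hypothetical helper, shown only for illustration):

```py
>>> def data():
...     for file in dataset["file"]:
...         yield file  # yield one file path at a time instead of building the whole list

>>> for result in speech_recognizer(data()):  # doctest: +SKIP
...     print(result["text"])
```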

### Use another model and tokenizer in the pipeline

The [`pipeline`] can accommodate any model from the [Model Hub](https://huggingface.co/models), making it easy to adapt the [`pipeline`] for other use-cases. For example, if you'd like a model capable of handling French text, use the tags on the Model Hub to filter for an appropriate model. The top filtered result returns a multilingual [BERT model](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) fine-tuned for sentiment analysis. Great, let's use this model!

```py
>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
```

Use [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] to load the pretrained model and its associated tokenizer (more on an `AutoClass` below):

```py
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> # ===PT-TF-SPLIT===
>>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Then you can specify the model and tokenizer in the [`pipeline`], and apply the `classifier` to your target text:

```py
>>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
>>> classifier("Nous sommes très heureux de vous présenter la bibliothèque 🤗 Transformers.")
[{'label': '5 stars', 'score': 0.7273}]
```

If you can't find a model for your use-case, you will need to fine-tune a pretrained model on your data. Take a look at our [fine-tuning tutorial](./training) to learn how. Finally, after you've fine-tuned your pretrained model, please consider sharing it with the community on the Model Hub (see the [sharing tutorial](./model_sharing)) to democratize NLP for everyone! 🤗

## AutoClass

<Youtube id="AhChOFRegn4"/>

Under the hood, the [`AutoModelForSequenceClassification`] and [`AutoTokenizer`] classes work together to power the [`pipeline`]. An [AutoClass](./model_doc/auto) is a shortcut that automatically retrieves the architecture of a pretrained model from its name or path. You only need to select the appropriate `AutoClass` for your task and its associated tokenizer with [`AutoTokenizer`].

Let's return to our example and see how you can use the `AutoClass` to replicate the results of the [`pipeline`].

### AutoTokenizer

A tokenizer is responsible for preprocessing text into a format that is understandable to the model. First, the tokenizer will split the text into words called *tokens*. There are multiple rules that govern the tokenization process, including how to split a word and at what level (learn more about tokenization [here](./tokenizer_summary)). The most important thing to remember, though, is that you need to instantiate the tokenizer with the same model name to ensure you're using the same tokenization rules the model was pretrained with.

Load a tokenizer with [`AutoTokenizer`]:

```py
>>> from transformers import AutoTokenizer

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Next, the tokenizer converts the tokens into numbers to construct a tensor as input to the model. The mapping between tokens and numbers is known as the model's *vocabulary*.

Pass your text to the tokenizer:

```py
>>> encoding = tokenizer("We are very happy to show you the 🤗 Transformers library.")
>>> print(encoding)
{'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102],
 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```

The tokenizer will return a dictionary containing:

* [input_ids](./glossary#input-ids): numerical representations of your tokens.
* [attention_mask](./glossary#attention-mask): indicates which tokens should be attended to.
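
If you're curious what those ids correspond to, `tokenizer.decode` maps them back to text (an optional sanity check; special tokens such as `[CLS]` and `[SEP]` become visible, and the exact string depends on the tokenizer):

```py
>>> tokenizer.decode(encoding["input_ids"])  # doctest: +SKIP
```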

Just like the [`pipeline`], the tokenizer will accept a list of inputs. In addition, the tokenizer can also pad and truncate the text to return a batch with uniform length:

```py
>>> pt_batch = tokenizer(
...     ["We are very happy to show you the 馃 Transformers library.", "We hope you don't hate it."],
...     padding=True,
...     truncation=True,
...     max_length=512,
...     return_tensors="pt",
... )
>>> # ===PT-TF-SPLIT===
>>> tf_batch = tokenizer(
...     ["We are very happy to show you the 馃 Transformers library.", "We hope you don't hate it."],
...     padding=True,
...     truncation=True,
...     max_length=512,
...     return_tensors="tf",
... )
```
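
Because `padding=True` pads each sequence to the longest one in the batch, both sentences come back with the same length. As a quick check (the width of 14 here follows from the tokenization of the longer sentence shown earlier):

```py
>>> pt_batch["input_ids"].shape
torch.Size([2, 14])
>>> # ===PT-TF-SPLIT===
>>> tf_batch["input_ids"].shape
TensorShape([2, 14])
```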

Read the [preprocessing](./preprocessing) tutorial for more details about tokenization.

### AutoModel

馃 Transformers provides a simple and unified way to load pretrained instances. This means you can load an [`AutoModel`] like you would load an [`AutoTokenizer`]. The only difference is selecting the correct [`AutoModel`] for the task. Since you are doing text - or sequence - classification, load [`AutoModelForSequenceClassification`]. The TensorFlow equivalent is simply [`TFAutoModelForSequenceClassification`]:
```py
>>> from transformers import AutoModelForSequenceClassification

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)
>>> # ===PT-TF-SPLIT===
>>> from transformers import TFAutoModelForSequenceClassification

>>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name)
```

<Tip>

See the [task summary](./task_summary) for which [`AutoModel`] class to use for which task.

</Tip>

Now you can pass your preprocessed batch of inputs directly to the model. If you are using a PyTorch model, unpack the dictionary by adding `**`. For TensorFlow models, pass the dictionary directly to the model:

```py
>>> pt_outputs = pt_model(**pt_batch)
>>> # ===PT-TF-SPLIT===
>>> tf_outputs = tf_model(tf_batch)
```

The model outputs the final activations in the `logits` attribute. Apply the softmax function to the `logits` to retrieve the probabilities:

```py
>>> from torch import nn

>>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
>>> print(pt_predictions)
tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725],
        [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>)

>>> # ===PT-TF-SPLIT===
>>> import tensorflow as tf

>>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1)
>>> print(tf_predictions)
tf.Tensor(
[[0.00206 0.00177 0.01155 0.21209 0.77253]
 [0.20842 0.18262 0.19693 0.1755  0.23652]], shape=(2, 5), dtype=float32)
```

<Tip>

All 馃 Transformers models (PyTorch or TensorFlow) outputs the tensors *before* the final activation
function (like softmax) because the final activation function is often fused with the loss.

</Tip>
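
One practical consequence: if you compute a loss yourself, PyTorch's `nn.CrossEntropyLoss` expects raw logits because it applies log-softmax internally. A sketch with made-up labels for the two example sentences:

```py
>>> import torch
>>> from torch import nn

>>> labels = torch.tensor([4, 0])  # hypothetical class labels, for illustration only
>>> loss = nn.CrossEntropyLoss()(pt_outputs.logits, labels)
```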

Models are a standard [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) so you can use them in your usual training loop. However, to make things easier, 🤗 Transformers provides a [`Trainer`] class for PyTorch that adds functionality for distributed training, mixed precision, and more. For TensorFlow, you can use the `fit` method from [Keras](https://keras.io/). Refer to the [training tutorial](./training) for more details.
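
As a minimal sketch of what that looks like (`train_dataset` and `tf_train_dataset` are assumed to be datasets you've already preprocessed; see the [training tutorial](./training) for the full recipe):

```py
>>> from transformers import Trainer, TrainingArguments

>>> training_args = TrainingArguments(output_dir="test_trainer")  # where checkpoints are written
>>> trainer = Trainer(model=pt_model, args=training_args, train_dataset=train_dataset)  # doctest: +SKIP
>>> trainer.train()  # doctest: +SKIP
>>> # ===PT-TF-SPLIT===
>>> tf_model.compile(optimizer="adam", loss=tf_model.compute_loss)  # doctest: +SKIP
>>> tf_model.fit(tf_train_dataset, epochs=3)  # doctest: +SKIP
```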

<Tip>

馃 Transformers model outputs are special dataclasses so their attributes are autocompleted in an IDE.
The model outputs also behave like a tuple or a dictionary (e.g., you can index with an integer, a slice or a string) in which case the attributes that are `None` are ignored.

</Tip>
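
For example, the logits in the output above can be reached in three equivalent ways (`loss` is `None` here because no labels were passed, so integer indexing skips it):

```py
>>> pt_outputs.logits  # attribute access  # doctest: +SKIP
>>> pt_outputs["logits"]  # dictionary-style access  # doctest: +SKIP
>>> pt_outputs[0]  # tuple-style access; `None` attributes are skipped  # doctest: +SKIP
```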

### Save a model

Once your model is fine-tuned, you can save it with its tokenizer using [`PreTrainedModel.save_pretrained`]:

```py
>>> pt_save_directory = "./pt_save_pretrained"
>>> tokenizer.save_pretrained(pt_save_directory)  # doctest: +IGNORE_RESULT
>>> pt_model.save_pretrained(pt_save_directory)
>>> # ===PT-TF-SPLIT===
>>> tf_save_directory = "./tf_save_pretrained"
>>> tokenizer.save_pretrained(tf_save_directory)  # doctest: +IGNORE_RESULT
>>> tf_model.save_pretrained(tf_save_directory)
```

When you are ready to use the model again, reload it with [`PreTrainedModel.from_pretrained`]:

```py
>>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained")
>>> # ===PT-TF-SPLIT===
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained")
```

One particularly cool 🤗 Transformers feature is the ability to save a model and reload it as either a PyTorch or TensorFlow model. The `from_pt` or `from_tf` parameter can convert the model from one framework to the other:

```py
>>> from transformers import AutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory)
>>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True)
>>> # ===PT-TF-SPLIT===
>>> from transformers import TFAutoModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory)
>>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True)
```