Summary of the tasks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This page shows the most frequent use-cases when using the library. The models available allow for many different
configurations and a great versatility in use-cases. The simplest ones are presented here, showcasing usage for tasks
such as question answering, sequence classification, named entity recognition and others.

These examples leverage auto-models, which are classes that will instantiate a model according to a given checkpoint,
automatically selecting the correct model architecture. Please check the :class:`~transformers.AutoModel` documentation
for more information. Feel free to modify the code to be more specific and adapt it to your own use-case.
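
As an illustration, here is a minimal sketch of how an auto-class resolves the architecture from a checkpoint name
(using the ``distilbert-base-uncased-finetuned-sst-2-english`` sentiment checkpoint purely as an example):

.. code-block::

    >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification

    >>> # The checkpoint name alone is enough: the auto-classes read its configuration
    >>> # and instantiate the matching architecture (a DistilBERT classifier here).
    >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
    >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")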

In order for a model to perform well on a task, it must be loaded from a checkpoint corresponding to that task. These
checkpoints are usually pre-trained on a large corpus of data and fine-tuned on a specific task. This means the
following:

- Not all models were fine-tuned on all tasks. If you want to fine-tune a model on a specific task, you can leverage
  one of the ``run_$TASK.py`` scripts in the `examples
  <https://github.com/huggingface/transformers/tree/master/examples>`__ directory.
- Fine-tuned models were fine-tuned on a specific dataset. This dataset may or may not overlap with your use-case and
  domain. As mentioned previously, you may leverage the `examples
  <https://github.com/huggingface/transformers/tree/master/examples>`__ scripts to fine-tune your model, or you may
  create your own training script.

In order to run inference on a task, the library makes several mechanisms available:

- Pipelines: very easy-to-use abstractions, which require as little as two lines of code.
- Direct model use: fewer abstractions, but more flexibility and power via direct access to a tokenizer
  (PyTorch/TensorFlow) and full inference capacity.

Both approaches are showcased here.

.. note::

    All tasks presented here leverage pre-trained checkpoints that were fine-tuned on specific tasks. Loading a
    checkpoint that was not fine-tuned on a specific task would load only the base transformer layers and not the
    additional head that is used for the task, initializing the weights of that head randomly.

    This would produce random output.
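
As an illustration, the following minimal sketch loads a sequence classification head on top of the base
``bert-base-cased`` checkpoint; because that checkpoint was never fine-tuned for classification, the head weights are
randomly initialized:

.. code-block::

    >>> from transformers import AutoModelForSequenceClassification

    >>> # bert-base-cased only stores the base transformer weights, so the sequence
    >>> # classification head added on top is randomly initialized (the library warns
    >>> # about this) and its outputs will vary from one run to the next.
    >>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")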

Sequence Classification
-----------------------------------------------------------------------------------------------------------------------

Sequence classification is the task of classifying sequences according to a given number of classes. An example of a
sequence classification dataset is GLUE, which is entirely based on that task. If you would like to fine-tune a model
on a GLUE sequence classification task, you may leverage the `run_glue.py
<https://github.com/huggingface/transformers/tree/master/examples/text-classification/run_glue.py>`__,
`run_pl_glue.py
<https://github.com/huggingface/transformers/tree/master/examples/text-classification/run_pl_glue.py>`__ or
`run_tf_glue.py
<https://github.com/huggingface/transformers/tree/master/examples/text-classification/run_tf_glue.py>`__ scripts.

Here is an example of using pipelines to do sentiment analysis: identifying if a sequence is positive or negative. It
leverages a model fine-tuned on SST-2, which is a GLUE task.

This returns a label ("POSITIVE" or "NEGATIVE") alongside a score, as follows:

.. code-block::

    >>> from transformers import pipeline

    >>> nlp = pipeline("sentiment-analysis")

    >>> result = nlp("I hate you")[0]
    >>> print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
    label: NEGATIVE, with score: 0.9991

    >>> result = nlp("I love you")[0]
    >>> print(f"label: {result['label']}, with score: {round(result['score'], 4)}")
    label: POSITIVE, with score: 0.9999


Here is an example of doing sequence classification using a model to determine if two sequences are paraphrases of
each other. The process is the following:

1. Instantiate a tokenizer and a model from the checkpoint name. The model is identified as a BERT model and is loaded
   with the weights stored in the checkpoint.
2. Build a sequence from the two sentences, with the correct model-specific separators, token type ids and attention
   masks (:func:`~transformers.PreTrainedTokenizer.encode` and :func:`~transformers.PreTrainedTokenizer.__call__` take
   care of this).
3. Pass this sequence through the model so that it is classified into one of the two available classes: 0 (not a
   paraphrase) and 1 (is a paraphrase).
4. Compute the softmax of the result to get probabilities over the classes.
5. Print the results.

.. code-block::

    >>> ## PYTORCH CODE
    >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
    >>> import torch

    >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
    >>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc", return_dict=True)

    >>> classes = ["not paraphrase", "is paraphrase"]

    >>> sequence_0 = "The company HuggingFace is based in New York City"
    >>> sequence_1 = "Apples are especially bad for your health"
    >>> sequence_2 = "HuggingFace's headquarters are situated in Manhattan"

    >>> paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="pt")
    >>> not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="pt")

    >>> paraphrase_classification_logits = model(**paraphrase).logits
    >>> not_paraphrase_classification_logits = model(**not_paraphrase).logits

    >>> paraphrase_results = torch.softmax(paraphrase_classification_logits, dim=1).tolist()[0]
    >>> not_paraphrase_results = torch.softmax(not_paraphrase_classification_logits, dim=1).tolist()[0]

    >>> # Should be paraphrase
    >>> for i in range(len(classes)):
    ...     print(f"{classes[i]}: {int(round(paraphrase_results[i] * 100))}%")
    not paraphrase: 10%
    is paraphrase: 90%

    >>> # Should not be paraphrase
    >>> for i in range(len(classes)):
    ...     print(f"{classes[i]}: {int(round(not_paraphrase_results[i] * 100))}%")
    not paraphrase: 94%
    is paraphrase: 6%
    >>> ## TENSORFLOW CODE
    >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
    >>> import tensorflow as tf

    >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
    >>> model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc", return_dict=True)

    >>> classes = ["not paraphrase", "is paraphrase"]

    >>> sequence_0 = "The company HuggingFace is based in New York City"
    >>> sequence_1 = "Apples are especially bad for your health"
    >>> sequence_2 = "HuggingFace's headquarters are situated in Manhattan"

    >>> paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="tf")
    >>> not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="tf")

    >>> paraphrase_classification_logits = model(paraphrase)[0]
    >>> not_paraphrase_classification_logits = model(not_paraphrase)[0]

    >>> paraphrase_results = tf.nn.softmax(paraphrase_classification_logits, axis=1).numpy()[0]
    >>> not_paraphrase_results = tf.nn.softmax(not_paraphrase_classification_logits, axis=1).numpy()[0]

    >>> # Should be paraphrase
    >>> for i in range(len(classes)):
    ...     print(f"{classes[i]}: {int(round(paraphrase_results[i] * 100))}%")
    not paraphrase: 10%
    is paraphrase: 90%

    >>> # Should not be paraphrase
    >>> for i in range(len(classes)):
    ...     print(f"{classes[i]}: {int(round(not_paraphrase_results[i] * 100))}%")
    not paraphrase: 94%
    is paraphrase: 6%

Extractive Question Answering
-----------------------------------------------------------------------------------------------------------------------

Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune a
model on a SQuAD task, you may leverage the `run_squad.py
<https://github.com/huggingface/transformers/tree/master/examples/question-answering/run_squad.py>`__ and
`run_tf_squad.py
<https://github.com/huggingface/transformers/tree/master/examples/question-answering/run_tf_squad.py>`__ scripts.


Here is an example of using pipelines to do question answering: extracting an answer from a text given a question. It
leverages a fine-tuned model on SQuAD.

.. code-block::

    >>> from transformers import pipeline

    >>> nlp = pipeline("question-answering")

    >>> context = r"""
    ... Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
    ... question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
    ... a model on a SQuAD task, you may leverage the examples/question-answering/run_squad.py script.
    ... """

This returns an answer extracted from the text and a confidence score, alongside "start" and "end" values, which are
the positions of the extracted answer in the text.

.. code-block::

    >>> result = nlp(question="What is extractive question answering?", context=context)
    >>> print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
    Answer: 'the task of extracting an answer from a text given a question.', score: 0.6226, start: 34, end: 96

    >>> result = nlp(question="What is a good example of a question answering dataset?", context=context)
    >>> print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
    Answer: 'SQuAD dataset,', score: 0.5053, start: 147, end: 161


Here is an example of question answering using a model and a tokenizer. The process is the following:

1. Instantiate a tokenizer and a model from the checkpoint name. The model is identified as a BERT model and is loaded
   with the weights stored in the checkpoint.
2. Define a text and a few questions.
3. Iterate over the questions and build a sequence from the text and the current question, with the correct
   model-specific separators, token type ids and attention masks.
4. Pass this sequence through the model. This outputs a range of scores across the entire sequence of tokens (question
   and text), for both the start and end positions.
5. Compute the softmax of the result to get probabilities over the tokens.
6. Fetch the tokens from the identified start and stop values and convert those tokens to a string.
7. Print the results.

.. code-block::

    >>> ## PYTORCH CODE
    >>> from transformers import AutoTokenizer, AutoModelForQuestionAnswering
    >>> import torch

    >>> tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
    >>> model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad", return_dict=True)

    >>> text = r"""
    ... 🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
    ... architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
    ... Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
    ... TensorFlow 2.0 and PyTorch.
    ... """

    >>> questions = [
    ...     "How many pretrained models are available in 🤗 Transformers?",
    ...     "What does 🤗 Transformers provide?",
    ...     "🤗 Transformers provides interoperability between which frameworks?",
    ... ]

    >>> for question in questions:
    ...     inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="pt")
    ...     input_ids = inputs["input_ids"].tolist()[0]
    ...
    ...     text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
    ...     answer_start_scores, answer_end_scores = model(**inputs)
    ...
    ...     answer_start = torch.argmax(
    ...         answer_start_scores
    ...     )  # Get the most likely beginning of answer with the argmax of the score
    ...     answer_end = torch.argmax(answer_end_scores) + 1  # Get the most likely end of answer with the argmax of the score
    ...
    ...     answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
    ...
    ...     print(f"Question: {question}")
    ...     print(f"Answer: {answer}")
    Question: How many pretrained models are available in 🤗 Transformers?
    Answer: over 32 +
    Question: What does 🤗 Transformers provide?
    Answer: general - purpose architectures
    Question: 🤗 Transformers provides interoperability between which frameworks?
    Answer: tensorflow 2 . 0 and pytorch
    >>> ## TENSORFLOW CODE
    >>> from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering
    >>> import tensorflow as tf

    >>> tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
    >>> model = TFAutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad", return_dict=True)

    >>> text = r"""
    ... 🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
    ... architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
    ... Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
    ... TensorFlow 2.0 and PyTorch.
    ... """

    >>> questions = [
    ...     "How many pretrained models are available in 🤗 Transformers?",
    ...     "What does 🤗 Transformers provide?",
    ...     "🤗 Transformers provides interoperability between which frameworks?",
    ... ]

    >>> for question in questions:
    ...     inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="tf")
    ...     input_ids = inputs["input_ids"].numpy()[0]
    ...
    ...     text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
    ...     answer_start_scores, answer_end_scores = model(inputs)
    ...
    ...     answer_start = tf.argmax(
    ...         answer_start_scores, axis=1
    ...     ).numpy()[0]  # Get the most likely beginning of answer with the argmax of the score
    ...     answer_end = (
    ...         tf.argmax(answer_end_scores, axis=1) + 1
    ...     ).numpy()[0]  # Get the most likely end of answer with the argmax of the score
    ...     answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
    ...
    ...     print(f"Question: {question}")
    ...     print(f"Answer: {answer}")
    Question: How many pretrained models are available in 🤗 Transformers?
    Answer: over 32 +
    Question: What does 🤗 Transformers provide?
    Answer: general - purpose architectures
    Question: 🤗 Transformers provides interoperability between which frameworks?
    Answer: tensorflow 2 . 0 and pytorch



Language Modeling
-----------------------------------------------------------------------------------------------------------------------

Language modeling is the task of fitting a model to a corpus, which can be domain specific. All popular
transformer-based models are trained using a variant of language modeling, e.g. BERT with masked language modeling,
GPT-2 with causal language modeling.

Language modeling can be useful outside of pre-training as well, for example to shift the model distribution to be
domain-specific: using a language model trained over a very large corpus, and then fine-tuning it on a news dataset or
on scientific papers e.g. `LysandreJik/arxiv-nlp <https://huggingface.co/lysandre/arxiv-nlp>`__.
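
For instance, here is a minimal sketch of loading such a domain-specific checkpoint with the auto-classes (assuming
the checkpoint is available on the model hub under that identifier):

.. code-block::

    >>> from transformers import AutoModelWithLMHead, AutoTokenizer

    >>> # A language model whose distribution was shifted towards scientific papers
    >>> tokenizer = AutoTokenizer.from_pretrained("lysandre/arxiv-nlp")
    >>> model = AutoModelWithLMHead.from_pretrained("lysandre/arxiv-nlp")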

Masked Language Modeling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Masked language modeling is the task of masking tokens in a sequence with a masking token, and prompting the model to
fill that mask with an appropriate token. This allows the model to attend to both the right context (tokens on the
right of the mask) and the left context (tokens on the left of the mask). Such a training creates a strong basis for
downstream tasks requiring bi-directional context, such as SQuAD (question answering, see `Lewis, Liu, Goyal et al.
<https://arxiv.org/abs/1910.13461>`__, part 4.2).

Here is an example of using pipelines to replace a mask in a sequence:

.. code-block::

    >>> from transformers import pipeline

    >>> nlp = pipeline("fill-mask")

This outputs the sequences with the mask filled, the confidence score, and the token id in the tokenizer vocabulary:

.. code-block::

    >>> from pprint import pprint
    >>> pprint(nlp(f"HuggingFace is creating a {nlp.tokenizer.mask_token} that the community uses to solve NLP tasks."))
    [{'score': 0.1792745739221573,
      'sequence': '<s>HuggingFace is creating a tool that the community uses to '
                  'solve NLP tasks.</s>',
      'token': 3944,
      'token_str': 'Ġtool'},
     {'score': 0.11349421739578247,
      'sequence': '<s>HuggingFace is creating a framework that the community uses '
                  'to solve NLP tasks.</s>',
      'token': 7208,
      'token_str': 'Ġframework'},
     {'score': 0.05243554711341858,
      'sequence': '<s>HuggingFace is creating a library that the community uses to '
                  'solve NLP tasks.</s>',
      'token': 5560,
      'token_str': 'Ġlibrary'},
     {'score': 0.03493533283472061,
      'sequence': '<s>HuggingFace is creating a database that the community uses '
                  'to solve NLP tasks.</s>',
      'token': 8503,
      'token_str': 'Ġdatabase'},
     {'score': 0.02860250137746334,
      'sequence': '<s>HuggingFace is creating a prototype that the community uses '
                  'to solve NLP tasks.</s>',
      'token': 17715,
      'token_str': 'Ġprototype'}]

Here is an example of doing masked language modeling using a model and a tokenizer. The process is the following:

1. Instantiate a tokenizer and a model from the checkpoint name. The model is identified as a DistilBERT model and is
   loaded with the weights stored in the checkpoint.
2. Define a sequence with a masked token, placing the :obj:`tokenizer.mask_token` instead of a word.
3. Encode that sequence into a list of IDs and find the position of the masked token in that list.
4. Retrieve the predictions at the index of the mask token: this tensor has the same size as the vocabulary, and the
   values are the scores attributed to each token. The model gives a higher score to tokens it deems probable in that
   context.
5. Retrieve the top 5 tokens using the PyTorch :obj:`topk` or TensorFlow :obj:`top_k` methods.
6. Replace the mask token by the tokens and print the results.

.. code-block::

    >>> ## PYTORCH CODE
    >>> from transformers import AutoModelWithLMHead, AutoTokenizer
    >>> import torch

    >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
    >>> model = AutoModelWithLMHead.from_pretrained("distilbert-base-cased", return_dict=True)

    >>> sequence = f"Distilled models are smaller than the models they mimic. Using them instead of the large versions would help {tokenizer.mask_token} our carbon footprint."

    >>> input = tokenizer.encode(sequence, return_tensors="pt")
    >>> mask_token_index = torch.where(input == tokenizer.mask_token_id)[1]

    >>> token_logits = model(input).logits
    >>> mask_token_logits = token_logits[0, mask_token_index, :]

    >>> top_5_tokens = torch.topk(mask_token_logits, 5, dim=1).indices[0].tolist()
    >>> ## TENSORFLOW CODE
    >>> from transformers import TFAutoModelWithLMHead, AutoTokenizer
    >>> import tensorflow as tf

    >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
    >>> model = TFAutoModelWithLMHead.from_pretrained("distilbert-base-cased", return_dict=True)

    >>> sequence = f"Distilled models are smaller than the models they mimic. Using them instead of the large versions would help {tokenizer.mask_token} our carbon footprint."

    >>> input = tokenizer.encode(sequence, return_tensors="tf")
    >>> mask_token_index = tf.where(input == tokenizer.mask_token_id)[0, 1]

    >>> token_logits = model(input)[0]
    >>> mask_token_logits = token_logits[0, mask_token_index, :]

    >>> top_5_tokens = tf.math.top_k(mask_token_logits, 5).indices.numpy()


This prints five sequences, with the top 5 tokens predicted by the model:

.. code-block::

    >>> for token in top_5_tokens:
    ...     print(sequence.replace(tokenizer.mask_token, tokenizer.decode([token])))
    Distilled models are smaller than the models they mimic. Using them instead of the large versions would help reduce our carbon footprint.
    Distilled models are smaller than the models they mimic. Using them instead of the large versions would help increase our carbon footprint.
    Distilled models are smaller than the models they mimic. Using them instead of the large versions would help decrease our carbon footprint.
    Distilled models are smaller than the models they mimic. Using them instead of the large versions would help offset our carbon footprint.
    Distilled models are smaller than the models they mimic. Using them instead of the large versions would help improve our carbon footprint.


Causal Language Modeling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Causal language modeling is the task of predicting the token following a sequence of tokens. In this situation, the
model only attends to the left context (the tokens to the left of the position being predicted). Such a training is
particularly interesting for generation tasks.

Usually, the next token is predicted by sampling from the logits of the last hidden state the model produces from the
input sequence.

Here is an example of using the tokenizer and model and leveraging the
:func:`~transformers.top_k_top_p_filtering` method to sample the next token following an input sequence of tokens.

.. code-block::

    >>> ## PYTORCH CODE
    >>> from transformers import AutoModelWithLMHead, AutoTokenizer, top_k_top_p_filtering
    >>> import torch
    >>> from torch.nn import functional as F

    >>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
    >>> model = AutoModelWithLMHead.from_pretrained("gpt2", return_dict=True)

    >>> sequence = "Hugging Face is based in DUMBO, New York City, and "

    >>> input_ids = tokenizer.encode(sequence, return_tensors="pt")

    >>> # get logits of last hidden state
    >>> next_token_logits = model(input_ids).logits[:, -1, :]

    >>> # filter
    >>> filtered_next_token_logits = top_k_top_p_filtering(next_token_logits, top_k=50, top_p=1.0)

    >>> # sample
    >>> probs = F.softmax(filtered_next_token_logits, dim=-1)
    >>> next_token = torch.multinomial(probs, num_samples=1)

    >>> generated = torch.cat([input_ids, next_token], dim=-1)

    >>> resulting_string = tokenizer.decode(generated.tolist()[0])
    >>> ## TENSORFLOW CODE
    >>> from transformers import TFAutoModelWithLMHead, AutoTokenizer, tf_top_k_top_p_filtering
    >>> import tensorflow as tf

    >>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
    >>> model = TFAutoModelWithLMHead.from_pretrained("gpt2", return_dict=True)

    >>> sequence = "Hugging Face is based in DUMBO, New York City, and "

    >>> input_ids = tokenizer.encode(sequence, return_tensors="tf")

    >>> # get logits of last hidden state
    >>> next_token_logits = model(input_ids)[0][:, -1, :]

    >>> # filter
    >>> filtered_next_token_logits = tf_top_k_top_p_filtering(next_token_logits, top_k=50, top_p=1.0)

    >>> # sample
    >>> next_token = tf.random.categorical(filtered_next_token_logits, dtype=tf.int32, num_samples=1)

    >>> generated = tf.concat([input_ids, next_token], axis=1)

    >>> resulting_string = tokenizer.decode(generated.numpy().tolist()[0])


This outputs a (hopefully) coherent next token following the original sequence, which in our case is the word *has*:

.. code-block::

    >>> print(resulting_string)
    Hugging Face is based in DUMBO, New York City, and has

In the next section, we show how this functionality is leveraged in :func:`~transformers.PreTrainedModel.generate` to
generate multiple tokens up to a user-defined length.
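
As a quick preview, here is a minimal sketch reusing the GPT-2 ``model``, ``tokenizer`` and ``input_ids`` from the
PyTorch snippet above:

.. code-block::

    >>> # generate() repeats the sample-next-token loop internally, up to max_length tokens.
    >>> generated = model.generate(input_ids, max_length=20, do_sample=True, top_k=50)
    >>> print(tokenizer.decode(generated.tolist()[0]))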

Text Generation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In text generation (*a.k.a.* *open-ended text generation*) the goal is to create a coherent portion of text that is a
continuation of the given context. The following example shows how *GPT-2* can be used in pipelines to generate text.
By default, all models apply *Top-K* sampling when used in pipelines, as configured in their respective configurations
(see the `gpt-2 config <https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json>`__ for example).

.. code-block::

    >>> from transformers import pipeline

    >>> text_generator = pipeline("text-generation")
    >>> print(text_generator("As far as I am concerned, I will", max_length=50, do_sample=False))
    [{'generated_text': 'As far as I am concerned, I will be the first to admit that I am not a fan of the idea of a "free market." I think that the idea of a free market is a bit of a stretch. I think that the idea'}]



Here, the model generates a random text with a total maximal length of *50* tokens from the context *"As far as I am
concerned, I will"*. The default arguments of ``PreTrainedModel.generate()`` can be directly overridden in the
pipeline, as is shown above for the argument ``max_length``.

Here is an example of text generation using ``XLNet`` and its tokenizer.

.. code-block::

    >>> ## PYTORCH CODE
    >>> from transformers import AutoModelWithLMHead, AutoTokenizer

    >>> model = AutoModelWithLMHead.from_pretrained("xlnet-base-cased", return_dict=True)
    >>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")

    >>> # Padding text helps XLNet with short prompts - proposed by Aman Rusia in https://github.com/rusiaaman/XLNet-gen#methodology
    >>> PADDING_TEXT = """In 1991, the remains of Russian Tsar Nicholas II and his family
    ... (except for Alexei and Maria) are discovered.
    ... The voice of Nicholas's young son, Tsarevich Alexei Nikolaevich, narrates the
    ... remainder of the story. 1883 Western Siberia,
    ... a young Grigori Rasputin is asked by his father and a group of men to perform magic.
    ... Rasputin has a vision and denounces one of the men as a horse thief. Although his
    ... father initially slaps him for making such an accusation, Rasputin watches as the
    ... man is chased outside and beaten. Twenty years later, Rasputin sees a vision of
    ... the Virgin Mary, prompting him to become a priest. Rasputin quickly becomes famous,
    ... with people, even a bishop, begging for his blessing. <eod> </s> <eos>"""

    >>> prompt = "Today the weather is really nice and I am planning on "
    >>> inputs = tokenizer.encode(PADDING_TEXT + prompt, add_special_tokens=False, return_tensors="pt")

    >>> prompt_length = len(tokenizer.decode(inputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
    >>> outputs = model.generate(inputs, max_length=250, do_sample=True, top_p=0.95, top_k=60)
    >>> generated = prompt + tokenizer.decode(outputs[0])[prompt_length:]

    >>> ## TENSORFLOW CODE
    >>> from transformers import TFAutoModelWithLMHead, AutoTokenizer

    >>> model = TFAutoModelWithLMHead.from_pretrained("xlnet-base-cased", return_dict=True)
    >>> tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")

    >>> # Padding text helps XLNet with short prompts - proposed by Aman Rusia in https://github.com/rusiaaman/XLNet-gen#methodology
    >>> PADDING_TEXT = """In 1991, the remains of Russian Tsar Nicholas II and his family
    ... (except for Alexei and Maria) are discovered.
    ... The voice of Nicholas's young son, Tsarevich Alexei Nikolaevich, narrates the
    ... remainder of the story. 1883 Western Siberia,
    ... a young Grigori Rasputin is asked by his father and a group of men to perform magic.
    ... Rasputin has a vision and denounces one of the men as a horse thief. Although his
    ... father initially slaps him for making such an accusation, Rasputin watches as the
    ... man is chased outside and beaten. Twenty years later, Rasputin sees a vision of
    ... the Virgin Mary, prompting him to become a priest. Rasputin quickly becomes famous,
    ... with people, even a bishop, begging for his blessing. <eod> </s> <eos>"""

    >>> prompt = "Today the weather is really nice and I am planning on "
    >>> inputs = tokenizer.encode(PADDING_TEXT + prompt, add_special_tokens=False, return_tensors="tf")

    >>> prompt_length = len(tokenizer.decode(inputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
    >>> outputs = model.generate(inputs, max_length=250, do_sample=True, top_p=0.95, top_k=60)
    >>> generated = prompt + tokenizer.decode(outputs[0])[prompt_length:]

.. code-block::

    >>> print(generated)
    Today the weather is really nice and I am planning on anning on taking a nice...... of a great time!<eop>...............

Text generation is currently possible with *GPT-2*, *OpenAI-GPT*, *CTRL*, *XLNet*, *Transfo-XL* and *Reformer* in
PyTorch and for most models in TensorFlow as well. As can be seen in the example above, *XLNet* and *Transfo-XL* often
need to be padded to work well. GPT-2 is usually a good choice for *open-ended text generation* because it was trained
on millions of webpages with a causal language modeling objective.

For more information on how to apply different decoding strategies for text generation, please also refer to our text
generation blog post `here <https://huggingface.co/blog/how-to-generate>`__.


Named Entity Recognition
-----------------------------------------------------------------------------------------------------------------------

Named Entity Recognition (NER) is the task of classifying tokens according to a class, for example, identifying a token
as a person, an organisation or a location. An example of a named entity recognition dataset is the CoNLL-2003 dataset,
which is entirely based on that task. If you would like to fine-tune a model on an NER task, you may leverage the
`run_ner.py <https://github.com/huggingface/transformers/tree/master/examples/token-classification/run_ner.py>`__
(PyTorch), `run_pl_ner.py
<https://github.com/huggingface/transformers/tree/master/examples/token-classification/run_pl_ner.py>`__ (leveraging
pytorch-lightning) or the `run_tf_ner.py
<https://github.com/huggingface/transformers/tree/master/examples/token-classification/run_tf_ner.py>`__ (TensorFlow)
scripts.

Here is an example of using pipelines to do named entity recognition, specifically, trying to identify tokens as
belonging to one of 9 classes:

- O, Outside of a named entity
- B-MISC, Beginning of a miscellaneous entity right after another miscellaneous entity
- I-MISC, Miscellaneous entity
- B-PER, Beginning of a person's name right after another person's name
- I-PER, Person's name
- B-ORG, Beginning of an organisation right after another organisation
- I-ORG, Organisation
- B-LOC, Beginning of a location right after another location
- I-LOC, Location

It leverages a model fine-tuned on CoNLL-2003 by `@stefan-it <https://github.com/stefan-it>`__ from `dbmdz
<https://github.com/dbmdz>`__.

.. code-block::

    >>> from transformers import pipeline

    >>> nlp = pipeline("ner")

    >>> sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \
    ...            "close to the Manhattan Bridge which is visible from the window."


This outputs a list of all words that have been identified as one of the entities from the 9 classes defined above.
Here are the expected results:

.. code-block::

    >>> print(nlp(sequence))
    [
        {'word': 'Hu', 'score': 0.9995632767677307, 'entity': 'I-ORG'},
        {'word': '##gging', 'score': 0.9915938973426819, 'entity': 'I-ORG'},
        {'word': 'Face', 'score': 0.9982671737670898, 'entity': 'I-ORG'},
        {'word': 'Inc', 'score': 0.9994403719902039, 'entity': 'I-ORG'},
        {'word': 'New', 'score': 0.9994346499443054, 'entity': 'I-LOC'},
        {'word': 'York', 'score': 0.9993270635604858, 'entity': 'I-LOC'},
        {'word': 'City', 'score': 0.9993864893913269, 'entity': 'I-LOC'},
        {'word': 'D', 'score': 0.9825621843338013, 'entity': 'I-LOC'},
        {'word': '##UM', 'score': 0.936983048915863, 'entity': 'I-LOC'},
        {'word': '##BO', 'score': 0.8987102508544922, 'entity': 'I-LOC'},
        {'word': 'Manhattan', 'score': 0.9758241176605225, 'entity': 'I-LOC'},
        {'word': 'Bridge', 'score': 0.990249514579773, 'entity': 'I-LOC'}
    ]

Note how the tokens of the sequence "Hugging Face" have been identified as an organisation, and "New York City",
"DUMBO" and "Manhattan Bridge" have been identified as locations.

Here is an example of doing named entity recognition using a model and a tokenizer. The process is the following:

1. Instantiate a tokenizer and a model from the checkpoint name. The model is identified as a BERT model and is loaded
   with the weights stored in the checkpoint.
2. Define the label list with which the model was trained.
3. Define a sequence with known entities, such as "Hugging Face" as an organisation and "New York City" as a location.
4. Split words into tokens so that they can be mapped to predictions. We use a small hack by first completely encoding
   and decoding the sequence, so that we're left with a string that contains the special tokens.
5. Encode that sequence into IDs (special tokens are added automatically).
6. Retrieve the predictions by passing the input to the model and getting the first output. This results in a
   distribution over the 9 possible classes for each token. We take the argmax to retrieve the most likely class for
   each token.
7. Zip together each token with its prediction and print it.

.. code-block::

    >>> ## PYTORCH CODE
    >>> from transformers import AutoModelForTokenClassification, AutoTokenizer
    >>> import torch

    >>> model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english", return_dict=True)
    >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

    >>> label_list = [
    ...     "O",       # Outside of a named entity
    ...     "B-MISC",  # Beginning of a miscellaneous entity right after another miscellaneous entity
    ...     "I-MISC",  # Miscellaneous entity
    ...     "B-PER",   # Beginning of a person's name right after another person's name
    ...     "I-PER",   # Person's name
    ...     "B-ORG",   # Beginning of an organisation right after another organisation
    ...     "I-ORG",   # Organisation
    ...     "B-LOC",   # Beginning of a location right after another location
    ...     "I-LOC"    # Location
    ... ]

    >>> sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \
    ...            "close to the Manhattan Bridge."

    >>> # Bit of a hack to get the tokens with the special tokens
    >>> tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))
    >>> inputs = tokenizer.encode(sequence, return_tensors="pt")

    >>> outputs = model(inputs).logits
    >>> predictions = torch.argmax(outputs, dim=2)
    >>> ## TENSORFLOW CODE
    >>> from transformers import TFAutoModelForTokenClassification, AutoTokenizer
    >>> import tensorflow as tf

    >>> model = TFAutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english", return_dict=True)
    >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

    >>> label_list = [
    ...     "O",       # Outside of a named entity
    ...     "B-MISC",  # Beginning of a miscellaneous entity right after another miscellaneous entity
    ...     "I-MISC",  # Miscellaneous entity
    ...     "B-PER",   # Beginning of a person's name right after another person's name
    ...     "I-PER",   # Person's name
    ...     "B-ORG",   # Beginning of an organisation right after another organisation
    ...     "I-ORG",   # Organisation
    ...     "B-LOC",   # Beginning of a location right after another location
    ...     "I-LOC"    # Location
    ... ]

    >>> sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \
    ...            "close to the Manhattan Bridge."

    >>> # Bit of a hack to get the tokens with the special tokens
    >>> tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))
    >>> inputs = tokenizer.encode(sequence, return_tensors="tf")

    >>> outputs = model(inputs)[0]
    >>> predictions = tf.argmax(outputs, axis=2)


This outputs a list of each token mapped to its corresponding prediction. Unlike the pipeline, here every token has a
prediction, as we didn't remove the "O" class, which means that no particular entity was found on that token. The
following array should be the output:

.. code-block::

    >>> print([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].numpy())])
    [('[CLS]', 'O'), ('Hu', 'I-ORG'), ('##gging', 'I-ORG'), ('Face', 'I-ORG'), ('Inc', 'I-ORG'), ('.', 'O'), ('is', 'O'), ('a', 'O'), ('company', 'O'), ('based', 'O'), ('in', 'O'), ('New', 'I-LOC'), ('York', 'I-LOC'), ('City', 'I-LOC'), ('.', 'O'), ('Its', 'O'), ('headquarters', 'O'), ('are', 'O'), ('in', 'O'), ('D', 'I-LOC'), ('##UM', 'I-LOC'), ('##BO', 'I-LOC'), (',', 'O'), ('therefore', 'O'), ('very', 'O'), ('##c', 'O'), ('##lose', 'O'), ('to', 'O'), ('the', 'O'), ('Manhattan', 'I-LOC'), ('Bridge', 'I-LOC'), ('.', 'O'), ('[SEP]', 'O')]

Summarization
-----------------------------------------------------------------------------------------------------------------------

Summarization is the task of summarizing a document or an article into a shorter text.

An example of a summarization dataset is the CNN / Daily Mail dataset, which consists of long news articles and was
created for the task of summarization. If you would like to fine-tune a model on a summarization task, various
approaches are described in this `document
<https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md>`__.

Here is an example of using the pipelines to do summarization. It leverages a Bart model that was fine-tuned on the CNN
/ Daily Mail dataset.

.. code-block::

    >>> from transformers import pipeline

    >>> summarizer = pipeline("summarization")

    >>> ARTICLE = """ New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York.
    ... A year later, she got married again in Westchester County, but to a different man and without divorcing her first husband.
    ... Only 18 days after that marriage, she got hitched yet again. Then, Barrientos declared "I do" five more times, sometimes only within two weeks of each other.
    ... In 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her "first and only" marriage.
    ... Barrientos, now 39, is facing two criminal counts of "offering a false instrument for filing in the first degree," referring to her false statements on the
    ... 2010 marriage license application, according to court documents.
    ... Prosecutors said the marriages were part of an immigration scam.
    ... On Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further.
    ... After leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective
    ... Annette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002.
    ... All occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say.
    ... Prosecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages.
    ... Any divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted.
    ... The case was referred to the Bronx District Attorney\'s Office by Immigration and Customs Enforcement and the Department of Homeland Security\'s
    ... Investigation Division. Seven of the men are from so-called "red-flagged" countries, including Egypt, Turkey, Georgia, Pakistan and Mali.
    ... Her eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force.
    ... If convicted, Barrientos faces up to four years in prison.  Her next court appearance is scheduled for May 18.
    ... """

Because the summarization pipeline depends on the ``PreTrainedModel.generate()`` method, we can override the default
arguments of ``PreTrainedModel.generate()`` directly in the pipeline for ``max_length`` and ``min_length`` as shown
below. This outputs the following summary:

.. code-block::

    >>> print(summarizer(ARTICLE, max_length=130, min_length=30, do_sample=False))
    [{'summary_text': 'Liana Barrientos, 39, is charged with two counts of "offering a false instrument for filing in the first degree" In total, she has been married 10 times, with nine of her marriages occurring between 1999 and 2002. She is believed to still be married to four men.'}]

Here is an example of doing summarization using a model and a tokenizer. The process is the following:

1. Instantiate a tokenizer and a model from the checkpoint name. Summarization is usually done using an encoder-decoder
   model, such as ``Bart`` or ``T5``.
2. Define the article that should be summarized.
3. Add the T5 specific prefix "summarize: ".
4. Use the ``PreTrainedModel.generate()`` method to generate the summary.

In this example we use Google's T5 model. Even though it was pre-trained only on a multi-task mixed dataset (including
CNN / Daily Mail), it yields very good results.

.. code-block::

    >>> ## PYTORCH CODE
    >>> from transformers import AutoModelWithLMHead, AutoTokenizer

    >>> model = AutoModelWithLMHead.from_pretrained("t5-base", return_dict=True)
    >>> tokenizer = AutoTokenizer.from_pretrained("t5-base")

    >>> # T5 uses a max_length of 512 so we cut the article to 512 tokens.
    >>> inputs = tokenizer.encode("summarize: " + ARTICLE, return_tensors="pt", max_length=512)
    >>> outputs = model.generate(inputs, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True)
    >>> ## TENSORFLOW CODE
    >>> from transformers import TFAutoModelWithLMHead, AutoTokenizer

    >>> model = TFAutoModelWithLMHead.from_pretrained("t5-base", return_dict=True)
    >>> tokenizer = AutoTokenizer.from_pretrained("t5-base")

    >>> # T5 uses a max_length of 512 so we cut the article to 512 tokens.
    >>> inputs = tokenizer.encode("summarize: " + ARTICLE, return_tensors="tf", max_length=512)
    >>> outputs = model.generate(inputs, max_length=150, min_length=40, length_penalty=2.0, num_beams=4, early_stopping=True)
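
To inspect the summary, decode the generated token ids back into text (a minimal sketch, identical for both
frameworks):

.. code-block::

    >>> # Decode the generated ids into a readable summary, dropping special tokens.
    >>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))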

Translation
-----------------------------------------------------------------------------------------------------------------------

Translation is the task of translating a text from one language to another.

An example of a translation dataset is the WMT English to German dataset, which has sentences in English as the input
data and the corresponding sentences in German as the target data. If you would like to fine-tune a model on a
translation task, various approaches are described in this `document
<https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md>`__.

Here is an example of using the pipelines to do translation. It leverages a T5 model that was only pre-trained on a
multi-task mixture dataset (including WMT), yet it yields impressive translation results.

.. code-block::

    >>> from transformers import pipeline

    >>> translator = pipeline("translation_en_to_de")
    >>> print(translator("Hugging Face is a technology company based in New York and Paris", max_length=40))
    [{'translation_text': 'Hugging Face ist ein Technologieunternehmen mit Sitz in New York und Paris.'}]

Because the translation pipeline depends on the ``PreTrainedModel.generate()`` method, we can override the default
arguments of ``PreTrainedModel.generate()`` directly in the pipeline as is shown for ``max_length`` above.

Here is an example of doing translation using a model and a tokenizer. The process is the following:

1. Instantiate a tokenizer and a model from the checkpoint name. Translation is usually done using an encoder-decoder
   model, such as ``Bart`` or ``T5``.
2. Define the text that should be translated.
3. Add the T5 specific prefix "translate English to German: ".
4. Use the ``PreTrainedModel.generate()`` method to perform the translation.

.. code-block::

    >>> ## PYTORCH CODE
    >>> from transformers import AutoModelWithLMHead, AutoTokenizer

    >>> model = AutoModelWithLMHead.from_pretrained("t5-base", return_dict=True)
    >>> tokenizer = AutoTokenizer.from_pretrained("t5-base")

    >>> inputs = tokenizer.encode("translate English to German: Hugging Face is a technology company based in New York and Paris", return_tensors="pt")
    >>> outputs = model.generate(inputs, max_length=40, num_beams=4, early_stopping=True)
    >>> ## TENSORFLOW CODE
    >>> from transformers import TFAutoModelWithLMHead, AutoTokenizer

    >>> model = TFAutoModelWithLMHead.from_pretrained("t5-base", return_dict=True)
    >>> tokenizer = AutoTokenizer.from_pretrained("t5-base")

    >>> inputs = tokenizer.encode("translate English to German: Hugging Face is a technology company based in New York and Paris", return_tensors="tf")
    >>> outputs = model.generate(inputs, max_length=40, num_beams=4, early_stopping=True)

As with the pipeline example, we get the same translation:

.. code-block::

    >>> print(tokenizer.decode(outputs[0]))
    Hugging Face ist ein Technologieunternehmen mit Sitz in New York und Paris.