<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Translation

[[open-in-colab]]

<Youtube id="1JvfrvZgi6c"/>

Translation converts a sequence of text from one language to another. It is one of several tasks you can formulate as a sequence-to-sequence problem, a powerful framework for returning some output from an input, like translation or summarization. Translation systems are commonly used for translation between texts in different languages, but they can also be used for speech, or some combination in between, like text-to-speech or speech-to-text.

This guide will show you how to:

1. Finetune [T5](https://huggingface.co/t5-small) on the English-French subset of the [OPUS Books](https://huggingface.co/datasets/opus_books) dataset to translate English text to French.
2. Use your finetuned model for inference.

<Tip>
The task illustrated in this tutorial is supported by the following model architectures:

<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->

[BART](../model_doc/bart), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [Encoder decoder](../model_doc/encoder-decoder), [FairSeq Machine-Translation](../model_doc/fsmt), [GPTSAN-japanese](../model_doc/gptsan-japanese), [LED](../model_doc/led), [LongT5](../model_doc/longt5), [M2M100](../model_doc/m2m_100), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [NLLB](../model_doc/nllb), [NLLB-MOE](../model_doc/nllb-moe), [Pegasus](../model_doc/pegasus), [PEGASUS-X](../model_doc/pegasus_x), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [SwitchTransformers](../model_doc/switch_transformers), [T5](../model_doc/t5), [UMT5](../model_doc/umt5), [XLM-ProphetNet](../model_doc/xlm-prophetnet)

<!--End of the generated tip-->

</Tip>

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate sacrebleu
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```
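
If you're working from a terminal instead of a notebook, you can log in with the Hugging Face CLI instead:

```bash
huggingface-cli login
```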

## Load OPUS Books dataset

Start by loading the English-French subset of the [OPUS Books](https://huggingface.co/datasets/opus_books) dataset from the 🤗 Datasets library:

```py
>>> from datasets import load_dataset

>>> books = load_dataset("opus_books", "en-fr")
```

Split the dataset into a train and test set with the [`~datasets.Dataset.train_test_split`] method:

```py
>>> books = books["train"].train_test_split(test_size=0.2)
```

Then take a look at an example:

```py
>>> books["train"][0]
{'id': '90560',
 'translation': {'en': 'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.',
  'fr': 'Mais ce plateau élevé ne mesurait que quelques toises, et bientôt nous fûmes rentrés dans notre élément.'}}
```

`translation`: an English and French translation of the text.
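
Each example stores the sentence pair in a nested dictionary keyed by language code, so you can pull out either side directly. A quick sketch, reusing the example above:

```py
>>> books["train"][0]["translation"]["en"]
'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.'
```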

## Preprocess

<Youtube id="XAR8jnZZuUs"/>

The next step is to load a T5 tokenizer to process the English-French language pairs:

```py
>>> from transformers import AutoTokenizer

>>> checkpoint = "t5-small"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
```

The preprocessing function you want to create needs to:

1. Prefix the input with a prompt so T5 knows this is a translation task. Some models capable of multiple NLP tasks require prompting for specific tasks.
2. Tokenize the input (English) and target (French) separately because you can't tokenize French text with a tokenizer pretrained on an English vocabulary.
3. Truncate sequences to be no longer than the maximum length set by the `max_length` parameter.

```py
>>> source_lang = "en"
>>> target_lang = "fr"
>>> prefix = "translate English to French: "


>>> def preprocess_function(examples):
...     inputs = [prefix + example[source_lang] for example in examples["translation"]]
...     targets = [example[target_lang] for example in examples["translation"]]
...     model_inputs = tokenizer(inputs, text_target=targets, max_length=128, truncation=True)
...     return model_inputs
```
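
To sanity-check the function before mapping it over the whole dataset, you can run it on a small slice and inspect the returned keys; the `labels` field holds the tokenized French targets:

```py
>>> preprocess_function(books["train"][:2]).keys()
dict_keys(['input_ids', 'attention_mask', 'labels'])
```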

To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once:

```py
>>> tokenized_books = books.map(preprocess_function, batched=True)
```

Now create a batch of examples using [`DataCollatorForSeq2Seq`]. It's more efficient to *dynamically pad* the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

<frameworkcontent>
<pt>
```py
>>> from transformers import DataCollatorForSeq2Seq

>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint)
```
</pt>
<tf>

```py
>>> from transformers import DataCollatorForSeq2Seq

>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf")
```
</tf>
</frameworkcontent>
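
To see the dynamic padding in action, you can collate a couple of tokenized examples and check the padded shape. A minimal sketch, keeping only the fields the model expects (the raw `id` and `translation` columns would trip up the collator):

```py
>>> features = [
...     {k: tokenized_books["train"][i][k] for k in ["input_ids", "attention_mask", "labels"]}
...     for i in range(2)
... ]
>>> batch = data_collator(features)
>>> batch["input_ids"].shape  # both sequences are padded to the longest one in this batch
```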

## Evaluate

Including a metric during training is often helpful for evaluating your model's performance. You can quickly load an evaluation method with the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library. For this task, load the [SacreBLEU](https://huggingface.co/spaces/evaluate-metric/sacrebleu) metric (see the 🤗 Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour) to learn more about how to load and compute a metric):

```py
>>> import evaluate

>>> metric = evaluate.load("sacrebleu")
```
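
To get a feel for the metric before wiring it into training, you can score a toy prediction against a reference; a perfect match scores 100:

```py
>>> metric.compute(
...     predictions=["the cat sat on the mat"],
...     references=[["the cat sat on the mat"]],
... )["score"]
100.0
```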

Then create a function that passes your predictions and labels to [`~evaluate.EvaluationModule.compute`] to calculate the SacreBLEU score:

```py
>>> import numpy as np


>>> def postprocess_text(preds, labels):
...     preds = [pred.strip() for pred in preds]
...     labels = [[label.strip()] for label in labels]

...     return preds, labels


>>> def compute_metrics(eval_preds):
...     preds, labels = eval_preds
...     if isinstance(preds, tuple):
...         preds = preds[0]
...     decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)

...     labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
...     decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

...     decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)

...     result = metric.compute(predictions=decoded_preds, references=decoded_labels)
...     result = {"bleu": result["score"]}

...     prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
...     result["gen_len"] = np.mean(prediction_lens)
...     result = {k: round(v, 4) for k, v in result.items()}
...     return result
```

Your `compute_metrics` function is ready to go now, and you'll return to it when you set up your training.

## Train

<frameworkcontent>
<pt>
<Tip>

If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

</Tip>

You're ready to start training your model now! Load T5 with [`AutoModelForSeq2SeqLM`]:

```py
>>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer

>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```

At this point, only three steps remain:

1. Define your training hyperparameters in [`Seq2SeqTrainingArguments`]. The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). At the end of each epoch, the [`Trainer`] will evaluate the SacreBLEU metric and save the training checkpoint.
2. Pass the training arguments to [`Seq2SeqTrainer`] along with the model, dataset, tokenizer, data collator, and `compute_metrics` function.
3. Call [`~Trainer.train`] to finetune your model.

```py
>>> training_args = Seq2SeqTrainingArguments(
...     output_dir="my_awesome_opus_books_model",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     weight_decay=0.01,
...     save_total_limit=3,
...     num_train_epochs=2,
...     predict_with_generate=True,
...     fp16=True,
...     push_to_hub=True,
... )

>>> trainer = Seq2SeqTrainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_books["train"],
...     eval_dataset=tokenized_books["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```
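
If you want a final SacreBLEU score on the test split after training, you can also call [`~transformers.Trainer.evaluate`] (the exact numbers depend on your run):

```py
>>> trainer.evaluate()
```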

Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:

```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>

If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!

</Tip>

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```py
>>> from transformers import AdamWeightDecay

>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```
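
If you'd rather use a learning rate schedule with warmup, 🤗 Transformers also provides [`create_optimizer`]; a sketch assuming the batch size of 16 and the 3 epochs used later in this guide:

```py
>>> from transformers import create_optimizer

>>> num_epochs = 3
>>> num_train_steps = (len(tokenized_books["train"]) // 16) * num_epochs
>>> optimizer, lr_schedule = create_optimizer(
...     init_lr=2e-5,
...     num_warmup_steps=0,
...     num_train_steps=num_train_steps,
...     weight_decay_rate=0.01,
... )
```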

Then you can load T5 with [`TFAutoModelForSeq2SeqLM`]:

```py
>>> from transformers import TFAutoModelForSeq2SeqLM

>>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```

Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_books["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_test_set = model.prepare_tf_dataset(
...     tokenized_books["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```
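
Each of these is a regular `tf.data.Dataset`, so you can inspect the batch structure (shapes and dtypes) before training if you like:

```py
>>> tf_train_set.element_spec
```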

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)  # No loss argument!
```

The last two things to set up before you start training are to compute the SacreBLEU metric from the predictions, and to provide a way to push your model to the Hub. Both are done by using [Keras callbacks](../main_classes/keras_callbacks).

Pass your `compute_metrics` function to [`~transformers.KerasMetricCallback`]:

```py
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set)
```

Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:

```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_opus_books_model",
...     tokenizer=tokenizer,
... )
```

Then bundle your callbacks together:

```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```

Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callbacks to finetune the model:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks)
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
</tf>
</frameworkcontent>

<Tip>

For a more in-depth example of how to finetune a model for translation, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb).

</Tip>

## Inference

Great, now that you've finetuned a model, you can use it for inference!

Come up with some text you'd like to translate to another language. For T5, you need to prefix your input depending on the task you're working on. For translation from English to French, you should prefix your input as shown below:

```py
>>> text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria."
```

The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for translation with your model, and pass your text to it:

```py
>>> from transformers import pipeline

>>> translator = pipeline("translation", model="my_awesome_opus_books_model")
>>> translator(text)
[{'translation_text': 'Legumes partagent des ressources avec des bactéries azotantes.'}]
```
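
The pipeline passes extra keyword arguments through to the model's `generate` method, so you can, for example, cap the output length (the value here is illustrative):

```py
>>> translator(text, max_length=60)
```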

You can also manually replicate the results of the `pipeline` if you'd like:

<frameworkcontent>
<pt>
Tokenize the text and return the `input_ids` as PyTorch tensors:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model")
>>> inputs = tokenizer(text, return_tensors="pt").input_ids
```

Use the [`~transformers.generation_utils.GenerationMixin.generate`] method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.

```py
>>> from transformers import AutoModelForSeq2SeqLM

>>> model = AutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)
```

Decode the generated token ids back into text:

```py
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
"Les lignées partagent des ressources avec des bactéries enfixant l'azote."
```
</pt>
<tf>
Tokenize the text and return the `input_ids` as TensorFlow tensors:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model")
>>> inputs = tokenizer(text, return_tensors="tf").input_ids
```

Use the [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] method to create the translation. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text Generation](../main_classes/text_generation) API.

```py
>>> from transformers import TFAutoModelForSeq2SeqLM

>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)
```

Decode the generated token ids back into text:

```py
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
"Les lugumes partagent les ressources avec des bactéries fixatrices d'azote."
```
</tf>
</frameworkcontent>