<!---
Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

## Translation

This directory contains examples for fine-tuning and evaluating transformers on translation tasks.
Please tag @patil-suraj with any issues/unexpected behaviors, or send a PR!
For deprecated `bertabs` instructions, see [`bertabs/README.md`](https://github.com/huggingface/transformers/blob/master/examples/research_projects/bertabs/README.md).
For the old `finetune_trainer.py` and related utils, see [`examples/legacy/seq2seq`](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq).

### Supported Architectures

- `BartForConditionalGeneration`
- `FSMTForConditionalGeneration` (translation only)
- `MBartForConditionalGeneration`
- `MarianMTModel`
- `PegasusForConditionalGeneration`
- `T5ForConditionalGeneration`
- `MT5ForConditionalGeneration`

`run_translation.py` is a lightweight example of how to download and preprocess a dataset from the [🤗 Datasets](https://github.com/huggingface/datasets) library or use your own files (jsonlines or csv), then fine-tune one of the architectures above on it.

For custom datasets in `jsonlines` format please see: https://huggingface.co/docs/datasets/loading_datasets.html#json-files. You will also find examples of these below.

## With Trainer

Here is an example of fine-tuning a MarianMT model on a translation task:

```bash
python examples/pytorch/translation/run_translation.py \
    --model_name_or_path Helsinki-NLP/opus-mt-en-ro \
    --do_train \
    --do_eval \
    --source_lang en \
    --target_lang ro \
    --dataset_name wmt16 \
    --dataset_config_name ro-en \
    --output_dir /tmp/tst-translation \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

MBart and some T5 models require special handling.

T5 models `t5-small`, `t5-base`, `t5-large`, `t5-3b` and `t5-11b` must use an additional argument: `--source_prefix "translate {source_lang} to {target_lang}"`. For example:

```bash
python examples/pytorch/translation/run_translation.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --source_lang en \
    --target_lang ro \
    --source_prefix "translate English to Romanian: " \
    --dataset_name wmt16 \
    --dataset_config_name ro-en \
    --output_dir /tmp/tst-translation \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

If you get a terrible BLEU score, make sure that you didn't forget to use the `--source_prefix` argument.

For the aforementioned group of T5 models, remember that if you switch to a different language pair, you must adjust the values of all three language-specific command-line arguments: `--source_lang`, `--target_lang` and `--source_prefix`.
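
To keep those three arguments consistent, you could derive them all from a single (source, target) pair before building the command. A minimal sketch; the `LANG_NAMES` table and `build_t5_lang_args` helper are hypothetical, not part of `run_translation.py`:

```python
# Hypothetical helper: derive the three language-specific arguments from one
# (source, target) pair so --source_lang, --target_lang and --source_prefix
# cannot drift apart when you switch language pairs.
LANG_NAMES = {"en": "English", "ro": "Romanian", "de": "German"}

def build_t5_lang_args(source_lang, target_lang):
    prefix = f"translate {LANG_NAMES[source_lang]} to {LANG_NAMES[target_lang]}: "
    return [
        f"--source_lang {source_lang}",
        f"--target_lang {target_lang}",
        f'--source_prefix "{prefix}"',
    ]

print(build_t5_lang_args("en", "ro"))
```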

MBart models require a different format for the `--source_lang` and `--target_lang` values: e.g. instead of `en` it expects `en_XX`, and for `ro` it expects `ro_RO`. The full MBart specification for language codes can be found [here](https://huggingface.co/facebook/mbart-large-cc25). For example:

```bash
python examples/pytorch/translation/run_translation.py \
    --model_name_or_path facebook/mbart-large-en-ro \
    --do_train \
    --do_eval \
    --dataset_name wmt16 \
    --dataset_config_name ro-en \
    --source_lang en_XX \
    --target_lang ro_RO \
    --output_dir /tmp/tst-translation \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
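
Programmatically, the plain ISO codes can be mapped to the MBart format before filling in the arguments. A minimal sketch; only `en_XX` and `ro_RO` are taken from this document, and the full table lives in the model card linked above:

```python
# Partial mapping from plain ISO codes to MBart language codes.
# Only the pair used in this README (en, ro) is shown; consult the
# facebook/mbart-large-cc25 model card for the complete list.
MBART_LANG_CODE = {"en": "en_XX", "ro": "ro_RO"}

def to_mbart_lang_args(source_lang, target_lang):
    # Translate e.g. ("en", "ro") into the argument values MBart expects.
    return [
        f"--source_lang {MBART_LANG_CODE[source_lang]}",
        f"--target_lang {MBART_LANG_CODE[target_lang]}",
    ]

print(to_mbart_lang_args("en", "ro"))
```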

And here is how you would fine-tune for translation on your own files, after adjusting the
values of the `--train_file` and `--validation_file` arguments to match your setup:

```bash
python examples/pytorch/translation/run_translation.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --source_lang en \
    --target_lang ro \
    --source_prefix "translate English to Romanian: " \
    --dataset_name wmt16 \
    --dataset_config_name ro-en \
    --train_file path_to_jsonlines_file \
    --validation_file path_to_jsonlines_file \
    --output_dir /tmp/tst-translation \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

The translation task supports only custom JSON Lines files, with each line being a dictionary with a `"translation"` key whose value is another dictionary keyed by the language pair. For example:

```json
{ "translation": { "en": "Others have dismissed him as a joke.", "ro": "Alții l-au numit o glumă." } }
{ "translation": { "en": "And some are holding out for an implosion.", "ro": "Iar alții așteaptă implozia." } }
```
Here the languages are Romanian (`ro`) and English (`en`).
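
To generate such a file from parallel lists of sentences, here is a minimal sketch using only the standard library; the `train.json` file name and the sentence pairs are illustrative:

```python
import json

# Illustrative parallel sentences; in practice these come from your own corpus.
pairs = [
    ("Others have dismissed him as a joke.", "Alții l-au numit o glumă."),
    ("And some are holding out for an implosion.", "Iar alții așteaptă implozia."),
]

# Write one JSON object per line, under the "translation" key the script expects.
with open("train.json", "w", encoding="utf-8") as f:
    for en, ro in pairs:
        record = {"translation": {"en": en, "ro": ro}}
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

The resulting file can then be passed via `--train_file` / `--validation_file` as shown above.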

If you want to use a pre-processed dataset that leads to high BLEU scores, but for the `en-de` language pair, you can use `--dataset_name stas/wmt14-en-de-pre-processed`, as follows:

```bash
python examples/pytorch/translation/run_translation.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --source_lang en \
    --target_lang de \
    --source_prefix "translate English to German: " \
    --dataset_name stas/wmt14-en-de-pre-processed \
    --output_dir /tmp/tst-translation \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

## With Accelerate

Based on the script [`run_translation_no_trainer.py`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/run_translation_no_trainer.py).

Like `run_translation.py`, this script allows you to fine-tune any of the supported models on a
translation task. The main difference is that this script exposes the bare training loop, so you
can quickly experiment and add any customizations you would like.

It offers fewer options than the `Trainer` script (for instance, you can easily change the options for the
optimizer or the dataloaders directly in the script), but it still runs in a distributed setup, on TPU, and
supports mixed precision by means of the [🤗 Accelerate](https://github.com/huggingface/accelerate) library.
You can use the script normally after installing it:

```bash
pip install accelerate
```

then

```bash
python run_translation_no_trainer.py \
    --model_name_or_path Helsinki-NLP/opus-mt-en-ro \
    --source_lang en \
    --target_lang ro \
    --dataset_name wmt16 \
    --dataset_config_name ro-en \
    --output_dir ~/tmp/tst-translation
```

You can then use your usual launchers to run it in a distributed environment, but the easiest way is to run

```bash
accelerate config
```

and reply to the questions asked. Then

```bash
accelerate test
```

which will check that everything is ready for training. Finally, you can launch training with

```bash
accelerate launch run_translation_no_trainer.py \
    --model_name_or_path Helsinki-NLP/opus-mt-en-ro \
    --source_lang en \
    --target_lang ro \
    --dataset_name wmt16 \
    --dataset_config_name ro-en \
    --output_dir ~/tmp/tst-translation
```

This command is the same and will work for:

- a CPU-only setup
- a setup with one GPU
- a distributed training with several GPUs (single or multi node)
- a training on TPUs

Note that this library is in alpha release, so your feedback is more than welcome if you encounter any problems using it.