"include/ck/config.hpp" did not exist on "d075adf12642815a0755823b4d268766a6c2346c"
README.md 13.2 KB
Newer Older
1
# Neural Machine Translation

## Pre-trained models

Model | Description | Dataset | Download
---|---|---|---
`conv.wmt14.en-fr` | Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2) <br> newstest2012/2013: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.ntst1213.tar.bz2)
`conv.wmt14.en-de` | Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-German](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-de.fconv-py.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-de.newstest2014.tar.bz2)
`conv.wmt17.en-de` | Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT17 English-German](http://statmt.org/wmt17/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt17.v2.en-de.fconv-py.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.v2.en-de.newstest2014.tar.bz2)
`transformer.wmt14.en-fr` | Transformer <br> ([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-fr.joined-dict.transformer.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2)
`transformer.wmt16.en-de` | Transformer <br> ([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt16.en-de.joined-dict.transformer.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
`transformer.wmt18.en-de` | Transformer <br> ([Edunov et al., 2018](https://arxiv.org/abs/1808.09381)) <br> WMT'18 winner | [WMT'18 English-German](http://www.statmt.org/wmt18/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt18.en-de.ensemble.tar.gz) <br> See NOTE in the archive
`transformer.wmt19.en-de` | Transformer <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) <br> WMT'19 winner | [WMT'19 English-German](http://www.statmt.org/wmt19/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.joined-dict.ensemble.tar.gz)
`transformer.wmt19.de-en` | Transformer <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) <br> WMT'19 winner | [WMT'19 German-English](http://www.statmt.org/wmt19/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.de-en.joined-dict.ensemble.tar.gz)
`transformer.wmt19.en-ru` | Transformer <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) <br> WMT'19 winner | [WMT'19 English-Russian](http://www.statmt.org/wmt19/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.ensemble.tar.gz)
`transformer.wmt19.ru-en` | Transformer <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) <br> WMT'19 winner | [WMT'19 Russian-English](http://www.statmt.org/wmt19/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.ru-en.ensemble.tar.gz)

## Example usage (torch.hub)

Interactive translation via PyTorch Hub:
```python
import torch
import fairseq  # needed for the isinstance check below

# List available models
torch.hub.list('pytorch/fairseq')  # [..., 'transformer.wmt16.en-de', ... ]

# Load a transformer trained on WMT'16 En-De
en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt16.en-de', tokenizer='moses', bpe='subword_nmt')

# The underlying model is available under the *models* attribute
assert isinstance(en2de.models[0], fairseq.models.transformer.TransformerModel)

# Translate a sentence
en2de.translate('Hello world!')
# 'Hallo Welt!'
```
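
The same hub interface can also translate batches of sentences. Below is a minimal sketch, assuming `translate` accepts a list of sentences and generation keyword arguments such as `beam` (as in recent fairseq releases); the GPU move is optional:
```python
# Hedged sketch: batch translation via the hub interface loaded above.
# Assumes `translate` accepts a list of sentences and a `beam` keyword.
en2de.cuda()  # optional: move the model to GPU for faster generation
en2de.translate(['Hello world!', 'The weather is nice today.'], beam=10)
# returns a list of German translations, one per input sentence
```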

## Example usage (CLI tools)

Generation with the binarized test sets can be run in batch mode as follows, e.g. for WMT 2014 English-French on a GTX-1080ti:
```bash
mkdir -p data-bin
curl https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2 | tar xvjf - -C data-bin
curl https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2 | tar xvjf - -C data-bin
fairseq-generate data-bin/wmt14.en-fr.newstest2014  \
    --path data-bin/wmt14.en-fr.fconv-py/model.pt \
    --beam 5 --batch-size 128 --remove-bpe | tee /tmp/gen.out
# ...
# | Translated 3003 sentences (96311 tokens) in 166.0s (580.04 tokens/s)
# | Generate test with beam=5: BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787)

# Compute BLEU score
grep ^H /tmp/gen.out | cut -f3- > /tmp/gen.out.sys
grep ^T /tmp/gen.out | cut -f2- > /tmp/gen.out.ref
fairseq-score --sys /tmp/gen.out.sys --ref /tmp/gen.out.ref
# BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787)
```

## Preprocessing

These scripts provide an example of pre-processing data for the NMT task.

### prepare-iwslt14.sh

Provides an example of pre-processing for the IWSLT'14 German to English translation task: ["Report on the 11th IWSLT evaluation campaign" by Cettolo et al.](http://workshop2014.iwslt.org/downloads/proceeding.pdf)

Example usage:
```bash
cd examples/translation/
bash prepare-iwslt14.sh
cd ../..

# Binarize the dataset:
TEXT=examples/translation/iwslt14.tokenized.de-en
fairseq-preprocess --source-lang de --target-lang en \
    --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
    --destdir data-bin/iwslt14.tokenized.de-en

# Train the model (better for a single GPU setup):
mkdir -p checkpoints/fconv
CUDA_VISIBLE_DEVICES=0 fairseq-train data-bin/iwslt14.tokenized.de-en \
    --lr 0.25 --clip-norm 0.1 --dropout 0.2 --max-tokens 4000 \
    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --lr-scheduler fixed --force-anneal 200 \
    --arch fconv_iwslt_de_en --save-dir checkpoints/fconv

# Generate:
fairseq-generate data-bin/iwslt14.tokenized.de-en \
    --path checkpoints/fconv/checkpoint_best.pt \
    --batch-size 128 --beam 5 --remove-bpe

```

To train a transformer model on IWSLT'14 German to English:
```bash
# Preparation steps are the same as for the fconv model.

# Train the model (better for a single GPU setup):
mkdir -p checkpoints/transformer
CUDA_VISIBLE_DEVICES=0 fairseq-train data-bin/iwslt14.tokenized.de-en \
    -a transformer_iwslt_de_en --optimizer adam --lr 0.0005 -s de -t en \
    --label-smoothing 0.1 --dropout 0.3 --max-tokens 4000 \
    --min-lr '1e-09' --lr-scheduler inverse_sqrt --weight-decay 0.0001 \
    --criterion label_smoothed_cross_entropy --max-update 50000 \
    --warmup-updates 4000 --warmup-init-lr '1e-07' \
    --adam-betas '(0.9, 0.98)' --save-dir checkpoints/transformer

# Average 10 latest checkpoints:
python scripts/average_checkpoints.py --inputs checkpoints/transformer \
    --num-epoch-checkpoints 10 --output checkpoints/transformer/model.pt

# Generate:
fairseq-generate data-bin/iwslt14.tokenized.de-en \
    --path checkpoints/transformer/model.pt \
    --batch-size 128 --beam 5 --remove-bpe
```

### prepare-wmt14en2de.sh

The WMT English to German dataset can be preprocessed using the `prepare-wmt14en2de.sh` script.
By default it will produce a dataset that was modeled after ["Attention Is All You Need" (Vaswani et al., 2017)](https://arxiv.org/abs/1706.03762), but with news-commentary-v12 data from WMT'17.

To use only data available in WMT'14 or to replicate results obtained in the original ["Convolutional Sequence to Sequence Learning" (Gehring et al., 2017)](https://arxiv.org/abs/1705.03122) paper, please use the `--icml17` option.

```bash
bash prepare-wmt14en2de.sh --icml17
```

Example usage:

```bash
cd examples/translation/
bash prepare-wmt14en2de.sh
cd ../..

# Binarize the dataset:
TEXT=examples/translation/wmt17_en_de
fairseq-preprocess --source-lang en --target-lang de \
    --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
    --destdir data-bin/wmt17_en_de --thresholdtgt 0 --thresholdsrc 0

# Train the model:
# If it runs out of memory, try to set --max-tokens 1500 instead
mkdir -p checkpoints/fconv_wmt_en_de
fairseq-train data-bin/wmt17_en_de \
    --lr 0.5 --clip-norm 0.1 --dropout 0.2 --max-tokens 4000 \
    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --lr-scheduler fixed --force-anneal 50 \
    --arch fconv_wmt_en_de --save-dir checkpoints/fconv_wmt_en_de

# Generate:
fairseq-generate data-bin/wmt17_en_de \
    --path checkpoints/fconv_wmt_en_de/checkpoint_best.pt --beam 5 --remove-bpe
```

### prepare-wmt14en2fr.sh

Provides an example of pre-processing for the WMT'14 English to French translation task.

Example usage:

```bash
cd examples/translation/
bash prepare-wmt14en2fr.sh
cd ../..

# Binarize the dataset:
TEXT=examples/translation/wmt14_en_fr
fairseq-preprocess --source-lang en --target-lang fr \
    --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
    --destdir data-bin/wmt14_en_fr --thresholdtgt 0 --thresholdsrc 0

# Train the model:
# If it runs out of memory, try to set --max-tokens 1000 instead
mkdir -p checkpoints/fconv_wmt_en_fr
fairseq-train data-bin/wmt14_en_fr \
    --lr 0.5 --clip-norm 0.1 --dropout 0.1 --max-tokens 3000 \
    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --lr-scheduler fixed --force-anneal 50 \
    --arch fconv_wmt_en_fr --save-dir checkpoints/fconv_wmt_en_fr

# Generate:
fairseq-generate data-bin/wmt14_en_fr \
    --path checkpoints/fconv_wmt_en_fr/checkpoint_best.pt --beam 5 --remove-bpe
```

## Multilingual Translation

We also support training multilingual translation models. In this example we'll
train a multilingual `{de,fr}-en` translation model using the IWSLT'17 datasets.

Note that we use slightly different preprocessing here than for the IWSLT'14
De-En data above. In particular, we learn a joint BPE code for all three
languages and use `fairseq-interactive` and sacrebleu for scoring the test set.

```bash
# First install sacrebleu and sentencepiece
pip install sacrebleu sentencepiece

# Then download and preprocess the data
cd examples/translation/
bash prepare-iwslt17-multilingual.sh
cd ../..

# Binarize the de-en dataset
TEXT=examples/translation/iwslt17.de_fr.en.bpe16k
fairseq-preprocess --source-lang de --target-lang en \
    --trainpref $TEXT/train.bpe.de-en --validpref $TEXT/valid.bpe.de-en \
    --joined-dictionary \
    --destdir data-bin/iwslt17.de_fr.en.bpe16k \
    --workers 10

# Binarize the fr-en dataset
# NOTE: it's important to reuse the en dictionary from the previous step
fairseq-preprocess --source-lang fr --target-lang en \
    --trainpref $TEXT/train.bpe.fr-en --validpref $TEXT/valid.bpe.fr-en \
    --joined-dictionary --tgtdict data-bin/iwslt17.de_fr.en.bpe16k/dict.en.txt \
    --destdir data-bin/iwslt17.de_fr.en.bpe16k \
    --workers 10

# Train a multilingual transformer model
# NOTE: the command below assumes 1 GPU, but accumulates gradients from
#       8 fwd/bwd passes to simulate training on 8 GPUs
mkdir -p checkpoints/multilingual_transformer
CUDA_VISIBLE_DEVICES=0 fairseq-train data-bin/iwslt17.de_fr.en.bpe16k/ \
    --max-epoch 50 \
    --ddp-backend=no_c10d \
    --task multilingual_translation --lang-pairs de-en,fr-en \
    --arch multilingual_transformer_iwslt_de_en \
    --share-decoders --share-decoder-input-output-embed \
    --optimizer adam --adam-betas '(0.9, 0.98)' \
    --lr 0.0005 --lr-scheduler inverse_sqrt --min-lr '1e-09' \
    --warmup-updates 4000 --warmup-init-lr '1e-07' \
    --label-smoothing 0.1 --criterion label_smoothed_cross_entropy \
    --dropout 0.3 --weight-decay 0.0001 \
    --save-dir checkpoints/multilingual_transformer \
    --max-tokens 4000 \
    --update-freq 8

# Generate and score the test set with sacrebleu
SRC=de
sacrebleu --test-set iwslt17 --language-pair ${SRC}-en --echo src \
    | python scripts/spm_encode.py --model examples/translation/iwslt17.de_fr.en.bpe16k/sentencepiece.bpe.model \
    > iwslt17.test.${SRC}-en.${SRC}.bpe
cat iwslt17.test.${SRC}-en.${SRC}.bpe \
    | fairseq-interactive data-bin/iwslt17.de_fr.en.bpe16k/ \
      --task multilingual_translation --source-lang ${SRC} --target-lang en \
      --path checkpoints/multilingual_transformer/checkpoint_best.pt \
      --buffer-size 2000 --batch-size 128 \
      --beam 5 --remove-bpe=sentencepiece \
    > iwslt17.test.${SRC}-en.en.sys
grep ^H iwslt17.test.${SRC}-en.en.sys | cut -f3 \
    | sacrebleu --test-set iwslt17 --language-pair ${SRC}-en
```

### Argument format during inference
During inference you must specify a single `--source-lang` and
`--target-lang`, which indicate the translation direction.
`--lang-pairs`, `--encoder-langtok` and `--decoder-langtok` must be set to
the same values that were used during training.
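
For example, decoding the binarized fr-en validation data with the multilingual checkpoint trained above might look like the following. This is a minimal sketch under the setup from the previous section; the choice of `--gen-subset valid` (only train/valid splits were binarized above) is an assumption:
```bash
# Hedged sketch: single-direction generation with the multilingual model.
# --lang-pairs must match the value used during training; --source-lang and
# --target-lang select the one direction to decode.
fairseq-generate data-bin/iwslt17.de_fr.en.bpe16k/ \
    --task multilingual_translation --lang-pairs de-en,fr-en \
    --source-lang fr --target-lang en \
    --gen-subset valid \
    --path checkpoints/multilingual_transformer/checkpoint_best.pt \
    --batch-size 128 --beam 5 --remove-bpe=sentencepiece
```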