# Neural Machine Translation

## Pre-trained models

Description | Dataset | Model | Test set(s)
---|---|---|---
Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2) | newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2) <br> newstest2012/2013: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.ntst1213.tar.bz2)
Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-German](http://statmt.org/wmt14/translation-task.html#Download) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-de.fconv-py.tar.bz2) | newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-de.newstest2014.tar.bz2)
Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT17 English-German](http://statmt.org/wmt17/translation-task.html#Download) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt17.v2.en-de.fconv-py.tar.bz2) | newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.v2.en-de.newstest2014.tar.bz2)
Transformer <br> ([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-fr.joined-dict.transformer.tar.bz2) | newstest2014 (shared vocab): <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2)
Transformer <br> ([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt16.en-de.joined-dict.transformer.tar.bz2) | newstest2014 (shared vocab): <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2)
Transformer <br> ([Edunov et al., 2018](https://arxiv.org/abs/1808.09381); WMT'18 winner) | [WMT'18 English-German](http://www.statmt.org/wmt18/translation-task.html) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt18.en-de.ensemble.tar.bz2) | See NOTE in the archive
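
A downloaded model can also be used for interactive translation from the command line. Below is a minimal sketch for the English-French model, assuming the input has already been tokenized and BPE-encoded with the codes shipped in the archive (`@@ ` marks a BPE continuation; the archive unpacks to `wmt14.en-fr.fconv-py`, which contains `model.pt` and the dictionaries):

```
$ curl https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2 | tar xvjf -
$ MODEL_DIR=wmt14.en-fr.fconv-py
$ echo 'Why is it rare to discover new marine mam@@ mal species ?' \
  | fairseq-interactive $MODEL_DIR \
    --path $MODEL_DIR/model.pt \
    --beam 5 --remove-bpe
```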

## Example usage

Generation with the binarized test sets can be run in batch mode as follows, e.g. for WMT 2014 English-French on a GTX-1080ti:
```
$ mkdir -p data-bin
$ curl https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2 | tar xvjf - -C data-bin
$ curl https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2 | tar xvjf - -C data-bin
$ fairseq-generate data-bin/wmt14.en-fr.newstest2014  \
  --path data-bin/wmt14.en-fr.fconv-py/model.pt \
  --beam 5 --batch-size 128 --remove-bpe | tee /tmp/gen.out
...
| Translated 3003 sentences (96311 tokens) in 166.0s (580.04 tokens/s)
| Generate test with beam=5: BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787)

# Compute BLEU score
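# (lines starting with H hold the system hypotheses; lines starting with T
#  hold the tokenized references)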
$ grep ^H /tmp/gen.out | cut -f3- > /tmp/gen.out.sys
$ grep ^T /tmp/gen.out | cut -f2- > /tmp/gen.out.ref
$ fairseq-score --sys /tmp/gen.out.sys --ref /tmp/gen.out.ref
BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787)
```

## Preprocessing

These scripts provide an example of pre-processing data for the NMT task.

### prepare-iwslt14.sh

Provides an example of pre-processing for the IWSLT'14 German to English translation task: ["Report on the 11th IWSLT evaluation campaign" by Cettolo et al.](http://workshop2014.iwslt.org/downloads/proceeding.pdf)

Example usage:
```
$ cd examples/translation/
$ bash prepare-iwslt14.sh
$ cd ../..

# Binarize the dataset:
$ TEXT=examples/translation/iwslt14.tokenized.de-en
$ fairseq-preprocess --source-lang de --target-lang en \
  --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
  --destdir data-bin/iwslt14.tokenized.de-en

# Train the model (better for a single GPU setup):
$ mkdir -p checkpoints/fconv
$ CUDA_VISIBLE_DEVICES=0 fairseq-train data-bin/iwslt14.tokenized.de-en \
  --lr 0.25 --clip-norm 0.1 --dropout 0.2 --max-tokens 4000 \
  --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
  --lr-scheduler fixed --force-anneal 200 \
  --arch fconv_iwslt_de_en --save-dir checkpoints/fconv

# Generate:
$ fairseq-generate data-bin/iwslt14.tokenized.de-en \
  --path checkpoints/fconv/checkpoint_best.pt \
  --batch-size 128 --beam 5 --remove-bpe

```

To train a Transformer model on IWSLT'14 German to English:
```
# Preparation steps are the same as for the fconv model above.

# Train the model (better for a single GPU setup):
$ mkdir -p checkpoints/transformer
$ CUDA_VISIBLE_DEVICES=0 fairseq-train data-bin/iwslt14.tokenized.de-en \
  -a transformer_iwslt_de_en --optimizer adam --lr 0.0005 -s de -t en \
  --label-smoothing 0.1 --dropout 0.3 --max-tokens 4000 \
  --min-lr '1e-09' --lr-scheduler inverse_sqrt --weight-decay 0.0001 \
  --criterion label_smoothed_cross_entropy --max-update 50000 \
  --warmup-updates 4000 --warmup-init-lr '1e-07' \
  --adam-betas '(0.9, 0.98)' --save-dir checkpoints/transformer

# Average 10 latest checkpoints:
$ python scripts/average_checkpoints.py --inputs checkpoints/transformer \
   --num-epoch-checkpoints 10 --output checkpoints/transformer/model.pt

# Generate:
$ fairseq-generate data-bin/iwslt14.tokenized.de-en \
  --path checkpoints/transformer/model.pt \
  --batch-size 128 --beam 5 --remove-bpe

```

### prepare-wmt14en2de.sh

The WMT English to German dataset can be preprocessed using the `prepare-wmt14en2de.sh` script.
By default it produces a dataset modeled after ["Attention Is All You Need" (Vaswani et al., 2017)](https://arxiv.org/abs/1706.03762), but using the news-commentary-v12 data from WMT'17.

To use only data available in WMT'14, or to replicate the results of the original ["Convolutional Sequence to Sequence Learning" (Gehring et al., 2017)](https://arxiv.org/abs/1705.03122) paper, use the `--icml17` option:

```
$ bash prepare-wmt14en2de.sh --icml17
```

Example usage:

```
$ cd examples/translation/
$ bash prepare-wmt14en2de.sh
$ cd ../..

# Binarize the dataset:
$ TEXT=examples/translation/wmt17_en_de
$ fairseq-preprocess --source-lang en --target-lang de \
  --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
  --destdir data-bin/wmt17_en_de --thresholdtgt 0 --thresholdsrc 0

# Train the model:
# If it runs out of memory, try to set --max-tokens 1500 instead
$ mkdir -p checkpoints/fconv_wmt_en_de
$ fairseq-train data-bin/wmt17_en_de \
  --lr 0.5 --clip-norm 0.1 --dropout 0.2 --max-tokens 4000 \
  --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
  --lr-scheduler fixed --force-anneal 50 \
  --arch fconv_wmt_en_de --save-dir checkpoints/fconv_wmt_en_de

# Generate:
$ fairseq-generate data-bin/wmt17_en_de \
  --path checkpoints/fconv_wmt_en_de/checkpoint_best.pt --beam 5 --remove-bpe

```

### prepare-wmt14en2fr.sh

Provides an example of pre-processing for the WMT'14 English to French translation task.

Example usage:

```
$ cd examples/translation/
$ bash prepare-wmt14en2fr.sh
$ cd ../..

# Binarize the dataset:
$ TEXT=examples/translation/wmt14_en_fr
$ fairseq-preprocess --source-lang en --target-lang fr \
  --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
  --destdir data-bin/wmt14_en_fr --thresholdtgt 0 --thresholdsrc 0

# Train the model:
# If it runs out of memory, try to set --max-tokens 1000 instead
$ mkdir -p checkpoints/fconv_wmt_en_fr
$ fairseq-train data-bin/wmt14_en_fr \
  --lr 0.5 --clip-norm 0.1 --dropout 0.1 --max-tokens 3000 \
  --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
  --lr-scheduler fixed --force-anneal 50 \
  --arch fconv_wmt_en_fr --save-dir checkpoints/fconv_wmt_en_fr

# Generate:
$ fairseq-generate data-bin/wmt14_en_fr \
  --path checkpoints/fconv_wmt_en_fr/checkpoint_best.pt --beam 5 --remove-bpe

```

## Multilingual Translation

We also support training multilingual translation models. In this example we'll
train a multilingual `{de,fr}-en` translation model using the IWSLT'17 datasets.

Note that we use slightly different preprocessing here than for the IWSLT'14
De-En data above. In particular we learn a joint BPE code for all three
languages and use `fairseq-interactive` and sacrebleu for scoring the test set.

```
# First install sacrebleu and sentencepiece
$ pip install sacrebleu sentencepiece

# Then download and preprocess the data
$ cd examples/translation/
$ bash prepare-iwslt17-multilingual.sh
$ cd ../..

# Binarize the de-en dataset
$ TEXT=examples/translation/iwslt17.de_fr.en.bpe16k
$ fairseq-preprocess --source-lang de --target-lang en \
  --trainpref $TEXT/train.bpe.de-en --validpref $TEXT/valid.bpe.de-en \
  --joined-dictionary \
  --destdir data-bin/iwslt17.de_fr.en.bpe16k \
  --workers 10

# Binarize the fr-en dataset
# NOTE: it's important to reuse the en dictionary from the previous step
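#       (reusing dict.en.txt keeps English token indices identical across both
#        binarized datasets, which the shared decoder below relies on)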
$ fairseq-preprocess --source-lang fr --target-lang en \
  --trainpref $TEXT/train.bpe.fr-en --validpref $TEXT/valid.bpe.fr-en \
  --joined-dictionary --tgtdict data-bin/iwslt17.de_fr.en.bpe16k/dict.en.txt \
  --destdir data-bin/iwslt17.de_fr.en.bpe16k \
  --workers 10

# Train a multilingual transformer model
# NOTE: the command below assumes 1 GPU, but accumulates gradients from
#       8 fwd/bwd passes to simulate training on 8 GPUs
$ mkdir -p checkpoints/multilingual_transformer
$ CUDA_VISIBLE_DEVICES=0 fairseq-train data-bin/iwslt17.de_fr.en.bpe16k/ \
  --max-epoch 50 \
  --ddp-backend=no_c10d \
  --task multilingual_translation --lang-pairs de-en,fr-en \
  --arch multilingual_transformer_iwslt_de_en \
  --share-decoders --share-decoder-input-output-embed \
  --optimizer adam --adam-betas '(0.9, 0.98)' \
  --lr 0.0005 --lr-scheduler inverse_sqrt --min-lr '1e-09' \
  --warmup-updates 4000 --warmup-init-lr '1e-07' \
  --label-smoothing 0.1 --criterion label_smoothed_cross_entropy \
  --dropout 0.3 --weight-decay 0.0001 \
  --save-dir checkpoints/multilingual_transformer \
  --max-tokens 4000 \
  --update-freq 8

# Generate and score the test set with sacrebleu
$ SRC=de
$ sacrebleu --test-set iwslt17 --language-pair ${SRC}-en --echo src \
  | python scripts/spm_encode.py --model examples/translation/iwslt17.de_fr.en.bpe16k/sentencepiece.bpe.model \
  > iwslt17.test.${SRC}-en.${SRC}.bpe
$ cat iwslt17.test.${SRC}-en.${SRC}.bpe | fairseq-interactive data-bin/iwslt17.de_fr.en.bpe16k/ \
  --task multilingual_translation --source-lang ${SRC} --target-lang en \
  --path checkpoints/multilingual_transformer/checkpoint_best.pt \
  --buffer 2000 --batch-size 128 \
  --beam 5 --remove-bpe=sentencepiece \
  > iwslt17.test.${SRC}-en.en.sys
$ grep ^H iwslt17.test.${SRC}-en.en.sys | cut -f3 \
  | sacrebleu --test-set iwslt17 --language-pair ${SRC}-en
```
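
Since the evaluation pipeline above is parameterized by `$SRC`, both translation directions can be scored back-to-back. A minimal sketch chaining the same commands in a loop (no new flags; the intermediate files are replaced by pipes):

```
$ for SRC in de fr; do
    sacrebleu --test-set iwslt17 --language-pair ${SRC}-en --echo src \
      | python scripts/spm_encode.py --model examples/translation/iwslt17.de_fr.en.bpe16k/sentencepiece.bpe.model \
      | fairseq-interactive data-bin/iwslt17.de_fr.en.bpe16k/ \
          --task multilingual_translation --source-lang ${SRC} --target-lang en \
          --path checkpoints/multilingual_transformer/checkpoint_best.pt \
          --buffer 2000 --batch-size 128 --beam 5 --remove-bpe=sentencepiece \
      | grep ^H | cut -f3 \
      | sacrebleu --test-set iwslt17 --language-pair ${SRC}-en
  done
```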