# Transformer Translation Model
This is an implementation of the Transformer translation model as described in
the [Attention is All You Need](https://arxiv.org/abs/1706.03762) paper. The
implementation leverages tf.keras and is compatible with TF 2.0.

## Contents
  * [Contents](#contents)
  * [Walkthrough](#walkthrough)
  * [Detailed instructions](#detailed-instructions)
    * [Environment preparation](#environment-preparation)
    * [Download and preprocess datasets](#download-and-preprocess-datasets)
    * [Model training and evaluation](#model-training-and-evaluation)
  * [Implementation overview](#implementation-overview)
    * [Model Definition](#model-definition)
    * [Model Trainer](#model-trainer)
    * [Test dataset](#test-dataset)

## Walkthrough

Below are the commands for running the Transformer model. See the
[Detailed instructions](#detailed-instructions) for more details on running the
model.

```shell
# Ensure that PYTHONPATH is correctly defined as described in
# https://github.com/tensorflow/models/tree/master/official#requirements
export PYTHONPATH="$PYTHONPATH:/path/to/models"

cd /path/to/models/official/transformer/v2

# Export variables
PARAM_SET=big
DATA_DIR=$HOME/transformer/data
MODEL_DIR=$HOME/transformer/model_$PARAM_SET
VOCAB_FILE=$DATA_DIR/vocab.ende.32768

# Download training/evaluation/test datasets
python3 data_download.py --data_dir=$DATA_DIR

# Train the model for 100000 steps and evaluate every 5000 steps on a single GPU.
# Each training step uses a batch budget of 4096 tokens, with a maximum
# sequence length of 64.
python3 transformer_main.py --data_dir=$DATA_DIR --model_dir=$MODEL_DIR \
    --vocab_file=$VOCAB_FILE --param_set=$PARAM_SET \
    --train_steps=100000 --steps_between_evals=5000 \
    --batch_size=4096 --max_length=64 \
    --bleu_source=$DATA_DIR/newstest2014.en \
    --bleu_ref=$DATA_DIR/newstest2014.de \
    --num_gpus=1 \
    --enable_time_history=false

# Run during training in a separate process to get continuous updates,
# or after training is complete.
tensorboard --logdir=$MODEL_DIR
```

## Detailed instructions


0. ### Environment preparation

   #### Add models repo to PYTHONPATH
   Follow the instructions described in the [Requirements](https://github.com/tensorflow/models/tree/master/official#requirements) section to add the models folder to the Python path.
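
   For example, matching the walkthrough above (`/path/to/models` is a placeholder for wherever the repository was cloned):

   ```shell
   export PYTHONPATH="$PYTHONPATH:/path/to/models"
   ```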

   #### Export variables (optional)

   Export the following variables, or modify the values in each of the snippets below:

   ```shell
   PARAM_SET=big
   DATA_DIR=$HOME/transformer/data
   MODEL_DIR=$HOME/transformer/model_$PARAM_SET
   VOCAB_FILE=$DATA_DIR/vocab.ende.32768
   ```

1. ### Download and preprocess datasets

   [data_download.py](data_download.py) downloads and preprocesses the training and evaluation WMT datasets. After the data is downloaded and extracted, the training data is used to generate a vocabulary of subtokens. The evaluation and training strings are tokenized, and the resulting data is sharded, shuffled, and saved as TFRecords.

   1.75GB of compressed data will be downloaded. In total, the raw files (compressed, extracted, and combined files) take up 8.4GB of disk space. The resulting TFRecord and vocabulary files are 722MB. The script takes around 40 minutes to run, with the bulk of the time spent downloading and ~15 minutes spent on preprocessing.

   Command to run:
   ```shell
   python3 data_download.py --data_dir=$DATA_DIR
   ```

   Arguments:
   * `--data_dir`: Path where the preprocessed TFRecord data and vocabulary file will be saved.
   * Use the `--help` or `-h` flag to get a full list of possible arguments.
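
   After the script finishes, you can sanity-check one of the generated shards. The snippet below is a minimal sketch: the shard filename is hypothetical, so list the contents of `$DATA_DIR` to find the real TFRecord names.

   ```python
   import tensorflow as tf

   # Hypothetical shard path; check $DATA_DIR for the actual file names.
   shard = "/path/to/transformer/data/wmt32k-train-00001-of-00100"
   for raw in tf.data.TFRecordDataset(shard).take(1):
       example = tf.train.Example.FromString(raw.numpy())
       # Printing the feature keys reveals the record schema without
       # assuming anything about it.
       print(list(example.features.feature.keys()))
   ```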

2. ### Model training and evaluation

   [transformer_main.py](v2/transformer_main.py) creates a Transformer Keras model,
   and trains it using Keras `model.fit()`.

   Users need to adjust `batch_size` and `num_gpus` to get good performance
   when running on multiple GPUs.

   **Note:** when using multiple GPUs or TPUs, `batch_size` is the global batch
   size across all devices. For example, if the batch size is `4096*4` and there
   are 4 devices, each device will take 4096 tokens as a batch budget.
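
   For example, a sketch that keeps the per-device budget at 4096 tokens across
   4 GPUs (reusing the variables exported earlier):

   ```shell
   # Global batch size of 4096 tokens/device * 4 devices = 16384.
   python3 transformer_main.py --data_dir=$DATA_DIR --model_dir=$MODEL_DIR \
       --vocab_file=$VOCAB_FILE --param_set=$PARAM_SET \
       --batch_size=16384 --num_gpus=4
   ```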

   Command to run:
   ```shell
   python3 transformer_main.py --data_dir=$DATA_DIR --model_dir=$MODEL_DIR \
       --vocab_file=$VOCAB_FILE --param_set=$PARAM_SET
   ```

   Arguments:
   * `--data_dir`: This should be set to the same directory given to `data_download.py`'s `--data_dir` argument.
   * `--model_dir`: Directory to save Transformer model training checkpoints.
   * `--vocab_file`: Path to subtoken vocabulary file. If `data_download.py` was used, the file can be found in `data_dir`.
   * `--param_set`: Parameter set to use when creating and training the model. Options are `base` and `big` (default).
   * `--enable_time_history`: Whether to add the TimeHistory callback. If enabled, `--log_steps` must also be specified.
   * `--batch_size`: The number of tokens to consider in a batch. Together with
     `--max_length`, it determines how many sequences are used per batch; for
     example, `--batch_size=4096` with `--max_length=64` fits roughly 64
     maximum-length sequences in each batch.
   * Use the `--help` or `-h` flag to get a full list of possible arguments.

    #### Using multiple GPUs
    You can train these models on multiple GPUs using `tf.distribute.Strategy` API.
    You can read more about them in this
    [guide](https://www.tensorflow.org/guide/distribute_strategy).

    In this example, we have made it easier to use with just a command-line flag
    `--num_gpus`. By default this flag is 1 if TensorFlow is compiled with CUDA,
    and 0 otherwise.

    - `--num_gpus=0`: Uses `tf.distribute.OneDeviceStrategy` with CPU as the device.
    - `--num_gpus=1`: Uses `tf.distribute.OneDeviceStrategy` with GPU as the device.
    - `--num_gpus=2+`: Uses `tf.distribute.MirroredStrategy` to run synchronous
    distributed training across the GPUs.
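
    To illustrate what `--num_gpus=2+` sets up, here is a minimal, generic
    `tf.distribute.MirroredStrategy` sketch with a toy model (not the repo's
    Transformer code):

    ```python
    import tensorflow as tf

    # Variables are mirrored on every visible GPU; gradients are
    # all-reduced after each step (synchronous training).
    strategy = tf.distribute.MirroredStrategy()
    print("Replicas in sync:", strategy.num_replicas_in_sync)

    with strategy.scope():
        # The model and optimizer must be created inside the scope so
        # their variables are placed on all replicas.
        model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
        model.compile(optimizer="adam", loss="mse")
    ```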

   #### Using TPUs

   Note: This model will **not** work with TPUs on Colab.

   You can train the Transformer model on Cloud TPUs using
   `tf.distribute.TPUStrategy`. If you are not familiar with Cloud TPUs, it is
   strongly recommended that you go through the
   [quickstart](https://cloud.google.com/tpu/docs/quickstart) to learn how to
   create a TPU and GCE VM.

   To run the Transformer model on a TPU, you must set
   `--distribution_strategy=tpu`, `--tpu=$TPU_NAME`, and `--use_ctl=True`, where
   `$TPU_NAME` is the name of your TPU in the Cloud Console.

   An example command to run Transformer on a v2-8 or v3-8 TPU would be:

   ```bash
   python transformer_main.py \
     --tpu=$TPU_NAME \
     --model_dir=$MODEL_DIR \
     --data_dir=$DATA_DIR \
     --vocab_file=$DATA_DIR/vocab.ende.32768 \
     --bleu_source=$DATA_DIR/newstest2014.en \
     --bleu_ref=$DATA_DIR/newstest2014.de \
     --batch_size=6144 \
     --train_steps=2000 \
     --static_batch=true \
     --use_ctl=true \
     --param_set=big \
     --max_length=64 \
     --decode_batch_size=32 \
     --decode_max_length=97 \
     --padded_decode=true \
     --distribution_strategy=tpu
   ```
   Note: `$MODEL_DIR` and `$DATA_DIR` must be GCS paths.

   #### Customizing training schedule

   By default, the model trains for 10 epochs and evaluates after every epoch. The training schedule may be adjusted through the following flags (a combined example follows the list):

   * Training with steps:
     * `--train_steps`: sets the total number of training steps to run.
     * `--steps_between_evals`: Number of training steps to run between evaluations.
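
   For example, a combined sketch reusing the variables exported earlier:

   ```shell
   python3 transformer_main.py --data_dir=$DATA_DIR --model_dir=$MODEL_DIR \
       --vocab_file=$VOCAB_FILE --param_set=$PARAM_SET \
       --train_steps=100000 --steps_between_evals=5000
   ```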

   #### Compute BLEU score during model evaluation

   Use these flags to compute the BLEU score when the model evaluates:

   * `--bleu_source`: Path to file containing text to translate.
   * `--bleu_ref`: Path to file containing the reference translation.

   When running `transformer_main.py`, use the flags: `--bleu_source=$DATA_DIR/newstest2014.en --bleu_ref=$DATA_DIR/newstest2014.de`
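
   For example, a full command with BLEU computation enabled (reusing the variables exported earlier):

   ```shell
   python3 transformer_main.py --data_dir=$DATA_DIR --model_dir=$MODEL_DIR \
       --vocab_file=$VOCAB_FILE --param_set=$PARAM_SET \
       --bleu_source=$DATA_DIR/newstest2014.en \
       --bleu_ref=$DATA_DIR/newstest2014.de
   ```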

   #### TensorBoard
   Training and evaluation metrics (loss, accuracy, approximate BLEU score, etc.) are logged, and can be displayed in the browser using TensorBoard.
   ```shell
   tensorboard --logdir=$MODEL_DIR
   ```
   The values are displayed at [localhost:6006](http://localhost:6006).

## Implementation overview

A brief look at each component in the code:

### Model Definition
* [transformer.py](v2/transformer.py): Defines a tf.keras.Model: `Transformer`.
* [embedding_layer.py](v2/embedding_layer.py): Contains the layer that calculates the embeddings. The embedding weights are also used to calculate the pre-softmax probabilities from the decoder output (see the weight-tying sketch after this list).
* [attention_layer.py](v2/attention_layer.py): Defines the multi-headed attention and self-attention layers that are used in the encoder/decoder stacks.
* [ffn_layer.py](v2/ffn_layer.py): Defines the feedforward network that is used in the encoder/decoder stacks. The network is composed of 2 fully connected layers.
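
The weight sharing in the embedding layer works roughly like the sketch below, a generic illustration of tied input/output embeddings rather than the repo's exact layer:

```python
import tensorflow as tf

class TiedEmbedding(tf.keras.layers.Layer):
    """Embeds token ids and reuses the same weights to produce logits."""

    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.shared = self.add_weight(
            name="shared_weights", shape=(vocab_size, hidden_size))

    def embed(self, ids):
        # (batch, length) -> (batch, length, hidden)
        return tf.gather(self.shared, ids)

    def linear(self, x):
        # (batch, length, hidden) -> (batch, length, vocab); reusing the
        # embedding matrix as the pre-softmax projection ties the weights.
        return tf.matmul(x, self.shared, transpose_b=True)
```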

Other files:
* [beam_search.py](v2/beam_search.py) contains the beam search implementation, which is used during model inference to find high-scoring translations; a generic sketch follows.
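
As a rough illustration of the idea (not the repo's batched TF implementation), here is a generic beam search over a caller-supplied scoring function:

```python
import heapq
import math

def beam_search(step_fn, start_token, end_token, beam_size, max_len):
    """step_fn(prefix) returns candidate (token, log_prob) pairs."""
    beams = [(0.0, [start_token])]  # (cumulative log-prob, token list)
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            for tok, logp in step_fn(seq):
                cand = (score + logp, seq + [tok])
                (finished if tok == end_token else candidates).append(cand)
        if not candidates:
            break
        # Keep only the beam_size highest-scoring partial hypotheses.
        beams = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
    return max(finished or beams, key=lambda c: c[0])

# Toy scorer: always offers token 1 (likely) and the end token 2 (less likely).
score, tokens = beam_search(
    lambda seq: [(1, math.log(0.6)), (2, math.log(0.4))],
    start_token=0, end_token=2, beam_size=2, max_len=5)
```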

### Model Trainer
[transformer_main.py](v2/transformer_main.py) creates a `TransformerTask` to train and evaluate the model using tf.keras.

### Test dataset
The [newstest2014 files](https://storage.googleapis.com/tf-perf-public/official_transformer/test_data/newstest2014.tgz)
are extracted from the [NMT Seq2Seq tutorial](https://google.github.io/seq2seq/nmt/#download-data).
The raw text files are converted from the SGM format of the
[WMT 2016](http://www.statmt.org/wmt16/translation-task.html) test sets. The
newstest2014 files are put into `$DATA_DIR` when executing `data_download.py`.