# BERT (Bidirectional Encoder Representations from Transformers)

**WARNING**: We are in the process of deprecating most of the code in this
directory. Please see
[this link](https://github.com/tensorflow/models/blob/master/official/nlp/docs/train.md)
for the new tutorial and use the new code in `nlp/modeling`. This README is
still correct for this legacy implementation.

The academic paper which describes BERT in detail and provides full results on a
number of tasks can be found here: https://arxiv.org/abs/1810.04805.

This repository contains a TensorFlow 2.x implementation of BERT.

## Contents
  * [Contents](#contents)
  * [Pre-trained Models](#pre-trained-models)
    * [Restoring from Checkpoints](#restoring-from-checkpoints)
  * [Set Up](#set-up)
  * [Process Datasets](#process-datasets)
  * [Fine-tuning with BERT](#fine-tuning-with-bert)
    * [Cloud GPUs and TPUs](#cloud-gpus-and-tpus)
    * [Sentence and Sentence-pair Classification Tasks](#sentence-and-sentence-pair-classification-tasks)
    * [SQuAD 1.1](#squad-1.1)


## Pre-trained Models

We release both checkpoints and tf.hub modules as pretrained models for
fine-tuning. They are TF 2.x compatible and were converted from the checkpoints
released in the TF 1.x official BERT repository,
[google-research/bert](https://github.com/google-research/bert),
in order to stay consistent with the BERT paper.

### Access to Pretrained Checkpoints

Pretrained checkpoints can be found at the following links:

**Note: We have switched the BERT implementation to use Keras functional-style
networks in [nlp/modeling](../modeling). The new checkpoints are:**

*   **[`BERT-Large, Uncased (Whole Word Masking)`](https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/wwm_uncased_L-24_H-1024_A-16.tar.gz)**:
    24-layer, 1024-hidden, 16-heads, 340M parameters
*   **[`BERT-Large, Cased (Whole Word Masking)`](https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/wwm_cased_L-24_H-1024_A-16.tar.gz)**:
    24-layer, 1024-hidden, 16-heads, 340M parameters
*   **[`BERT-Base, Uncased`](https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/uncased_L-12_H-768_A-12.tar.gz)**:
    12-layer, 768-hidden, 12-heads, 110M parameters
*   **[`BERT-Large, Uncased`](https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16.tar.gz)**:
    24-layer, 1024-hidden, 16-heads, 340M parameters
*   **[`BERT-Base, Cased`](https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/cased_L-12_H-768_A-12.tar.gz)**:
    12-layer, 768-hidden, 12-heads, 110M parameters
*   **[`BERT-Large, Cased`](https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/cased_L-24_H-1024_A-16.tar.gz)**:
    24-layer, 1024-hidden, 16-heads, 340M parameters
*   **[`BERT-Base, Multilingual Cased`](https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/multi_cased_L-12_H-768_A-12.tar.gz)**:
    104 languages, 12-layer, 768-hidden, 12-heads, 110M parameters

We recommend hosting checkpoints in Google Cloud Storage buckets when you use
Cloud GPUs/TPUs.

### Restoring from Checkpoints

`tf.train.Checkpoint` is used to manage model checkpoints in TF 2. To restore
weights from the provided pre-trained checkpoints, you can use the following code:

```python
import tensorflow as tf

init_checkpoint = 'the pretrained model checkpoint path'
model = tf.keras.Model()  # BERT pre-trained model as feature extractor.
checkpoint = tf.train.Checkpoint(model=model)
checkpoint.restore(init_checkpoint)
```

Checkpoints featuring natively serialized Keras models (i.e. restorable with
`model.load_weights()`) will be available soon.
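
As a quick sanity check after downloading and untarring one of the checkpoint
archives above, you can list the variables it contains. This is a minimal
sketch; the local path is only an assumption about where you extracted the
archive.

```python
import tensorflow as tf

# Assumed local path to the extracted checkpoint prefix (the prefix has
# .index and .data-* files next to it); adjust to your own location.
init_checkpoint = 'uncased_L-12_H-768_A-12/bert_model.ckpt'

# Print every variable name and shape stored in the checkpoint.
for name, shape in tf.train.list_variables(init_checkpoint):
    print(name, shape)
```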

### Access to Pretrained Hub Modules

Pretrained tf.hub modules in TF 2.x SavedModel format can be found at the
following links (a minimal usage sketch follows the list):

*   **[`BERT-Large, Uncased (Whole Word Masking)`](https://tfhub.dev/tensorflow/bert_en_wwm_uncased_L-24_H-1024_A-16/)**:
    24-layer, 1024-hidden, 16-heads, 340M parameters
*   **[`BERT-Large, Cased (Whole Word Masking)`](https://tfhub.dev/tensorflow/bert_en_wwm_cased_L-24_H-1024_A-16/)**:
    24-layer, 1024-hidden, 16-heads, 340M parameters
*   **[`BERT-Base, Uncased`](https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/)**:
    12-layer, 768-hidden, 12-heads, 110M parameters
*   **[`BERT-Large, Uncased`](https://tfhub.dev/tensorflow/bert_en_uncased_L-24_H-1024_A-16/)**:
    24-layer, 1024-hidden, 16-heads, 340M parameters
*   **[`BERT-Base, Cased`](https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/)**:
    12-layer, 768-hidden, 12-heads, 110M parameters
*   **[`BERT-Large, Cased`](https://tfhub.dev/tensorflow/bert_en_cased_L-24_H-1024_A-16/)**:
    24-layer, 1024-hidden, 16-heads, 340M parameters
*   **[`BERT-Base, Multilingual Cased`](https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/)**:
    104 languages, 12-layer, 768-hidden, 12-heads, 110M parameters
*   **[`BERT-Base, Chinese`](https://tfhub.dev/tensorflow/bert_zh_L-12_H-768_A-12/)**:
    Chinese Simplified and Traditional, 12-layer, 768-hidden, 12-heads,
    110M parameters
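
The snippet below is a minimal sketch of wiring one of these modules into a
Keras model with `hub.KerasLayer`. The module handle, sequence length, and the
list-style input/output signature are assumptions that match v1/v2 of these
modules; newer module versions use dictionary inputs and outputs, so check the
module page on TF Hub for the exact signature.

```python
import tensorflow as tf
import tensorflow_hub as hub

max_seq_length = 128  # Assumed; must match how your inputs are tokenized/padded.

# Load a pretrained BERT module as a Keras layer (handle/version are examples).
bert_layer = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/1",
    trainable=True)

input_word_ids = tf.keras.layers.Input(
    shape=(max_seq_length,), dtype=tf.int32, name="input_word_ids")
input_mask = tf.keras.layers.Input(
    shape=(max_seq_length,), dtype=tf.int32, name="input_mask")
input_type_ids = tf.keras.layers.Input(
    shape=(max_seq_length,), dtype=tf.int32, name="input_type_ids")

# v1/v2 modules take a list of int32 tensors and return
# (pooled_output, sequence_output).
pooled_output, sequence_output = bert_layer(
    [input_word_ids, input_mask, input_type_ids])

model = tf.keras.Model(
    inputs=[input_word_ids, input_mask, input_type_ids],
    outputs=[pooled_output, sequence_output])
```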

## Set Up

Add the top-level model garden folder to the Python path so the `official`
package can be imported:

```shell
export PYTHONPATH="$PYTHONPATH:/path/to/models"
```

Install `tf-nightly` to get the latest updates:

```shell
pip install tf-nightly-gpu
```
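
To confirm that the nightly build (and, if applicable, GPU visibility) was
picked up, a quick check such as the following can help; it assumes a recent
build where `tf.config.list_physical_devices` is available.

```python
import tensorflow as tf

# tf-nightly versions look like "2.x.0-devYYYYMMDD".
print(tf.__version__)
print("GPUs visible:", tf.config.list_physical_devices("GPU"))
```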

With a TPU, GPU support is not necessary. First, you need to create a
`tf-nightly` TPU with the [ctpu tool](https://github.com/tensorflow/tpu/tree/master/tools/ctpu):

```shell
ctpu up -name <instance name> --tf-version="nightly"
```

Second, you need to install TF 2 `tf-nightly` on your VM:

```shell
pip install tf-nightly
```

## Process Datasets

### Pre-training

There is no change in how pre-training data is generated. Please use the script
[`../data/create_pretraining_data.py`](../data/create_pretraining_data.py),
which is essentially branched from the
[BERT research repo](https://github.com/google-research/bert)
and adapted for TF2 symbols and Python 3 compatibility, to produce processed
pre-training data.

Running the pre-training script requires an input and output directory, as well
as a vocab file. Note that `max_seq_length` will need to match the sequence
length parameter you specify when you run pre-training.

Example shell script to call `create_pretraining_data.py`:
```shell
export WORKING_DIR='local disk or cloud location'
export BERT_DIR='local disk or cloud location'
python models/official/nlp/data/create_pretraining_data.py \
  --input_file=$WORKING_DIR/input/input.txt \
  --output_file=$WORKING_DIR/output/tf_examples.tfrecord \
  --vocab_file=$BERT_DIR/wwm_uncased_L-24_H-1024_A-16/vocab.txt \
  --do_lower_case=True \
  --max_seq_length=512 \
  --max_predictions_per_seq=76 \
  --masked_lm_prob=0.15 \
  --random_seed=12345 \
  --dupe_factor=5
```
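
As a sanity check on the generated data, you can parse one record from the
output file and confirm that the per-example feature lengths line up with
`--max_seq_length` and `--max_predictions_per_seq`. This is a minimal sketch;
the local path is an assumption, and the feature names printed are simply
whatever the script wrote.

```python
import tensorflow as tf

# Assumed local copy of the file produced by create_pretraining_data.py.
dataset = tf.data.TFRecordDataset("output/tf_examples.tfrecord")

for raw_record in dataset.take(1):
    example = tf.train.Example()
    example.ParseFromString(raw_record.numpy())
    for name, feature in example.features.feature.items():
        length = len(feature.int64_list.value) or len(feature.float_list.value)
        print(name, length)
```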

### Fine-tuning

To prepare the fine-tuning data for final model training, use the
[`../data/create_finetuning_data.py`](../data/create_finetuning_data.py) script.
The resulting datasets in `tf_record` format and the training metadata should
later be passed to the training or evaluation scripts. The task-specific
arguments are described in the following sections:

* GLUE

Users can download the
[GLUE data](https://gluebenchmark.com/tasks) by running
[this script](https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e)
and unpack it to some directory `$GLUE_DIR`.
Also, users can download a [pretrained checkpoint](#access-to-pretrained-checkpoints) and place it in some directory `$BERT_DIR` instead of using the checkpoints on Google Cloud Storage.

```shell
export GLUE_DIR=~/glue
export BERT_DIR=gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16

export TASK_NAME=MNLI
export OUTPUT_DIR=gs://some_bucket/datasets
python ../data/create_finetuning_data.py \
 --input_data_dir=${GLUE_DIR}/${TASK_NAME}/ \
 --vocab_file=${BERT_DIR}/vocab.txt \
 --train_data_output_path=${OUTPUT_DIR}/${TASK_NAME}_train.tf_record \
 --eval_data_output_path=${OUTPUT_DIR}/${TASK_NAME}_eval.tf_record \
 --meta_data_file_path=${OUTPUT_DIR}/${TASK_NAME}_meta_data \
 --fine_tuning_task_type=classification --max_seq_length=128 \
 --classification_task_name=${TASK_NAME}
```
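
After the script finishes, `${OUTPUT_DIR}` holds the `tf_record` files plus the
small metadata file consumed by the training scripts. The sketch below shows
one way to inspect it, assuming the metadata is serialized as JSON; the keys
you will see (number of labels, `max_seq_length`, dataset sizes, and so on) are
examples rather than a guaranteed schema.

```python
import json

import tensorflow as tf

# Path produced by the command above (assumed).
meta_data_path = "gs://some_bucket/datasets/MNLI_meta_data"

# tf.io.gfile handles both local and GCS paths.
with tf.io.gfile.GFile(meta_data_path, "r") as reader:
    meta_data = json.load(reader)

print(json.dumps(meta_data, indent=2))
```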

* SQuAD

The [SQuAD website](https://rajpurkar.github.io/SQuAD-explorer/) contains
detailed information about the SQuAD datasets and evaluation.

The necessary files can be found here:

*   [train-v1.1.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json)
*   [dev-v1.1.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json)
*   [evaluate-v1.1.py](https://github.com/allenai/bi-att-flow/blob/master/squad/evaluate-v1.1.py)
*   [train-v2.0.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json)
*   [dev-v2.0.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json)
*   [evaluate-v2.0.py](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/)

```shell
export SQUAD_DIR=~/squad
export SQUAD_VERSION=v1.1
export BERT_DIR=gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16
export OUTPUT_DIR=gs://some_bucket/datasets

python ../data/create_finetuning_data.py \
 --squad_data_file=${SQUAD_DIR}/train-${SQUAD_VERSION}.json \
 --vocab_file=${BERT_DIR}/vocab.txt \
 --train_data_output_path=${OUTPUT_DIR}/squad_${SQUAD_VERSION}_train.tf_record \
 --meta_data_file_path=${OUTPUT_DIR}/squad_${SQUAD_VERSION}_meta_data \
 --fine_tuning_task_type=squad --max_seq_length=384
```

Note: To create fine-tuning data with SQuAD 2.0, you need to add the flag `--version_2_with_negative=True`.

## Fine-tuning with BERT

### Cloud GPUs and TPUs

* Cloud Storage

The unzipped pre-trained model files can also be found in the Google Cloud
Storage folder `gs://cloud-tpu-checkpoints/bert/keras_bert`. For example:

```shell
export BERT_DIR=gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16
export MODEL_DIR=gs://some_bucket/my_output_dir
```

Currently, users are able to access `tf-nightly` TPUs, and the following TPU
scripts should run with `tf-nightly`.

* GPU -> TPU

Just add the following flags to `run_classifier.py` or `run_squad.py`:

```shell
  --distribution_strategy=tpu
  --tpu=grpc://${TPU_IP_ADDRESS}:8470
```

### Sentence and Sentence-pair Classification Tasks

This example code fine-tunes `BERT-Large` on the Microsoft Research Paraphrase
Corpus (MRPC), which only contains 3,600 examples and can be fine-tuned in a
few minutes on most GPUs.

We use `BERT-Large` (uncased_L-24_H-1024_A-16) as an example throughout the
workflow.
For GPUs with 16 GB of memory or less, you may try `BERT-Base`
(uncased_L-12_H-768_A-12).

```shell
export BERT_DIR=gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16
export MODEL_DIR=gs://some_bucket/my_output_dir
export GLUE_DIR=gs://some_bucket/datasets
export TASK=MRPC

python run_classifier.py \
  --mode='train_and_eval' \
  --input_meta_data_path=${GLUE_DIR}/${TASK}_meta_data \
  --train_data_path=${GLUE_DIR}/${TASK}_train.tf_record \
  --eval_data_path=${GLUE_DIR}/${TASK}_eval.tf_record \
  --bert_config_file=${BERT_DIR}/bert_config.json \
  --init_checkpoint=${BERT_DIR}/bert_model.ckpt \
  --train_batch_size=4 \
  --eval_batch_size=4 \
  --steps_per_loop=1 \
  --learning_rate=2e-5 \
  --num_train_epochs=3 \
  --model_dir=${MODEL_DIR} \
  --distribution_strategy=mirrored
```

Alternatively, instead of specifying `init_checkpoint`, you can specify
`hub_module_url` to employ a pretrained BERT hub module, e.g.,
`--hub_module_url=https://tfhub.dev/tensorflow/bert_en_uncased_L-24_H-1024_A-16/1`.

After training a model, to get predictions from the classifier, you can set
`--mode=predict` and pass the test set tfrecords to `--eval_data_path`.
Output will be written to a file called `test_results.tsv` in the output folder.
Each line contains the output for one sample; the columns are the class
probabilities.

```shell
python run_classifier.py \
  --mode='predict' \
  --input_meta_data_path=${GLUE_DIR}/${TASK}_meta_data \
  --eval_data_path=${GLUE_DIR}/${TASK}_eval.tf_record \
  --bert_config_file=${BERT_DIR}/bert_config.json \
  --eval_batch_size=4 \
  --model_dir=${MODEL_DIR} \
  --distribution_strategy=mirrored
```
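
The sketch below turns those probability rows into predicted label indices. It
only assumes the tab-separated layout described above and a local copy of the
file.

```python
import csv

# test_results.tsv: one row per example, one column per class probability.
with open("test_results.tsv") as results_file:
    for row in csv.reader(results_file, delimiter="\t"):
        probs = [float(p) for p in row]
        # Argmax over the class columns gives the predicted label index.
        print(probs.index(max(probs)), probs)
```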

To use a TPU, you only need to switch the distribution strategy type to `tpu`,
provide the TPU information, and use remote storage for model checkpoints.

```shell
export BERT_DIR=gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16
export TPU_IP_ADDRESS='???'
export MODEL_DIR=gs://some_bucket/my_output_dir
export GLUE_DIR=gs://some_bucket/datasets
export TASK=MRPC

python run_classifier.py \
  --mode='train_and_eval' \
  --input_meta_data_path=${GLUE_DIR}/${TASK}_meta_data \
  --train_data_path=${GLUE_DIR}/${TASK}_train.tf_record \
  --eval_data_path=${GLUE_DIR}/${TASK}_eval.tf_record \
  --bert_config_file=${BERT_DIR}/bert_config.json \
  --init_checkpoint=${BERT_DIR}/bert_model.ckpt \
  --train_batch_size=32 \
  --eval_batch_size=32 \
  --steps_per_loop=1000 \
  --learning_rate=2e-5 \
  --num_train_epochs=3 \
  --model_dir=${MODEL_DIR} \
  --distribution_strategy=tpu \
  --tpu=grpc://${TPU_IP_ADDRESS}:8470
```

Note that we specify `steps_per_loop=1000` for TPU because running a loop of
training steps inside a `tf.function` can significantly increase TPU
utilization; callbacks will not be called inside the loop.

### SQuAD 1.1

The Stanford Question Answering Dataset (SQuAD) is a popular question answering
benchmark dataset. See more on the [SQuAD website](https://rajpurkar.github.io/SQuAD-explorer/).

We use `BERT-Large` (uncased_L-24_H-1024_A-16) as an example throughout the
workflow.
For GPUs with 16 GB of memory or less, you may try `BERT-Base`
(uncased_L-12_H-768_A-12).

```shell
export BERT_DIR=gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16
export SQUAD_DIR=gs://some_bucket/datasets
export MODEL_DIR=gs://some_bucket/my_output_dir
export SQUAD_VERSION=v1.1

python run_squad.py \
  --input_meta_data_path=${SQUAD_DIR}/squad_${SQUAD_VERSION}_meta_data \
  --train_data_path=${SQUAD_DIR}/squad_${SQUAD_VERSION}_train.tf_record \
  --predict_file=${SQUAD_DIR}/dev-v1.1.json \
  --vocab_file=${BERT_DIR}/vocab.txt \
  --bert_config_file=${BERT_DIR}/bert_config.json \
  --init_checkpoint=${BERT_DIR}/bert_model.ckpt \
  --train_batch_size=4 \
  --predict_batch_size=4 \
  --learning_rate=8e-5 \
  --num_train_epochs=2 \
  --model_dir=${MODEL_DIR} \
  --distribution_strategy=mirrored
```

Similarly, you can replace the `init_checkpoint` flag with `hub_module_url` to
specify a hub module path.

`run_squad.py` writes predictions for `--predict_file` by default. If you set
`--mode=predict` and provide the SQuAD test data, the script will generate
the prediction JSON file.

To use a TPU, you need to switch the distribution strategy type to `tpu` and
provide the TPU information.

```shell
export BERT_DIR=gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16
export TPU_IP_ADDRESS='???'
export MODEL_DIR=gs://some_bucket/my_output_dir
export SQUAD_DIR=gs://some_bucket/datasets
export SQUAD_VERSION=v1.1

python run_squad.py \
  --input_meta_data_path=${SQUAD_DIR}/squad_${SQUAD_VERSION}_meta_data \
  --train_data_path=${SQUAD_DIR}/squad_${SQUAD_VERSION}_train.tf_record \
  --predict_file=${SQUAD_DIR}/dev-v1.1.json \
  --vocab_file=${BERT_DIR}/vocab.txt \
  --bert_config_file=${BERT_DIR}/bert_config.json \
  --init_checkpoint=${BERT_DIR}/bert_model.ckpt \
  --train_batch_size=32 \
  --learning_rate=8e-5 \
  --num_train_epochs=2 \
  --model_dir=${MODEL_DIR} \
  --distribution_strategy=tpu \
  --tpu=grpc://${TPU_IP_ADDRESS}:8470
```

The dev set predictions will be saved to a file called `predictions.json` in
the `model_dir`:

```shell
python $SQUAD_DIR/evaluate-v1.1.py $SQUAD_DIR/dev-v1.1.json ./squad/predictions.json
```
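
If you want to eyeball a few answers before (or instead of) running the
official evaluation script, `predictions.json` is a JSON mapping from question
id to the predicted answer text, so a quick look is straightforward (local path
assumed):

```python
import json

# Local copy of the predictions written under model_dir (path assumed).
with open("squad/predictions.json") as predictions_file:
    predictions = json.load(predictions_file)

# Print a handful of (question id, predicted answer) pairs.
for qas_id, answer in list(predictions.items())[:5]:
    print(qas_id, "->", answer)
```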
