# BERT (Bidirectional Encoder Representations from Transformers)

**WARNING**: We are in the process of deprecating most of the code in this
directory. Please see
[this link](https://github.com/tensorflow/models/blob/master/official/nlp/docs/train.md)
for the new tutorial.

The academic paper which describes BERT in detail and provides full results on a
number of tasks can be found here: https://arxiv.org/abs/1810.04805.

This repository contains a TensorFlow 2.x implementation of BERT.

## Contents
  * [Contents](#contents)
  * [Pre-trained Models](#pre-trained-models)
    * [Restoring from Checkpoints](#restoring-from-checkpoints)
  * [Set Up](#set-up)
  * [Process Datasets](#process-datasets)
  * [Fine-tuning with BERT](#fine-tuning-with-bert)
    * [Cloud GPUs and TPUs](#cloud-gpus-and-tpus)
    * [Sentence and Sentence-pair Classification Tasks](#sentence-and-sentence-pair-classification-tasks)
    * [SQuAD 1.1](#squad-1.1)


## Pre-trained Models

We release both checkpoints and tf.hub modules as pretrained models for
fine-tuning. They are TF 2.x compatible and were converted from the checkpoints
released in the TF 1.x official BERT repository,
[google-research/bert](https://github.com/google-research/bert),
in order to stay consistent with the BERT paper.

### Access to Pretrained Checkpoints

Pretrained checkpoints can be found in the following links:

**Note: We have switched the BERT implementation
to use Keras functional-style networks in [nlp/modeling](../modeling).
The new checkpoints are:**

*   **[`BERT-Large, Uncased (Whole Word Masking)`](https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/wwm_uncased_L-24_H-1024_A-16.tar.gz)**:
    24-layer, 1024-hidden, 16-heads, 340M parameters
*   **[`BERT-Large, Cased (Whole Word Masking)`](https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/wwm_cased_L-24_H-1024_A-16.tar.gz)**:
    24-layer, 1024-hidden, 16-heads, 340M parameters
*   **[`BERT-Base, Uncased`](https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/uncased_L-12_H-768_A-12.tar.gz)**:
    12-layer, 768-hidden, 12-heads, 110M parameters
*   **[`BERT-Large, Uncased`](https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16.tar.gz)**:
    24-layer, 1024-hidden, 16-heads, 340M parameters
*   **[`BERT-Base, Cased`](https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/cased_L-12_H-768_A-12.tar.gz)**:
    12-layer, 768-hidden, 12-heads, 110M parameters
*   **[`BERT-Large, Cased`](https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/cased_L-24_H-1024_A-16.tar.gz)**:
    24-layer, 1024-hidden, 16-heads, 340M parameters
*   **[`BERT-Base, Multilingual Cased`](https://storage.googleapis.com/cloud-tpu-checkpoints/bert/keras_bert/multi_cased_L-12_H-768_A-12.tar.gz)**:
    104 languages, 12-layer, 768-hidden, 12-heads, 110M parameters

We recommend hosting checkpoints in Google Cloud Storage buckets when you use
Cloud GPUs/TPUs.

### Restoring from Checkpoints

`tf.train.Checkpoint` is used to manage model checkpoints in TF 2. To restore
weights from the provided pre-trained checkpoints, you can use the following code:

```python
init_checkpoint = 'the pretrained model checkpoint path.'
model = tf.keras.Model()  # Placeholder: build the BERT pre-trained model here.
checkpoint = tf.train.Checkpoint(model=model)
checkpoint.restore(init_checkpoint)
```
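
The `restore()` call returns a status object that you can use to check how much
of the checkpoint was actually matched. A minimal sketch, reusing the
placeholder setup above:

```python
status = checkpoint.restore(init_checkpoint)
# Raises if any existing model object was not found in the checkpoint.
status.assert_existing_objects_matched()
# Alternatively, status.expect_partial() silences warnings about checkpoint
# values (e.g. optimizer slots) that the model does not use.
```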

Checkpoints featuring native serialized Keras models
(i.e., restored via `model.load_weights()`) will be available soon.

### Access to Pretrained Hub Modules

Pretrained tf.hub modules in TF 2.x SavedModel format can be found in the
following links:

*   **[`BERT-Large, Uncased (Whole Word Masking)`](https://tfhub.dev/tensorflow/bert_en_wwm_uncased_L-24_H-1024_A-16/)**:
    24-layer, 1024-hidden, 16-heads, 340M parameters
*   **[`BERT-Large, Cased (Whole Word Masking)`](https://tfhub.dev/tensorflow/bert_en_wwm_cased_L-24_H-1024_A-16/)**:
    24-layer, 1024-hidden, 16-heads, 340M parameters
*   **[`BERT-Base, Uncased`](https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/)**:
    12-layer, 768-hidden, 12-heads, 110M parameters
*   **[`BERT-Large, Uncased`](https://tfhub.dev/tensorflow/bert_en_uncased_L-24_H-1024_A-16/)**:
    24-layer, 1024-hidden, 16-heads, 340M parameters
*   **[`BERT-Base, Cased`](https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/)**:
    12-layer, 768-hidden, 12-heads, 110M parameters
*   **[`BERT-Large, Cased`](https://tfhub.dev/tensorflow/bert_en_cased_L-24_H-1024_A-16/)**:
    24-layer, 1024-hidden, 16-heads, 340M parameters
*   **[`BERT-Base, Multilingual Cased`](https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/)**:
    104 languages, 12-layer, 768-hidden, 12-heads, 110M parameters
*   **[`BERT-Base, Chinese`](https://tfhub.dev/tensorflow/bert_zh_L-12_H-768_A-12/)**:
    Chinese Simplified and Traditional, 12-layer, 768-hidden, 12-heads,
    110M parameters
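
To consume one of these SavedModels from Python, you can wrap it with
`hub.KerasLayer` from the `tensorflow_hub` package. The snippet below is a
minimal sketch: it assumes the module exposes the TF2 BERT signature that takes
`input_word_ids`, `input_mask`, and `input_type_ids` and returns pooled and
sequence outputs, and the `/2` version suffix in the URL is an assumption;
check the module's page on tfhub.dev for the exact interface and version.

```python
import tensorflow as tf
import tensorflow_hub as hub  # pip install tensorflow-hub

max_seq_length = 128  # must match how the inputs were tokenized

# Integer token inputs produced by the BERT tokenizer.
input_word_ids = tf.keras.layers.Input(
    shape=(max_seq_length,), dtype=tf.int32, name='input_word_ids')
input_mask = tf.keras.layers.Input(
    shape=(max_seq_length,), dtype=tf.int32, name='input_mask')
input_type_ids = tf.keras.layers.Input(
    shape=(max_seq_length,), dtype=tf.int32, name='input_type_ids')

# The "/2" version suffix is an assumption; use the version listed on tfhub.dev.
bert_layer = hub.KerasLayer(
    'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2',
    trainable=True)

pooled_output, sequence_output = bert_layer(
    [input_word_ids, input_mask, input_type_ids])

model = tf.keras.Model(
    inputs=[input_word_ids, input_mask, input_type_ids],
    outputs=[pooled_output, sequence_output])
model.summary()
```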

## Set Up

```shell
export PYTHONPATH="$PYTHONPATH:/path/to/models"
```
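
To verify the path is set for the interpreter you plan to use, a minimal check
(assuming the repository root above is `/path/to/models`) is to import the
`official` package:

```python
# Should import without ModuleNotFoundError once /path/to/models is on PYTHONPATH.
import official
print(official.__file__)
```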

Install `tf-nightly` to get the latest updates:

```shell
pip install tf-nightly-gpu
```

When using a TPU, GPU support is not necessary. First, you need to create a
`tf-nightly` TPU with the
[ctpu tool](https://github.com/tensorflow/tpu/tree/master/tools/ctpu):

```shell
ctpu up -name <instance name> --tf-version="nightly"
```

Second, you need to install the TF 2 `tf-nightly` package on your VM:

```shell
pip install tf-nightly
```

## Process Datasets

### Pre-training

There is no change in how pre-training data is generated. Please use the script
[`../data/create_pretraining_data.py`](../data/create_pretraining_data.py),
which is essentially branched from the
[BERT research repo](https://github.com/google-research/bert) and adapted to
TF2 symbols and Python 3 compatibility, to produce processed pre-training data.

Running the pre-training script requires an input and output directory, as well
as a vocab file. Note that `max_seq_length` will need to match the sequence
length parameter you specify when you run pre-training.

Example shell script to call `create_pretraining_data.py`:

```shell
export WORKING_DIR='local disk or cloud location'
export BERT_DIR='local disk or cloud location'

python models/official/nlp/data/create_pretraining_data.py \
  --input_file=$WORKING_DIR/input/input.txt \
  --output_file=$WORKING_DIR/output/tf_examples.tfrecord \
  --vocab_file=$BERT_DIR/wwm_uncased_L-24_H-1024_A-16/vocab.txt \
  --do_lower_case=True \
  --max_seq_length=512 \
  --max_predictions_per_seq=76 \
  --masked_lm_prob=0.15 \
  --random_seed=12345 \
  --dupe_factor=5
```

### Fine-tuning

To prepare the fine-tuning data for final model training, use the
[`../data/create_finetuning_data.py`](../data/create_finetuning_data.py) script.
The resulting datasets in `tf_record` format and the training meta data should
later be passed to the training or evaluation scripts. The task-specific
arguments are described in the following sections:

* GLUE

Users can download the
[GLUE data](https://gluebenchmark.com/tasks) by running
[this script](https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e)
and unpack it to some directory `$GLUE_DIR`.
Also, users can download a
[pretrained checkpoint](#access-to-pretrained-checkpoints) and place it in some
directory `$BERT_DIR` instead of using the checkpoints on Google Cloud Storage.

```shell
export GLUE_DIR=~/glue
export BERT_DIR=gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16

export TASK_NAME=MNLI
export OUTPUT_DIR=gs://some_bucket/datasets
python ../data/create_finetuning_data.py \
 --input_data_dir=${GLUE_DIR}/${TASK_NAME}/ \
 --vocab_file=${BERT_DIR}/vocab.txt \
 --train_data_output_path=${OUTPUT_DIR}/${TASK_NAME}_train.tf_record \
 --eval_data_output_path=${OUTPUT_DIR}/${TASK_NAME}_eval.tf_record \
 --meta_data_file_path=${OUTPUT_DIR}/${TASK_NAME}_meta_data \
 --fine_tuning_task_type=classification --max_seq_length=128 \
 --classification_task_name=${TASK_NAME}
```

* SQuAD

The [SQuAD website](https://rajpurkar.github.io/SQuAD-explorer/) contains
detailed information about the SQuAD datasets and evaluation.

The necessary files can be found here:

*   [train-v1.1.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json)
*   [dev-v1.1.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json)
*   [evaluate-v1.1.py](https://github.com/allenai/bi-att-flow/blob/master/squad/evaluate-v1.1.py)
*   [train-v2.0.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json)
*   [dev-v2.0.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json)
*   [evaluate-v2.0.py](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/)

```shell
export SQUAD_DIR=~/squad
export SQUAD_VERSION=v1.1
export BERT_DIR=gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16
export OUTPUT_DIR=gs://some_bucket/datasets

python ../data/create_finetuning_data.py \
 --squad_data_file=${SQUAD_DIR}/train-${SQUAD_VERSION}.json \
 --vocab_file=${BERT_DIR}/vocab.txt \
 --train_data_output_path=${OUTPUT_DIR}/squad_${SQUAD_VERSION}_train.tf_record \
 --meta_data_file_path=${OUTPUT_DIR}/squad_${SQUAD_VERSION}_meta_data \
 --fine_tuning_task_type=squad --max_seq_length=384
```

Note: To create fine-tuning data with SQuAD 2.0, you need to add the flag
`--version_2_with_negative=True`.

## Fine-tuning with BERT

### Cloud GPUs and TPUs

* Cloud Storage

The unzipped pre-trained model files can also be found in the Google Cloud
Storage folder `gs://cloud-tpu-checkpoints/bert/keras_bert`. For example:

```shell
export BERT_DIR=gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16
export MODEL_DIR=gs://some_bucket/my_output_dir
```

Currently, users are able to access `tf-nightly` TPUs, and the following TPU
script should run with `tf-nightly`.

* GPU -> TPU

Just add the following flags to `run_classifier.py` or `run_squad.py`:

```shell
  --distribution_strategy=tpu
  --tpu=grpc://${TPU_IP_ADDRESS}:8470
```

### Sentence and Sentence-pair Classification Tasks

This example code fine-tunes `BERT-Large` on the Microsoft Research Paraphrase
Corpus (MRPC), which contains only 3,600 examples and can be fine-tuned in a
few minutes on most GPUs.

We use `BERT-Large` (uncased_L-24_H-1024_A-16) as an example throughout the
workflow.
For GPUs with 16GB of memory or less, you may want to try `BERT-Base`
(uncased_L-12_H-768_A-12).

```shell
export BERT_DIR=gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16
export MODEL_DIR=gs://some_bucket/my_output_dir
export GLUE_DIR=gs://some_bucket/datasets
export TASK=MRPC

python run_classifier.py \
  --mode='train_and_eval' \
  --input_meta_data_path=${GLUE_DIR}/${TASK}_meta_data \
  --train_data_path=${GLUE_DIR}/${TASK}_train.tf_record \
  --eval_data_path=${GLUE_DIR}/${TASK}_eval.tf_record \
  --bert_config_file=${BERT_DIR}/bert_config.json \
  --init_checkpoint=${BERT_DIR}/bert_model.ckpt \
  --train_batch_size=4 \
  --eval_batch_size=4 \
  --steps_per_loop=1 \
  --learning_rate=2e-5 \
  --num_train_epochs=3 \
  --model_dir=${MODEL_DIR} \
  --distribution_strategy=mirrored
```

Alternatively, instead of specifying `init_checkpoint`, you can specify
`hub_module_url` to employ a pretrained BERT hub module, e.g.,
`--hub_module_url=https://tfhub.dev/tensorflow/bert_en_uncased_L-24_H-1024_A-16/1`.

After training a model, to get predictions from the classifier, you can set
`--mode=predict` and pass the test set tfrecords to `--eval_data_path`. The
output will be written to a file called `test_results.tsv` in the output
folder. Each line contains the output for one sample; the columns are the
class probabilities.

```shell
python run_classifier.py \
  --mode='predict' \
  --input_meta_data_path=${GLUE_DIR}/${TASK}_meta_data \
  --eval_data_path=${GLUE_DIR}/${TASK}_eval.tf_record \
  --bert_config_file=${BERT_DIR}/bert_config.json \
  --eval_batch_size=4 \
  --model_dir=${MODEL_DIR} \
  --distribution_strategy=mirrored
```
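
As an illustration, the following sketch reads such a tab-separated probability
file and picks the most likely class for each example (the `test_results.tsv`
layout follows the description above; adjust if your output differs):

```python
import csv

predicted_classes = []
with open('test_results.tsv') as f:
    # Each row holds one example's class probabilities, separated by tabs.
    for row in csv.reader(f, delimiter='\t'):
        probs = [float(p) for p in row]
        predicted_classes.append(probs.index(max(probs)))

print(predicted_classes[:10])  # indices of the predicted classes
```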

To use a TPU, you only need to switch the distribution strategy type to `tpu`,
provide the TPU information, and use remote storage for model checkpoints.

```shell
export BERT_DIR=gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16
export TPU_IP_ADDRESS='???'
export MODEL_DIR=gs://some_bucket/my_output_dir
export GLUE_DIR=gs://some_bucket/datasets
export TASK=MRPC

python run_classifier.py \
  --mode='train_and_eval' \
  --input_meta_data_path=${GLUE_DIR}/${TASK}_meta_data \
  --train_data_path=${GLUE_DIR}/${TASK}_train.tf_record \
  --eval_data_path=${GLUE_DIR}/${TASK}_eval.tf_record \
  --bert_config_file=${BERT_DIR}/bert_config.json \
  --init_checkpoint=${BERT_DIR}/bert_model.ckpt \
  --train_batch_size=32 \
  --eval_batch_size=32 \
  --steps_per_loop=1000 \
  --learning_rate=2e-5 \
  --num_train_epochs=3 \
  --model_dir=${MODEL_DIR} \
  --distribution_strategy=tpu \
  --tpu=grpc://${TPU_IP_ADDRESS}:8470
```

Note that we specify `steps_per_loop=1000` for TPU because running a loop of
training steps inside a `tf.function` can significantly increase TPU
utilization, and callbacks will not be called inside the loop.

### SQuAD 1.1

The Stanford Question Answering Dataset (SQuAD) is a popular question answering
benchmark dataset. See more on the [SQuAD website](https://rajpurkar.github.io/SQuAD-explorer/).

We use `BERT-Large` (uncased_L-24_H-1024_A-16) as an example throughout the
workflow.
For GPUs with 16GB of memory or less, you may want to try `BERT-Base`
(uncased_L-12_H-768_A-12).

```shell
export BERT_DIR=gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16
export SQUAD_DIR=gs://some_bucket/datasets
export MODEL_DIR=gs://some_bucket/my_output_dir
export SQUAD_VERSION=v1.1

python run_squad.py \
  --input_meta_data_path=${SQUAD_DIR}/squad_${SQUAD_VERSION}_meta_data \
  --train_data_path=${SQUAD_DIR}/squad_${SQUAD_VERSION}_train.tf_record \
  --predict_file=${SQUAD_DIR}/dev-v1.1.json \
  --vocab_file=${BERT_DIR}/vocab.txt \
  --bert_config_file=${BERT_DIR}/bert_config.json \
  --init_checkpoint=${BERT_DIR}/bert_model.ckpt \
  --train_batch_size=4 \
  --predict_batch_size=4 \
  --learning_rate=8e-5 \
  --num_train_epochs=2 \
  --model_dir=${MODEL_DIR} \
  --distribution_strategy=mirrored
```

Similarly, you can replace the `init_checkpoint` flag with `hub_module_url` to
specify a hub module path.

`run_squad.py` writes the predictions for `--predict_file` by default. If you
set `--mode=predict` and provide the SQuAD test data, the script will generate
the prediction JSON file.

To use a TPU, you need to switch the distribution strategy type to `tpu` and
provide the TPU information.

```shell
export BERT_DIR=gs://cloud-tpu-checkpoints/bert/keras_bert/uncased_L-24_H-1024_A-16
export TPU_IP_ADDRESS='???'
export MODEL_DIR=gs://some_bucket/my_output_dir
export SQUAD_DIR=gs://some_bucket/datasets
export SQUAD_VERSION=v1.1

python run_squad.py \
  --input_meta_data_path=${SQUAD_DIR}/squad_${SQUAD_VERSION}_meta_data \
  --train_data_path=${SQUAD_DIR}/squad_${SQUAD_VERSION}_train.tf_record \
  --predict_file=${SQUAD_DIR}/dev-v1.1.json \
  --vocab_file=${BERT_DIR}/vocab.txt \
  --bert_config_file=${BERT_DIR}/bert_config.json \
  --init_checkpoint=${BERT_DIR}/bert_model.ckpt \
  --train_batch_size=32 \
  --learning_rate=8e-5 \
  --num_train_epochs=2 \
  --model_dir=${MODEL_DIR} \
  --distribution_strategy=tpu \
  --tpu=grpc://${TPU_IP_ADDRESS}:8470
```

The dev set predictions will be saved to a file called `predictions.json` in
the `model_dir`:

```shell
python $SQUAD_DIR/evaluate-v1.1.py $SQUAD_DIR/dev-v1.1.json ./squad/predictions.json
```
