# bert_tensorflow

## Paper

`BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding`

[BERT paper (PDF)](https://arxiv.org/pdf/1810.04805.pdf)

## Model Introduction

![bert_model](bert_model.png)

BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained language representation model. Rather than pre-training with a traditional unidirectional language model, or with a shallow concatenation of two unidirectional models, it uses a masked language model (MLM) objective, which lets it learn deep bidirectional language representations.
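
To make the MLM objective concrete, here is a minimal sketch (illustrative only, not this repo's code) of BERT's masking rule: roughly 15% of positions are selected, of which 80% become `[MASK]`, 10% become a random token, and 10% stay unchanged.

```python
# Illustrative sketch of BERT's MLM masking rule (not the repo's implementation).
import random

tokens = ["[CLS]", "the", "cat", "sat", "on", "the", "mat", "[SEP]"]
masked, labels = list(tokens), [None] * len(tokens)

for i, tok in enumerate(tokens):
    if tok in ("[CLS]", "[SEP]"):
        continue                      # special tokens are never masked
    if random.random() < 0.15:        # select ~15% of positions
        labels[i] = tok               # the original token is the prediction target
        r = random.random()
        if r < 0.8:
            masked[i] = "[MASK]"      # 80%: replace with [MASK]
        elif r < 0.9:
            masked[i] = random.choice(tokens)  # 10%: random token
        # remaining 10%: keep the token unchanged

print(masked)  # model input
print(labels)  # targets (None = not masked)
```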

## Algorithm

![bert](bert.png)

Earlier pre-trained models were constrained by unidirectional language modeling (left-to-right or right-to-left), so each position could only draw on context from one direction, limiting the model's representational power. BERT instead pre-trains with MLM and stacks deep bidirectional Transformer blocks: a unidirectional Transformer is usually called a Transformer decoder, where each token attends only to the tokens to its left, while a bidirectional Transformer is called a Transformer encoder, where each token attends to all tokens. The resulting representations therefore fuse context from both directions.
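
The encoder/decoder distinction above comes down to the attention mask. A small NumPy illustration (not code from this repo):

```python
# Encoder vs. decoder attention masks: entry (i, j) = 1 means token i may attend to token j.
import numpy as np

seq_len = 5
encoder_mask = np.ones((seq_len, seq_len), dtype=np.int32)           # every token sees all tokens
decoder_mask = np.tril(np.ones((seq_len, seq_len), dtype=np.int32))  # token i sees only j <= i

print(encoder_mask)
print(decoder_mask)
```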

## Pre-trained Models

[bert-base-uncased (used for MNLI classification)](https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip)

[bert-large-uncased (used for SQuAD question answering)](https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-24_H-1024_A-16.zip)

## Datasets

MNLI classification dataset: [MNLI](https://dl.fbaipublicfiles.com/glue/data/MNLI.zip)

SQuAD question-answering dataset: [train-v1.1.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json) and [dev-v1.1.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json)

SQuAD v1.1 evaluation script: [evaluate-v1.1.py](https://github.com/allenai/bi-att-flow/blob/master/squad/evaluate-v1.1.py)

`MNLI dataset structure`

```
├── original
│	├── multinli_1.0_dev_matched.jsonl
│	├── multinli_1.0_dev_matched.txt
│	├── multinli_1.0_dev_mismatched.jsonl
│	├── multinli_1.0_dev_mismatched.txt
│	├── multinli_1.0_train.jsonl
│	└── multinli_1.0_train.txt
├── dev_matched.tsv
├── dev_mismatched.tsv
├── README.txt
├── test_matched.tsv
├── test_mismatched.tsv
└── train.tsv
```

`SQuAD v1.1 dataset structure`

```
├── dev-v1.1.json
└── train-v1.1.json
```
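
Both SQuAD files share an articles → paragraphs → question/answer layout; a minimal sketch for peeking at it (the file path below is an assumption, adjust to where you downloaded the data):

```python
# Peek at the structure of a SQuAD v1.1 file; the path is an example.
import json

with open("train-v1.1.json") as f:
    squad = json.load(f)

article = squad["data"][0]            # one Wikipedia article
paragraph = article["paragraphs"][0]  # each paragraph has a context and QA pairs
qa = paragraph["qas"][0]
print(article["title"])
print(qa["question"])
print(qa["answers"][0]["text"], qa["answers"][0]["answer_start"])
```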

## Environment Setup

Running in Docker is recommended. A prebuilt image is provided on [光源 (SourceFind)](https://www.sourcefind.cn/#/main-page) and can be pulled with `docker pull`:

```
docker pull image.sourcefind.cn:5000/dcu/admin/base/tensorflow:2.7.0-centos7.6-dtk-22.10.1-py37-latest
docker run -dit --network=host --name=bert_tensorflow --privileged --device=/dev/kfd --device=/dev/dri --ipc=host --shm-size=16G  --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root --ulimit stack=-1:-1 --ulimit memlock=-1:-1 image.sourcefind.cn:5000/dcu/admin/base/tensorflow:2.7.0-centos7.6-dtk-22.10.1-py37-latest
docker exec -it bert_tensorflow /bin/bash
```

The installation may replace the DCU build of TensorFlow. If that happens, download the matching DCU package from the [developer community](https://cancon.hpccube.com:65024/4/main/tensorflow/dtk22.10).

```
pip install -r requirements.txt
```
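
After installing, it is worth confirming that the DCU build of TensorFlow is still the one being imported; a quick sanity check (that DCU cards are exposed as GPU devices under the ROCm/DTK stack is an assumption to verify on your system):

```python
# Sanity-check the TensorFlow install and visible devices.
import tensorflow as tf

print(tf.__version__)                          # expect the 2.7.0 DCU build
print(tf.config.list_physical_devices("GPU"))  # should list the DCU cards
```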

## Training

### Data Conversion - MNLI

TF 2.x reads training data in the tf_record format, so the raw data must be converted first:

```
python create_finetuning_data.py \
 --input_data_dir=/public/home/hepj/data/MNLI \
 --vocab_file=/public/home/hepj/model_source/uncased_L-12_H-768_A-12/vocab.txt \
 --train_data_output_path=/public/home/hepj/MNLI/train.tf_record \
 --eval_data_output_path=/public/home/hepj/MNLI/eval.tf_record \
 --meta_data_file_path=/public/home/hepj/MNLI/meta_data \
 --fine_tuning_task_type=classification \
 --max_seq_length=32 \
 --classification_task_name=MNLI

 # Parameter descriptions
 --input_data_dir             path to the raw training data
 --vocab_file                 path to the vocab file
 --train_data_output_path     output path for the converted training data
 --eval_data_output_path      output path for the converted eval data
 --fine_tuning_task_type      fine-tuning task type
 --do_lower_case              whether to lowercase the input
 --max_seq_length             maximum sequence length
 --classification_task_name   name of the classification task
```
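
To sanity-check the conversion, you can count the records and inspect the feature keys of the first example; a minimal sketch (paths match the command above; the feature names in the comment are what the upstream Model Garden writer typically emits):

```python
# Inspect the converted tf_record file.
import tensorflow as tf

path = "/public/home/hepj/MNLI/train.tf_record"
dataset = tf.data.TFRecordDataset(path)
print("records:", sum(1 for _ in dataset))

example = tf.train.Example()
example.ParseFromString(next(iter(dataset)).numpy())
print(sorted(example.features.feature.keys()))  # e.g. input_ids, input_mask, segment_ids, label_ids
```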

### Model Conversion - MNLI

TF 2.7.2 and TF 1.15.0 store and load checkpoints in different formats. The official BERT checkpoints are TF 1.x models and must be converted:

```
python3 tf2_encoder_checkpoint_converter.py \
--bert_config_file /public/home/hepj/model_source/uncased_L-12_H-768_A-12/bert_config.json \
--checkpoint_to_convert /public/home/hepj/model_source/uncased_L-12_H-768_A-12/bert_model.ckpt \
--converted_checkpoint_path /public/home/hepj/model_source/bert-base-TF2/bert_model.ckpt

# Parameter descriptions
--bert_config_file           BERT config file
--checkpoint_to_convert      path to the checkpoint to convert
--converted_checkpoint_path  output path for the converted checkpoint

After conversion, rename bert_model.ckpt-1.data-00000-of-00001 to bert_model.ckpt.data-00000-of-00001,
and bert_model.ckpt-1.index to bert_model.ckpt.index.
```
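
A quick way to verify the converted (and renamed) checkpoint is to list its variables; a minimal sketch using the output path from the command above:

```python
# List the first few variables of the converted TF2 checkpoint.
import tensorflow as tf

ckpt = "/public/home/hepj/model_source/bert-base-TF2/bert_model.ckpt"
for name, shape in tf.train.list_variables(ckpt)[:10]:
    print(name, shape)
```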

### Single-Card Run - MNLI

```
sh bert_class.sh
  # Parameter descriptions
  --mode                     run mode: train_and_eval, export_only, or predict
  --input_meta_data_path     metadata file used for training and evaluation
  --train_data_path          training data path
  --eval_data_path           eval data path
  --bert_config_file         BERT config file
  --init_checkpoint          initial checkpoint path
  --train_batch_size         training batch size
  --eval_batch_size          eval batch size
  --steps_per_loop           interval (in steps) between log prints
  --learning_rate            learning rate
  --num_train_epochs         number of training epochs
  --model_dir                directory for saving the model
  --distribution_strategy    distribution strategy
  --num_gpus                 number of GPUs to use
```
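
The file passed to `--input_meta_data_path` is the JSON metadata written during data conversion; a minimal sketch of inspecting it (the exact field names are an assumption based on the upstream Model Garden writer):

```python
# Inspect the metadata produced by create_finetuning_data.py.
import json

with open("/public/home/hepj/MNLI/meta_data") as f:
    meta = json.load(f)
print(meta)  # typically includes max_seq_length, train_data_size, num_labels, ...
```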

### Multi-Card Run - MNLI

```
sh bert_class_gpus.sh
```
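
Multi-card training relies on TensorFlow's distribution strategies; a minimal sketch of what `--distribution_strategy=mirrored` with `--num_gpus` maps to (that the script uses the mirrored strategy is an assumption; check bert_class_gpus.sh), using a toy stand-in model:

```python
# MirroredStrategy keeps one model replica per visible card and
# synchronizes gradients across replicas each step.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print("replicas:", strategy.num_replicas_in_sync)

with strategy.scope():  # model and optimizer must be built inside the scope
    model = tf.keras.Sequential([tf.keras.layers.Dense(3)])  # toy stand-in
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```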

### Data Conversion - SQuAD 1.1

TF 2.x reads training data in the tf_record format, so the raw data must be converted first:

```
python3 create_finetuning_data.py \
 --squad_data_file=/public/home/hepj/model/model_source/sq1.1/train-v1.1.json \
 --vocab_file=/public/home/hepj/model_source/bert-large-uncased-TF2/uncased_L-24_H-1024_A-16/vocab.txt \
 --train_data_output_path=/public/home/hepj/model/tf2.7.0_Bert/squad1.1/train_new.tf_record \
 --meta_data_file_path=/public/home/hepj/model/tf2.7.0_Bert/squad1.1/meta_data \
 --eval_data_output_path=/public/home/hepj/model/tf2.7.0_Bert/squad1.1/eval_new.tf_record \
 --fine_tuning_task_type=squad \
 --do_lower_case=False \
 --max_seq_length=384

# Parameter descriptions
 --squad_data_file            path to the training file
 --vocab_file                 path to the vocab file
 --train_data_output_path     output path for the converted training data
 --eval_data_output_path      output path for the converted eval data
 --fine_tuning_task_type      fine-tuning task type
 --do_lower_case              whether to lowercase the input
 --max_seq_length             maximum sequence length
```

### Model Conversion - SQuAD 1.1

```
python3 tf2_encoder_checkpoint_converter.py \
--bert_config_file /public/home/hepj/model/model_source/uncased_L-24_H-1024_A-16/bert_config.json \
--checkpoint_to_convert /public/home/hepj/model/model_source/uncased_L-24_H-1024_A-16/bert_model.ckpt \
--converted_checkpoint_path  /public/home/hepj/model_source/bert-large-TF2/bert_model.ckpt

# Parameter descriptions
--bert_config_file           BERT config file
--checkpoint_to_convert      path to the checkpoint to convert
--converted_checkpoint_path  output path for the converted checkpoint

After conversion, rename bert_model.ckpt-1.data-00000-of-00001 to bert_model.ckpt.data-00000-of-00001,
and bert_model.ckpt-1.index to bert_model.ckpt.index.
```
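
The rename step can be scripted; a minimal sketch for the bert-large checkpoint (the directory is the one used in the command above):

```python
# Rename the converted checkpoint files to drop the "-1" suffix.
import os

base = "/public/home/hepj/model_source/bert-large-TF2"
for ext in ("data-00000-of-00001", "index"):
    os.rename(os.path.join(base, f"bert_model.ckpt-1.{ext}"),
              os.path.join(base, f"bert_model.ckpt.{ext}"))
```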

### Single-Card Run - SQuAD 1.1

```
sh bert_squad.sh
  # Parameter descriptions
  --mode                     run mode: train_and_eval, export_only, or predict
  --vocab_file               path to the vocab file
  --input_meta_data_path     metadata file used for training and evaluation
  --train_data_path          training data path
  --eval_data_path           eval data path
  --bert_config_file         BERT config file
  --init_checkpoint          initial checkpoint path
  --train_batch_size         total training batch size
  --predict_file             prediction file path
  --eval_batch_size          eval batch size
  --steps_per_loop           interval (in steps) between log prints
  --learning_rate            learning rate
  --num_train_epochs         number of training epochs
  --model_dir                directory for saving the model
  --distribution_strategy    distribution strategy
  --num_gpus                 number of GPUs to use
```
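
Predictions produced in `predict` mode can be scored with the official evaluation script linked in the Datasets section; a minimal sketch (file names are assumptions; the script takes the dev set and a predictions JSON as positional arguments):

```python
# Run the official SQuAD v1.1 scorer on a predictions file.
import subprocess

subprocess.run(
    ["python", "evaluate-v1.1.py", "dev-v1.1.json", "predictions.json"],
    check=True,
)
```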

### Multi-Card Run - SQuAD 1.1

```
sh bert_squad_gpus.sh
```

## Accuracy

Convergence results on a single Z100 card:

|        Model Task        |  Training Accuracy   |
| :----------------------: | :------------------: |
| MNLI-class (single card) | val_accuracy: 0.7387 |
| SQuAD 1.1 (single card)  | F1-score: 0.916378   |

# Application Scenarios

## Algorithm Category

`Natural language processing, text classification, question answering`

## Key Application Industries

`Internet`

# Source Repository and Issue Reporting

https://developer.hpccube.com/codes/modelzoo/bert-tf2

# Reference

https://github.com/tensorflow/models/tree/v2.3.0/official/nlp