hepj987 committed

# Generative Pre-Training 2 (GPT2)

### Model Introduction

```
GPT2: the second-generation Generative Pre-Training model (Generative Pre-Training 2).
```

### Model Architecture

```
GPT2 uses the Decoder part of the Transformer, with several modifications to the standard Transformer Decoder, and runs distributed training via Megatron and DeepSpeed.
```
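As an illustration of the decoder-only structure, the following is a minimal pure-Python sketch (not this repository's code; Megatron-DeepSpeed builds its masks in fused kernels) of the causal self-attention mask that a Transformer Decoder applies:

```python
# Minimal sketch of the causal (autoregressive) mask used by a
# Transformer Decoder: position i may only attend to positions <= i,
# so each token sees itself and its past, never the future.

def causal_mask(seq_len):
    """Return a seq_len x seq_len matrix of 0/1 attention permissions."""
    return [[1 if j <= i else 0 for j in range(seq_len)]
            for i in range(seq_len)]

for row in causal_mask(4):
    print(row)
# [1, 0, 0, 0]
# [1, 1, 0, 0]
# [1, 1, 1, 0]
# [1, 1, 1, 1]
```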

### Dataset

```
wget https://huggingface.co/bigscience/misc-test-data/resolve/main/stas/oscar-1GB.jsonl.xz
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt
xz -d oscar-1GB.jsonl.xz
python tools/preprocess_data.py \
    --input oscar-1GB.jsonl \
    --output-prefix my-gpt2 \
    --vocab gpt2-vocab.json \
    --dataset-impl mmap \
    --tokenizer-type GPT2BPETokenizer \
    --merge-file gpt2-merges.txt \
    --append-eod \
    --workers 8
```
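For reference, `preprocess_data.py` consumes one JSON object per line, each carrying the raw document under a `"text"` field (the OSCAR dump above follows this layout). A toy sketch of that format, using an illustrative file name:

```python
# Toy sketch of the JSONL layout consumed by tools/preprocess_data.py.
# The file name "toy-oscar.jsonl" is illustrative only.
import json

docs = [{"text": "first document"}, {"text": "second document"}]

with open("toy-oscar.jsonl", "w", encoding="utf-8") as f:
    for doc in docs:
        f.write(json.dumps(doc) + "\n")

# preprocess_data.py tokenizes each "text" field and, with --append-eod,
# appends an end-of-document token after every document.
with open("toy-oscar.jsonl", encoding="utf-8") as f:
    texts = [json.loads(line)["text"] for line in f]

print(texts)  # ['first document', 'second document']
```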

Start a container from the image (example, mounting the project directory):

```
docker run -it --network=host --name=gpt2-hepj --privileged --device=/dev/kfd --device=/dev/dri --ipc=host --shm-size=16G  --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root --ulimit stack=-1:-1 --ulimit memlock=-1:-1  -v /public/hepj/megatron-deepspeed_dtk22.10:/home/megatron-deepspeed_dtk22.10 image.sourcefind.cn:5000/dcu/admin/base/pytorch:1.10.0-centos7.6-dtk-22.10.1-py37-latest
```

## GPT2 Pretraining

### Environment Setup

Running in Docker is recommended; a Docker image can be pulled from [光源](https://www.sourcefind.cn/#/service-details):

```
docker pull image.sourcefind.cn:5000/dcu/admin/base/vscode-pytorch:1.10.0-centos7.6-dtk-22.10-py37-latest
```

Enter the container and install the dependencies:

```
pip install -r requirements.txt  -i http://pypi.tuna.tsinghua.edu.cn/simple  --trusted-host pypi.tuna.tsinghua.edu.cn
```

### Training (single node)

```
rm megatron/arguments.py
cp megatron/arguments.py-one_node megatron/arguments.py
sh run-train.sh    # single node, 4 cards
```

```
# Key parameters
MODEL_NAME			model name (user-defined)
CHECKPOINT_PATH			checkpoint save/load path
DATA_PATH			dataset path (after preprocessing)
TENSORBOARD_PATH		TensorBoard output path
CODECARBON_PATH			CodeCarbon output path

N_GPUS				number of accelerator cards
TP_SIZE				tensor-parallel size
PP_SIZE				pipeline-parallel size
MICRO_BATCH_SIZE		micro batch size
GLOBAL_BATCH_SIZE		global batch size
NLAYERS				number of model layers
NHIDDEN				hidden dimension
NHEADS				number of attention heads
SEQ_LEN				maximum sequence length
SAVE_INTERVAL			checkpoint save interval

--train-samples			number of training samples
--eval-interval			evaluation interval
--eval-iters			evaluation iterations
```
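The batch-size variables above are linked: N_GPUS cards are divided into TP_SIZE × PP_SIZE model-parallel groups, and the remaining data-parallel replicas times MICRO_BATCH_SIZE times gradient-accumulation steps must equal GLOBAL_BATCH_SIZE. A small sketch of that arithmetic (the values below are illustrative, not the defaults of run-train.sh):

```python
# Sketch of how Megatron-style batch sizes relate to each other.
# Values below are illustrative, not the script's defaults.

def grad_accum_steps(n_gpus, tp_size, pp_size,
                     micro_batch_size, global_batch_size):
    """GLOBAL_BATCH_SIZE = MICRO_BATCH_SIZE * data_parallel * accum_steps."""
    assert n_gpus % (tp_size * pp_size) == 0
    data_parallel = n_gpus // (tp_size * pp_size)
    assert global_batch_size % (micro_batch_size * data_parallel) == 0
    return global_batch_size // (micro_batch_size * data_parallel)

# A single node with 4 cards, TP=2, PP=1 -> 2 data-parallel replicas.
steps = grad_accum_steps(n_gpus=4, tp_size=2, pp_size=1,
                         micro_batch_size=4, global_batch_size=64)
print(steps)  # 64 / (4 * 2) = 8 gradient-accumulation steps
```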

### GPT2 16B Model Training (multi-node)

A working Slurm environment on the DCU cluster is required.

Users are encouraged to build a python3 virtual environment quickly from the precompiled Python 3.7 packages; the DCU builds of pytorch, apex, torchaudio, colossalai, faiss, mmcv-full, torchvision, and tensorflow must be downloaded from the [光合开发者社区](https://cancon.hpccube.com:65024/4/main/).

```
export PYTHON3_LIB_PATH=/python_lib_path
virtualenv -p /python_bin_path/python3 --system-site-packages venv_gpt2
source env.sh	# activate the venv_gpt2 virtual environment

pip install -r requirements.txt  -i http://pypi.tuna.tsinghua.edu.cn/simple  --trusted-host pypi.tuna.tsinghua.edu.cn
```

```
rm megatron/arguments.py
cp megatron/arguments.py-nodes megatron/arguments.py
sbatch run-16B.sh    # the main parameters are in single-16B.sh
```

```
# Key parameters
MODEL_NAME			model name (user-defined)
CHECKPOINT_PATH			checkpoint save/load path
DATA_PATH			dataset path (after preprocessing)
TENSORBOARD_PATH		TensorBoard output path
CODECARBON_PATH			CodeCarbon output path
TP_SIZE				tensor-parallel size
PP_SIZE				pipeline-parallel size
MICRO_BATCH_SIZE		micro batch size
GLOBAL_BATCH_SIZE		global batch size
NLAYERS				number of layers
NHIDDEN				hidden dimension
NHEADS				number of attention heads
SEQ_LEN				maximum sequence length
SAVE_INTERVAL			checkpoint save interval

--train-samples			number of training samples
--eval-interval			evaluation interval
--eval-iters			evaluation iterations
```
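NLAYERS and NHIDDEN largely determine the "16B" model size: each transformer layer holds roughly 12·NHIDDEN² weights, plus the vocabulary embedding. A rough estimator can be sketched; the sanity-check configuration below is the public GPT-2 small config, not the values in single-16B.sh:

```python
# Rough GPT parameter-count estimate: about 12 * hidden^2 weights per
# transformer layer (attention + 4x MLP), plus vocab * hidden embeddings.
# Biases and layer norms are ignored, so this is approximate.

def approx_params(nlayers, nhidden, vocab_size):
    return 12 * nlayers * nhidden ** 2 + vocab_size * nhidden

# Sanity check against the public GPT-2 small config (12 layers,
# 768 hidden, 50257-token vocabulary): lands near the known ~124M.
small = approx_params(12, 768, 50257)
print(f"{small / 1e6:.0f}M")  # ~124M
```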

### Performance and Convergence

|   Cards   | Throughput (samples per second) | Convergence: lm loss | Convergence: lm loss PPL |
| :-------: | :-----------------------------: | :------------------: | :----------------------: |
| 16 x 4 DCU |             2.540              |     6.601086E+00     |       7.358937E+02       |



## GPT2 Text Generation

Text generation with GPT2 requires converting the trained model first; the conversion requires DeepSpeed 0.7.3 (bundled with this project).

```
pip install deepspeed-0.7.3+unknown-cp37-cp37m-linux_x86_64.whl -i http://pypi.tuna.tsinghua.edu.cn/simple  --trusted-host pypi.tuna.tsinghua.edu.cn
```

Make the following modifications to the installed DeepSpeed:

```
Edit /usr/local/lib/python3.7/site-packages/deepspeed/checkpoint/constants.py
At line 34, change
	ZERO_FILE_PREFIX = 'bf16_' + 'zero_pp_rank_'
to:
	ZERO_FILE_PREFIX = 'zero_pp_rank_'

Edit /usr/local/lib/python3.7/site-packages/deepspeed/ops/op_builder/builder.py
At line 133, in the function def assert_torch_info(torch_info):
delete the version checks below:
	install_torch_version = torch_info['version']
	install_cuda_version = torch_info['cuda_version']
	install_hip_version = torch_info['hip_version']

Edit /usr/local/lib/python3.7/site-packages/deepspeed/runtime/state_dict_factory.py
At line 177, in the function def check_ckpt_list(self):
delete the mp_world_size check:
	if 'mp_world_size' in sd.keys():
            assert len(self.ckpt_list) == sd['mp_world_size'], f"checkpoint count {len(self.ckpt_list)} is different from saved mp_world_size {sd['mp_world_size']}"

```

### Conversion Script

```
sh conver.sh
```

```
# Key parameters
The project path must be added to PYTHONPATH,
e.g. export PYTHONPATH=/home/megatron-deepspeed_dtk22.10:$PYTHONPATH

CHECKPOINT_PATH		path of the model to convert (down to the saved global_step directory)
output_folder		path for the converted model
target_tp		TP size after conversion (must match the training TP)
target_pp		PP size after conversion (set to 1)
```
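The conversion is essentially a resharding of checkpoint tensors. As a toy illustration (not the converter's actual code) of why the shard layout has to line up with target_tp: a weight that was split column-wise across TP ranks is only recovered correctly by concatenating the shards in rank order:

```python
# Toy sketch of checkpoint resharding. Pure-Python nested lists stand in
# for tensors; illustrative only. A weight split column-wise across TP
# ranks is recovered by concatenating the per-rank shards row by row,
# which is why the shard count must match how the model was trained.

def merge_tp_shards(shards):
    """Concatenate per-rank column shards back into full weight rows."""
    return [sum((shard[row] for shard in shards), [])
            for row in range(len(shards[0]))]

rank0 = [[1, 2], [5, 6]]   # columns 0-1 of a 2x4 weight
rank1 = [[3, 4], [7, 8]]   # columns 2-3
print(merge_tp_shards([rank0, rank1]))  # [[1, 2, 3, 4], [5, 6, 7, 8]]
```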

### Unconditional Text Generation

```
sh run-inf.sh    # example: small model on a single node
```

```
# At generation time the model parameters must match training (TP included)
--micro-batch-size	micro batch size
--out-seq-length	output text length
--genfile		path for saving the generated text
--num-samples		number of samples to generate
```
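Unconditional generation repeatedly samples the next token from the model's output distribution until the sequence ends or --out-seq-length tokens are produced. A toy, model-free sketch of that loop (the vocabulary, dummy distribution, and names below are illustrative, not the repository's code):

```python
import random

# Toy sketch of an autoregressive sampling loop. A real run scores the
# context with the trained GPT2; here a fixed dummy distribution over a
# tiny vocabulary stands in for the model (illustrative only).
VOCAB = ["hello", "world", "<eod>"]

def dummy_model(context):
    """Stand-in for the network: returns next-token probabilities."""
    return [0.5, 0.4, 0.1]

def generate(out_seq_length, seed=0):
    rng = random.Random(seed)
    tokens = []
    for _ in range(out_seq_length):
        probs = dummy_model(tokens)
        tok = rng.choices(VOCAB, weights=probs, k=1)[0]
        if tok == "<eod>":       # end-of-document token stops the sample
            break
        tokens.append(tok)
    return tokens

sample = generate(out_seq_length=8)
print(sample)
```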