"sgl-kernel/include/sgl_kernel_ops.h" did not exist on "a7000a765041cf870bb9964ee533dd0fb7cebcdf"
README.md 5.58 KB
Newer Older
hepj987's avatar
hepj987 committed
1
# Generative Pre-Training 2 (GPT2)

### Model Introduction

```
GPT2: the second-generation generative pre-training model (Generative Pre-Training 2).
```

### Model Architecture

```
GPT2 uses the Transformer decoder architecture, with some modifications to the standard Transformer decoder, and runs distributed training via Megatron and DeepSpeed.
```

### Dataset

```
wget https://huggingface.co/bigscience/misc-test-data/resolve/main/stas/oscar-1GB.jsonl.xz
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt
xz -d oscar-1GB.jsonl.xz
python tools/preprocess_data.py \
    --input oscar-1GB.jsonl \
    --output-prefix my-gpt2 \
    --vocab gpt2-vocab.json \
    --dataset-impl mmap \
    --tokenizer-type GPT2BPETokenizer \
    --merge-file gpt2-merges.txt \
    --append-eod \
    --workers 8
```
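The preprocessing step above consumes one JSON object per line; by default, Megatron's `preprocess_data.py` reads the `text` key from each line (the `--json-keys` default). A minimal sketch for sanity-checking a JSONL file before preprocessing, using a hypothetical `sample.jsonl`:

```shell
# Hypothetical sample.jsonl: one JSON object per line with a "text" field,
# matching what preprocess_data.py expects by default.
printf '%s\n' '{"text": "First document."}' '{"text": "Second document."}' > sample.jsonl
# Verify every line parses as JSON and carries a "text" key.
python3 -c 'import json; assert all("text" in json.loads(l) for l in open("sample.jsonl"))' && echo "format ok"
```

The same check can be pointed at `oscar-1GB.jsonl` after decompression (it may take a while on a 1 GB file).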

## GPT2 Pretraining

### Environment Setup

Running in Docker is recommended; a Docker image can be pulled from [光源](https://www.sourcefind.cn/#/service-details):

```
docker pull image.sourcefind.cn:5000/dcu/admin/base/vscode-pytorch:1.10.0-centos7.6-dtk-22.10-py37-latest
```

Inside the Docker container, install the dependencies:

```
pip install -r requirements.txt -i http://pypi.tuna.tsinghua.edu.cn/simple --trusted-host pypi.tuna.tsinghua.edu.cn
```

### Training (Single Node)

```
rm megatron/arguments.py
cp megatron/arguments.py-one_node megatron/arguments.py
sh run-train.sh    # single node, four cards
```

```
# Important parameters
MODEL_NAME          model name (user-defined)
CHECKPOINT_PATH     checkpoint save/load path
DATA_PATH           dataset path (after conversion)
TENSORBOARD_PATH    tensorboard output path
CODECARBON_PATH     codecarbon output path

N_GPUS              number of accelerator cards
TP_SIZE             tensor-parallel size
PP_SIZE             pipeline-parallel size
MICRO_BATCH_SIZE    micro batch size
GLOBAL_BATCH_SIZE   global batch size
NLAYERS             number of model layers
NHIDDEN             hidden dimension
NHEADS              number of attention heads
SEQ_LEN             maximum sequence length
SAVE_INTERVAL       checkpoint save interval

--train-samples     number of training samples
--eval-interval     evaluation interval
--eval-iters        evaluation iterations
```
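As a worked example of how these sizes relate (illustrative values only, not the defaults in run-train.sh): the data-parallel size is N_GPUS divided by TP_SIZE × PP_SIZE, and the gradient-accumulation steps follow from GLOBAL_BATCH_SIZE divided by MICRO_BATCH_SIZE × data-parallel size.

```shell
# Illustrative values only; the real defaults live in run-train.sh.
N_GPUS=4
TP_SIZE=2
PP_SIZE=2
MICRO_BATCH_SIZE=4
GLOBAL_BATCH_SIZE=32

# N_GPUS must be divisible by TP_SIZE * PP_SIZE.
DP_SIZE=$(( N_GPUS / (TP_SIZE * PP_SIZE) ))
# GLOBAL_BATCH_SIZE must be divisible by MICRO_BATCH_SIZE * DP_SIZE.
GRAD_ACC=$(( GLOBAL_BATCH_SIZE / (MICRO_BATCH_SIZE * DP_SIZE) ))
echo "data-parallel size: $DP_SIZE, gradient accumulation steps: $GRAD_ACC"
```

If either division does not come out even, Megatron will refuse the configuration, so it is worth checking before launching.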

### GPT2 16B Training (Multi-Node)

Requires a working Slurm environment on the DCU cluster.

Using the precompiled Python 3.7 package to quickly set up a Python 3 virtual environment is recommended. The DCU builds of pytorch, apex, torchaudio, colossalai, faiss, mmcv-full, torchvision, and tensorflow must be downloaded from the [光合 developer community](https://cancon.hpccube.com:65024/4/main/).

```
export PYTHON3_LIB_PATH=/python_lib_path
virtualenv -p /python_bin_path/python3 --system-site-packages venv_gpt2
source env.sh    # enter the venv_gpt2 virtual environment

pip install -r requirements.txt -i http://pypi.tuna.tsinghua.edu.cn/simple --trusted-host pypi.tuna.tsinghua.edu.cn
```

```
rm megatron/arguments.py
cp megatron/arguments.py-nodes megatron/arguments.py
sbatch run-16B.sh    # main parameters are in single-16B.sh
```

```
# Important parameters
MODEL_NAME          model name (user-defined)
CHECKPOINT_PATH     checkpoint save/load path
DATA_PATH           dataset path (after conversion)
TENSORBOARD_PATH    tensorboard output path
CODECARBON_PATH     codecarbon output path

TP_SIZE             tensor-parallel size
PP_SIZE             pipeline-parallel size
MICRO_BATCH_SIZE    micro batch size
GLOBAL_BATCH_SIZE   global batch size
NLAYERS             number of layers
NHIDDEN             hidden dimension
NHEADS              number of attention heads
SEQ_LEN             maximum sequence length
SAVE_INTERVAL       checkpoint save interval

--train-samples     number of training samples
--eval-interval     evaluation interval
--eval-iters        evaluation iterations
```

### Performance and Convergence

|   Cards   | Throughput (samples per second) | lm loss at convergence | lm loss PPL at convergence |
| :-------: | :-----------------------------: | :--------------------: | :------------------------: |
| 16 x 4DCU |              2.540              |      6.601086E+00      |        7.358937E+02        |

## GPT2 Text Generation

To use GPT2 for text generation, the trained model must first be converted. Conversion requires DeepSpeed 0.7.3 (included in this project):

```
pip install deepspeed-0.7.3+unknown-cp37-cp37m-linux_x86_64.whl -i http://pypi.tuna.tsinghua.edu.cn/simple --trusted-host pypi.tuna.tsinghua.edu.cn
```

Apply the following modifications to DeepSpeed:

```
Edit /usr/local/lib/python3.7/site-packages/deepspeed/checkpoint/constants.py
Line 34:
	ZERO_FILE_PREFIX = 'bf16_' + 'zero_pp_rank_'
Change to:
	ZERO_FILE_PREFIX = 'zero_pp_rank_'

Edit /usr/local/lib/python3.7/site-packages/deepspeed/ops/op_builder/builder.py
Line 133, in the function def assert_torch_info(torch_info):
Delete the version checks below it:
	install_torch_version = torch_info['version']
	install_cuda_version = torch_info['cuda_version']
	install_hip_version = torch_info['hip_version']

Edit /usr/local/lib/python3.7/site-packages/deepspeed/runtime/state_dict_factory.py
Line 177, in the function def check_ckpt_list(self):
Delete the mp_world_size check:
	if 'mp_world_size' in sd.keys():
            assert len(self.ckpt_list) == sd['mp_world_size'], f"checkpoint count {len(self.ckpt_list)} is different from saved mp_world_size {sd['mp_world_size']}"
```
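The constants.py change can also be applied with sed rather than a manual edit. A sketch that rehearses the substitution on a local excerpt first, so it can be checked before touching the installed package (the path and original line are the ones quoted above):

```shell
# Rehearse the constants.py edit on a local excerpt before applying it
# to the installed deepspeed package.
cat > constants_excerpt.py <<'EOF'
ZERO_FILE_PREFIX = 'bf16_' + 'zero_pp_rank_'
EOF
sed -i "s/'bf16_' + 'zero_pp_rank_'/'zero_pp_rank_'/" constants_excerpt.py
cat constants_excerpt.py
# Once verified, run the same sed against
# /usr/local/lib/python3.7/site-packages/deepspeed/checkpoint/constants.py
```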

### Conversion Script

```
sh conver.sh
```

```
# Important parameters
The project path must be added to PYTHONPATH,
e.g. export PYTHONPATH=/home/megatron-deepspeed_dtk22.10:$PYTHONPATH

CHECKPOINT_PATH  path of the model to convert (down to the saved global_step)
output_folder    path for the converted model
target_tp        TP size after conversion (must match training)
target_pp        PP size after conversion (set to 1)
```
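A hedged example of how these variables might be filled in before running conver.sh; all values are placeholders, and the checkpoint path in particular depends on your SAVE_INTERVAL:

```shell
# Placeholder values; adjust to your own run.
export PYTHONPATH=/home/megatron-deepspeed_dtk22.10:$PYTHONPATH
CHECKPOINT_PATH=checkpoints/gpt2/global_step10000   # down to the saved global_step
output_folder=checkpoints/gpt2-converted
target_tp=2   # must match the TP used during training
target_pp=1   # pipeline parallelism collapsed to 1
echo "converting $CHECKPOINT_PATH -> $output_folder (tp=$target_tp, pp=$target_pp)"
```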

### Unconditional Text Generation

```
sh run-inf.sh    # single-node small model as an example
```

```
# All model parameters at generation time must match training (including TP)
--micro-batch-size      micro batch size
--out-seq-length        output text length
--genfile               path where the generated text is saved
--num-samples           number of samples to generate
```
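For instance, the generation flags might be assembled like this (hypothetical values; the model-shape arguments such as NLAYERS and NHIDDEN must additionally match the training run exactly):

```shell
# Placeholder generation flags to append to the run-inf.sh command line.
GEN_ARGS="--micro-batch-size 1 --out-seq-length 256 --num-samples 10 --genfile unconditional_samples.json"
echo "$GEN_ARGS"
```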