# GLM-130B

## Paper

`GLM: General Language Model Pretraining with Autoregressive Blank Infilling`

- [https://arxiv.org/abs/2103.10360](https://arxiv.org/abs/2103.10360)

## Model Architecture

GLM-130B is an open bilingual (Chinese-English) bidirectional dense model with 130 billion parameters, pre-trained with the [General Language Model (GLM)](https://aclanthology.org/2022.acl-long.26) algorithm. GLM is a Transformer-based language model trained with an autoregressive blank-infilling objective.

<div align="center">
<img src="doc/transformers.jpg" width="300" height="400">
</div>

The main network configuration of GLM-130B is as follows:

| Model    | Hidden size | Layers | Heads | Vocab size | Positional encoding | Max sequence length |
| -------- | ----------- | ------ | ----- | ---------- | ------------------- | ------------------- |
| GLM-130B | 12288       | 70     | 96    | 150528     | RoPE                | 2048                |
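
As a quick cross-check (useful for the `gpt_gemm` arguments later in this README), the per-head size and the FFN inner size follow directly from the table, assuming the standard 4x feed-forward expansion used by GPT-style FasterTransformer models:

```python
# Derive the per-head and FFN inner sizes from the table above.
hidden_size, num_heads = 12288, 96
size_per_head = hidden_size // num_heads  # 128
inter_size = 4 * hidden_size              # 49152, standard 4x FFN expansion
vocab_size = 150528
print(size_per_head, inter_size, vocab_size)
```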

## Algorithm

GLM is a Transformer-based language model trained with an autoregressive blank-infilling objective, which gives it both autoregressive and autoencoding capabilities.

<div align="center">
<img src="doc/GLM.png" width="550" height="200">
</div>
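
As a rough illustration of blank infilling (a toy sketch; the token names and the span selection here are simplifications, not GLM's actual vocabulary or span-corruption recipe): a sampled span is replaced by a mask token in the bidirectionally-attended part of the input, then re-appended after a start-of-piece token, where the model predicts it autoregressively.

```python
# Toy sketch of an autoregressive blank-infilling training example.
# Part A keeps bidirectional context with the span masked out;
# Part B re-appends the span after [sop] for autoregressive prediction.
def make_blank_infilling_example(tokens, start, end):
    span = tokens[start:end]
    part_a = tokens[:start] + ["[MASK]"] + tokens[end:]
    part_b = ["[sop]"] + span  # targets: the span tokens, then [eop]
    return part_a + part_b, span + ["[eop]"]

inp, target = make_blank_infilling_example(
    ["GLM", "is", "a", "general", "language", "model"], 3, 5)
print(inp)     # ['GLM', 'is', 'a', '[MASK]', 'model', '[sop]', 'general', 'language']
print(target)  # ['general', 'language', '[eop]']
```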

This project targets fast inference of the GLM-130B model with FasterTransformer on a DCU platform with 8 cards and 32 GB of device memory per card.

## Environment Setup

### Environment Preparation

The inference Docker image is available from SourceFind (光源) and can be pulled as follows:

```
docker pull image.sourcefind.cn:5000/dcu/admin/base/custom:glm-ft-v1.1
```

### Container Startup

A reference command for launching the inference container is shown below; adjust it as needed:

```
# <container_name>: custom container name
# <project_path>: path to this project
docker run -it --name=<container_name> -v <project_path>:/work -w /work --device=/dev/kfd --device=/dev/dri --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --shm-size=16G --group-add 39 image.sourcefind.cn:5000/dcu/admin/base/custom:glm-ft-v1.1 /bin/bash
```
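
Here `--device=/dev/kfd --device=/dev/dri` exposes the DCU compute and render devices to the container, `--shm-size=16G` enlarges shared memory for multi-process inference, and `--group-add 39` adds the group owning the device nodes (typically the `video` group; verify the group id on your host).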

### Build

```
mkdir build
cd build
cmake -DSM=62 -DCMAKE_BUILD_TYPE=Release -DBUILD_MULTI_GPU=ON -DCMAKE_CXX_COMPILER=nvcc ..
make

# If linking fails near 100% with "Linking CUDA executable ../../bin/test_logprob_kernels", run:
cd tests/unittests
nvcc CMakeFiles/test_logprob_kernels.dir/test_logprob_kernels.cu.o -o ../../bin/test_logprob_kernels   -L/usr/local/mpi/lib  -Wl,-rpath,/usr/local/mpi/lib -lcublas -lcublasLt -lcudart ../../lib/liblogprob_kernels.a ../../lib/libmemory_utils.a  -L"/opt/dtk-23.04/cuda/targets/x86_64-linux/lib/stubs" -L"/opt/dtk-23.04/cuda/targets/x86_64-linux/lib" -lcudart -lrt -lpthread -ldl

```

## Dataset

## Inference

### Downloading and Converting the Original Model

Download the GLM-130B checkpoint [here](https://docs.google.com/forms/d/e/1FAIpQLSehr5Dh_i3TwACmFFi8QEgIVNYGmSPwV0GueIcsUev0NEfUug/viewform?usp=sf_link). Make sure all 60 chunks are fully downloaded, then merge them into a single archive and extract it:

```
cat glm-130b-sat.tar.part_* > glm-130b-sat.tar
tar xvf glm-130b-sat.tar
```
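
Before concatenating, it can be worth checking that all 60 parts are actually present; a minimal sketch (the part-file naming is taken from the glob pattern above):

```python
# Sanity check: count the downloaded checkpoint parts before merging.
import glob

parts = sorted(glob.glob("glm-130b-sat.tar.part_*"))
print(f"found {len(parts)} part files")
assert len(parts) == 60, "some parts are missing; re-download before running `cat`"
```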

Model conversion:

```
cd /work/build
python ../examples/cpp/glm/glm_weight_convt.py -i /home/glm-130b-sat/49300/ -o /home/glm-130b-sat-ft-model/
```

### Running the Example Program

Generate the gemm_config.in file:

```
# ./bin/gpt_gemm <batch_size> <beam_width> <max_input_len> <head_number> <size_per_head> <inter_size> <vocab_size> <data_type> <tensor_para_size>
./bin/gpt_gemm 1 1 128 96 128 49152 150528 1 8
```
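
These arguments match the model configuration above: 96 heads of per-head size 128 (hidden size 12288), inter_size 49152 (4x hidden), vocabulary 150528, and tensor_para_size 8 for the 8 DCU cards; data_type 1 selects FP16 (following the upstream FasterTransformer convention, where 0 is FP32 — worth verifying against the bundled gpt_gemm usage text).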

Edit the ../examples/cpp/glm/glm_config.ini configuration file as needed.

Then launch glm_example with 8 MPI ranks (one per DCU card, matching the tensor parallel size of 8):

```
mpirun -n 8 --allow-run-as-root ./bin/glm_example
```

The example program reads token ids from ../examples/cpp/glm/start_ids.csv as its input. The generated token ids are written to ./out and can be decoded back into text with:

```
python ../examples/pytorch/glm/glm_tokenize.py
```
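
If the bundled script does not fit your setup, the ids can in principle also be decoded directly with THUDM's icetk tokenizer, which GLM-130B uses. A hedged sketch (assumes `pip install icetk` works in your environment, and that `out` holds comma-separated token ids with one sequence per line, mirroring start_ids.csv — verify both against glm_tokenize.py):

```python
# Hypothetical standalone decoder for the generated token ids.
from icetk import icetk as tokenizer

with open("out") as f:
    for line in f:
        ids = [int(tok) for tok in line.strip().split(",") if tok]
        if ids:
            print(tokenizer.decode(ids))
```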

## Results

<div align="center">
<img src="doc/result.png">
</div>

## Accuracy

## Application Scenarios

### Algorithm Category

`Dialogue & Question Answering`

### Key Application Industries

`Healthcare, Scientific Research, Finance, Education`

## Source Repository and Issue Reporting

https://developer.sourcefind.cn/codes/modelzoo/glm130b_fastertransformer

## References

[THUDM/GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)](https://github.com/THUDM/GLM-130B)

[THUDM/FasterTransformer: Transformer related optimization, including BERT, GPT](https://github.com/THUDM/FasterTransformer)