## GLM-5.1
## Paper
[GLM-5.1: Towards Long-Horizon Tasks](https://z.ai/blog/glm-5.1)

## Model Overview
GLM-5.1 is Zhipu AI's next-generation flagship model for agentic engineering, with substantially stronger coding ability than its predecessor. It achieves state-of-the-art results on SWE-Bench Pro and outperforms GLM-5 by a wide margin on NL2Repo (code-repository generation) and Terminal-Bench 2.0 (real terminal tasks).
<div align=center>
    <img src="./doc/bench_51.png"/>
</div>
The most meaningful leap, however, goes beyond first-pass performance. Earlier models (including GLM-5) tend to exhaust their capability reserves too early: they quickly apply familiar techniques to get initial results, then stall, and giving them more time does not help.

By contrast, GLM-5.1 is designed to work on agentic tasks efficiently over much longer horizons. It shows better judgment on ambiguous problems and sustains productive output across longer sessions. It can decompose complex problems, run experiments, interpret results, and pinpoint blockers. By continually reviewing its own reasoning and iterating on its strategy, GLM-5.1 keeps improving across hundreds of interaction turns and thousands of tool calls: the longer it runs, the better the results.

## Requirements
| Software | Version |
| :------: | :-----: |
| DTK | 26.04 |
| python | 3.10.12 |
| transformers | 5.2.0 |
| torch | 2.9.0 |
| vLLM | 0.15.1 |
| SGLang | 0.5.10rc0 |

Currently, only the following images are supported:
- **For vLLM inference, use:** harbor.sourcefind.cn:5443/dcu/admin/base/custom:vllm015-ubuntu22.04-dtk26.04-glm5-0408
- **For SGLang inference, use:** harbor.sourcefind.cn:5443/dcu/admin/base/custom:sglang-0.5.10-glm5-0416

- Adjust the mount path (`-v`) to match the actual location of your model and data
- The example below launches the `vLLM` image; if you use `SGLang`, substitute the corresponding image address

```bash
docker run -it \
    --shm-size 256g \
    --network=host \
    --name glm-5.1 \
    --privileged \
    --device=/dev/kfd \
    --device=/dev/dri \
    --device=/dev/mkfd \
    --group-add video \
    --cap-add=SYS_PTRACE \
    --security-opt seccomp=unconfined \
    -u root \
    -v /opt/hyhal/:/opt/hyhal/:ro \
    -v /path/your_code_data/:/path/your_code_data/ \
    harbor.sourcefind.cn:5443/dcu/admin/base/custom:vllm015-ubuntu22.04-dtk26.04-glm5-0408 bash
```
More images are available for download from [光源](https://sourcefind.cn/#/service-list).
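
Before going further, it can be worth checking that the DCUs are visible inside the container. A minimal sketch, assuming the DTK image ships a ROCm-style `rocm-smi` (some DCU environments provide `hy-smi` instead):

```bash
# List the visible accelerators; adjust the tool name to what your image provides.
rocm-smi
```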

## Dataset
`N/A`

## Training
`N/A`

## Inference
> If you hit `ImportError: librocm_smi64.so.2: cannot open shared object file: No such file or directory`, the machine's hyhal version is too old; please upgrade it.
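
A quick way to check whether the library exists at all is to search the hyhal mount and the usual library paths; a minimal sketch (paths follow the `docker run` example above):

```bash
# Look for librocm_smi64.so.2 under the read-only hyhal mount and common lib dirs.
find /opt/hyhal /usr/lib /usr/local/lib -name "librocm_smi64.so*" 2>/dev/null
```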

### SGLang
1. Set the environment variables
```bash
export SGLANG_USE_LIGHTOP=1
export HIP_GRAPH_USE_CMD_CACHE=0
export SGLANG_ROCM_USE_AITER_MOE=0
```

2. Launch the server
```bash
model_path=ZhipuAI/GLM-5.1-FP8

option="--numa-node 0 0 0 0 1 1 1 1 "
option+=" --disable-radix-cache "
option+=" --chunked-prefill-size 16384"
option+=" --page-size 64 "
option+=" --nsa-prefill-backend flashmla_auto --nsa-decode-backend flashmla_kv "
# option+=" --quantization slimquant_marlin "

python3 -m sglang.launch_server --model-path "${model_path}" ${option} \
                                --trust-remote-code \
                                --reasoning-parser glm45 \
                                --tool-call-parser glm47 \
                                --kv-cache-dtype fp8_e4m3 \
                                --dtype bfloat16 \
                                --mem-fraction-static 0.925 \
                                --host 0.0.0.0 \
                                --port 8001 \
                                --tp-size 8 \
                                --context-length 32768 \
                                --served-model-name glm-5.1-fp8
```
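
Loading a 754B model can take a while. Before sending requests, you can optionally confirm the server is up via the OpenAI-compatible model listing (assuming the standard endpoint SGLang exposes):

```bash
# Returns the served model names once the server is ready.
curl -s http://localhost:8001/v1/models
```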

3. Once the server is up, it can be queried as follows:
```bash
curl http://localhost:8001/v1/chat/completions   \
    -H "Content-Type: application/json"  \
    -d '{
        "model": "glm-5.1-fp8",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "What is 15% of 240?"}
        ],
        "max_tokens": 2048,
        "temperature": 0.7,
        "chat_template_kwargs": {"enable_thinking": false}
    }'
```
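
The request above disables thinking mode. Since the server runs with `--reasoning-parser glm45`, flipping `enable_thinking` to `true` should return the model's reasoning separately (by the usual parser convention, in a `reasoning_content` field; a sketch, not verified against this exact build):

```bash
curl http://localhost:8001/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "glm-5.1-fp8",
        "messages": [
          {"role": "user", "content": "What is 15% of 240?"}
        ],
        "max_tokens": 2048,
        "chat_template_kwargs": {"enable_thinking": true}
    }'
```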

### vLLM
#### Single-Node Inference
1. Set the environment variables
```bash
# Environment variables
# Clear stale kernel caches first
rm -rf ~/.cache
rm -rf ~/.triton
export HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export ALLREDUCE_STREAM_WITH_COMPUTE=1
export NCCL_MIN_NCHANNELS=16
export NCCL_MAX_NCHANNELS=16
export Allgather_Base_STREAM_WITH_COMPUTE=1
export SENDRECV_STREAM_WITH_COMPUTE=1
export HIP_KERNEL_EVENT_SYSTENFENCE=1
export VLLM_RPC_TIMEOUT=1800000
export VLLM_USE_PD_SPLIT=1
export VLLM_USE_PIECEWISE=1
export VLLM_REJECT_SAMPLE_OPT=1
export USE_FUSED_RMS_QUANT=0
export USE_FUSED_SILU_MUL_QUANT=1
export VLLM_USE_GLOBAL_CACHE13=1
export VLLM_FUSED_MOE_CHUNK_SIZE=16384
export VLLM_CUSTOM_CACHE=1
export VLLM_USE_OPT_CAT=1
export VLLM_USE_FUSED_FILL_RMS_CAT=1
export VLLM_USE_LIGHTOP_MOE_SUM_MUL_ADD=0
export VLLM_USE_LIGHTOP_RMS_ROPE_CONCAT=0
export VLLM_USE_FLASH_MLA=1
export VLLM_DISABLE_DSA=0
export USE_LIGHTOP_TOPK=1
export USE_LIGHTOP_PER_TOKEN_GROUP_QUANT_FP8=1
export USE_LIGHTOP_CONVERT_REQ_INDEX_TO_GLOBAL_INDEX=1
```

2. Start `vllm serve`
```bash
vllm serve ZhipuAI/GLM-5.1-FP8 \
    --gpu-memory-utilization 0.925 \
    --port 8001 \
    --tensor-parallel-size 8 \
    --tool-call-parser glm47 \
    --reasoning-parser glm45 \
    --enable-auto-tool-choice \
    --kv-cache-dtype fp8_ds_mla \
    --served-model-name glm-5.1-fp8 \
    --disable-log-requests \
    --compilation-config '{"pass_config": {"fuse_act_quant": false}}'
```

3. Once the server is up, it can be queried as follows:
```bash
curl http://localhost:8001/v1/chat/completions   \
    -H "Content-Type: application/json"  \
    -d '{
        "model": "glm-5.1-fp8",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Summarize GLM-5 in one sentence."}
        ],
        "max_tokens": 4096,
        "temperature": 0.7,
        "chat_template_kwargs": {"enable_thinking": false}
    }'
```
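
Because the server is launched with `--tool-call-parser glm47` and `--enable-auto-tool-choice`, it also accepts OpenAI-style tool definitions. A minimal sketch with a hypothetical `get_weather` function (the tool name and schema are illustrative only):

```bash
curl http://localhost:8001/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "glm-5.1-fp8",
        "messages": [
          {"role": "user", "content": "What is the weather in Beijing?"}
        ],
        "tools": [{
          "type": "function",
          "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
              "type": "object",
              "properties": {"city": {"type": "string"}},
              "required": ["city"]
            }
          }
        }]
    }'
```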

#### Multi-Node Inference
1. Set the environment variables
> Notes:
> Write the environment variables into a `.sh` file on each node, then `source` that `.sh` file on every compute node.
>
> VLLM_HOST_IP: the node's local communication IP. Prefer the IP of an IB NIC **to avoid RCCL timeout issues**.
>
> NCCL_SOCKET_IFNAME and GLOO_SOCKET_IFNAME: the interface name that carries that IP.
>
> Interfaces and IPs can be listed with `ifconfig`.
>
> IB port status can be checked with `ibstat`; the port **must** be in the Active state to be usable, and all nodes must be consistent (see the snippet below).
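
The lookup commands referenced in the notes, collected for convenience:

```bash
# List interfaces and IPs; use the IB NIC's IP for VLLM_HOST_IP and its
# interface name for NCCL_SOCKET_IFNAME / GLOO_SOCKET_IFNAME.
ifconfig
# Check IB port state; it must report "Active" on every node.
ibstat | grep State
```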

```bash
export ALLREDUCE_STREAM_WITH_COMPUTE=1
export VLLM_HOST_IP=x.x.x.x # this compute node's IP: the address of the IB interface named in *_SOCKET_IFNAME
export NCCL_SOCKET_IFNAME=ibxxxx
export GLOO_SOCKET_IFNAME=ibxxxx
export NCCL_IB_HCA=mlx5_0:1 # name of the IB HCA in your environment
unset NCCL_ALGO
export NCCL_MIN_NCHANNELS=16
export NCCL_MAX_NCHANNELS=16
export NCCL_NET_GDR_READ=1
export HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export VLLM_SPEC_DECODE_EAGER=1
export VLLM_MLA_DISABLE=0
export VLLM_USE_FLASH_MLA=1
export VLLM_RPC_TIMEOUT=1800000

# Hygon CPU core binding: pin each rank to a NUMA node
export VLLM_NUMA_BIND=1
export VLLM_RANK0_NUMA=0
export VLLM_RANK1_NUMA=1
export VLLM_RANK2_NUMA=2
export VLLM_RANK3_NUMA=3
export VLLM_RANK4_NUMA=4
export VLLM_RANK5_NUMA=5
export VLLM_RANK6_NUMA=6
export VLLM_RANK7_NUMA=7
```

2. Start the Ray cluster
> x.x.x.x is the VLLM_HOST_IP from step 1

```bash
# run on the head node
ray start --head --node-ip-address=x.x.x.x --port=6379 --num-gpus=8 --num-cpus=32
# run on each worker node
ray start --address='x.x.x.x:6379' --num-gpus=8 --num-cpus=32
```
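
Before launching vLLM, you can confirm on the head node that every worker has joined and all GPUs are registered:

```bash
ray status
```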

3. Start `vllm serve`
```bash
# tensor-parallel-size: 32 on BW1000, 16 on BW1100
vllm serve ZhipuAI/GLM-5.1 \
    --port 8001 \
    --trust-remote-code \
    --tensor-parallel-size 32 \
    --gpu-memory-utilization 0.85 \
    --distributed-executor-backend ray \
    --dtype bfloat16 \
    --max-model-len 32768 \
    --tool-call-parser glm47 \
    --reasoning-parser glm45 \
    --enable-auto-tool-choice \
    --served-model-name glm-5.1
```
```

4. Once the server is up, it can be queried as follows:
```bash
curl http://localhost:8001/v1/chat/completions   \
    -H "Content-Type: application/json"  \
    -d '{
        "model": "glm-5.1",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Summarize GLM-5 in one sentence."}
        ],
        "max_tokens": 4096,
        "temperature": 1
    }'
```
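
The endpoint also supports standard OpenAI streaming: add `"stream": true` to receive tokens incrementally as server-sent events (a sketch reusing the request above; `-N` disables curl's output buffering):

```bash
curl -N http://localhost:8001/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "glm-5.1",
        "messages": [
          {"role": "user", "content": "Summarize GLM-5 in one sentence."}
        ],
        "max_tokens": 1024,
        "stream": true
    }'
```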

## Results
<div align=center>
    <img src="./doc/result-DCU.jpg"/>
</div>

### Accuracy
`DCU accuracy matches GPU accuracy; inference framework: vLLM.`

## Pretrained Weights
| Model | Weight Size | DCU Model | Min. Cards | Download |
|:-----:|:-----------:|:---------:|:----------:|:--------:|
| GLM-5.1 | 754B | BW1000 | 32 | [ModelScope](https://modelscope.cn/models/ZhipuAI/GLM-5.1) |
| GLM-5.1 | 754B | BW1100 | 16 | [ModelScope](https://modelscope.cn/models/ZhipuAI/GLM-5.1) |
| GLM-5.1-FP8 | 754B | BW1100 | 8 | [ModelScope](https://modelscope.cn/models/ZhipuAI/GLM-5.1-FP8) |
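
To fetch weights ahead of time, one option is the ModelScope CLI; a minimal sketch, assuming `modelscope` is installed via pip (the `--local_dir` target is illustrative):

```bash
# Download the FP8 weights from ModelScope into a local directory.
modelscope download --model ZhipuAI/GLM-5.1-FP8 --local_dir ./GLM-5.1-FP8
```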

## Source Repository & Issue Feedback
- https://developer.sourcefind.cn/codes/modelzoo/glm-5.1

## References
- https://github.com/zai-org/GLM-5