ModelZoo / Baichuan-M3_pytorch — Commit 4bc377fc
Authored Mar 10, 2026 by shihm: "update readme" (parent 79962e84)
1 changed file (README.md): +8 additions, −61 deletions
@@ -94,69 +94,17 @@ print(response)

### vllm

#### Multi-node inference (removed by this commit)
Set the environment variables.

> Note:
> Write the environment variables into a `.sh` file on every node; after saving, `source` the `.sh` file on each compute node separately.
>
> `VLLM_HOST_IP`: the node's local communication IP. Prefer the IP of an IB (InfiniBand) NIC, **to avoid RCCL timeout problems**.
>
> `NCCL_SOCKET_IFNAME` and `GLOO_SOCKET_IFNAME`: the names of the interfaces that carry the node's local communication IP.
>
> To look up interfaces and their IPs: `ifconfig`.
>
> To check IB port state: `ibstat`. A port is usable only in the Active state, and all nodes must be consistent.
```bash
export ALLREDUCE_STREAM_WITH_COMPUTE=1
export VLLM_HOST_IP=x.x.x.x   # this compute node's IP: the address on the IB SOCKET_IFNAME interface
export NCCL_SOCKET_IFNAME=ibxxxx
export GLOO_SOCKET_IFNAME=ibxxxx
export NCCL_IB_HCA=mlx5_0:1   # name of the IB NIC in this environment
unset NCCL_ALGO
export NCCL_MIN_NCHANNELS=16
export NCCL_MAX_NCHANNELS=16
export NCCL_NET_GDR_READ=1
export HIP_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export VLLM_SPEC_DECODE_EAGER=1
export VLLM_MLA_DISABLE=0
export VLLM_USE_FLASH_MLA=1
export VLLM_RPC_TIMEOUT=1800000

# Recommended additional variable for K100_AI clusters:
export VLLM_ENFORCE_EAGER_BS_THRESHOLD=44

# Hygon CPU core binding (NUMA):
export VLLM_NUMA_BIND=1
export VLLM_RANK0_NUMA=0
export VLLM_RANK1_NUMA=1
export VLLM_RANK2_NUMA=2
export VLLM_RANK3_NUMA=3
export VLLM_RANK4_NUMA=4
export VLLM_RANK5_NUMA=5
export VLLM_RANK6_NUMA=6
export VLLM_RANK7_NUMA=7
```
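The NUMA-binding section above follows a mechanical pattern (rank *i* is bound to NUMA node *i*). As an illustration only, not part of the README, a short Python sketch that generates those export lines for an arbitrary rank count:

```python
# Generate the per-rank NUMA-binding exports following the README's
# pattern (VLLM_RANK<i>_NUMA=<i> for ranks 0..7). Illustrative only.
def numa_exports(num_ranks: int = 8) -> str:
    lines = ["export VLLM_NUMA_BIND=1"]
    lines += [f"export VLLM_RANK{i}_NUMA={i}" for i in range(num_ranks)]
    return "\n".join(lines)

print(numa_exports())
```

Appending the output of `numa_exports()` to the per-node `.sh` file reproduces the block above exactly.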
Start the Ray cluster. `x.x.x.x` is the head node's `VLLM_HOST_IP` from the previous step.

```bash
# Run on the head node
ray start --head --node-ip-address=x.x.x.x --port=6379 --num-gpus=8 --num-cpus=32
# Run on each worker node
ray start --address='x.x.x.x:6379' --num-gpus=8 --num-cpus=32
```
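Head and worker nodes run almost the same command with different flags. Purely as an illustration (not from the README), a Python helper that builds either command string so a single launch script can serve both roles; the `role` parameter and defaults are assumptions mirroring the flag values above:

```python
# Build the `ray start` command for a node, mirroring the README's
# head/worker commands. Illustrative sketch only.
def ray_start_cmd(role: str, head_ip: str, port: int = 6379,
                  gpus: int = 8, cpus: int = 32) -> str:
    if role == "head":
        return (f"ray start --head --node-ip-address={head_ip} "
                f"--port={port} --num-gpus={gpus} --num-cpus={cpus}")
    # any other role is treated as a worker joining the head's address
    return f"ray start --address='{head_ip}:{port}' --num-gpus={gpus} --num-cpus={cpus}"

print(ray_start_cmd("head", "x.x.x.x"))
print(ray_start_cmd("worker", "x.x.x.x"))
```

After both commands have been run, `ray status` on the head node should list every node before the server is started.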
Start the vLLM server (multi-node command, removed by this commit):

```bash
vllm serve /baichuan-inc/Baichuan-M3-235B \
    --host x.x.x.x \
    --port 8000 \
    --reasoning-parser qwen3 \
    --distributed-executor-backend ray \
    --tensor-parallel-size 16 \
    --trust-remote-code \
    --gpu-memory-utilization 0.9 \
    --served-model-name baichuan-m3
```

#### Single-node inference (added by this commit)

Start the vLLM server:

```bash
vllm serve /baichuan-inc/Baichuan-M3-235B \
    --tensor-parallel-size 8 \
    --trust-remote-code \
    --port 8000 \
    --gpu-memory-utilization 0.95 \
    --served-model-name baichuan-m3 \
    --reasoning-parser deepseek_r1
```
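Loading a 235B-parameter checkpoint can take a long time before the server answers requests. As an illustrative sketch (not from the README), a small Python check against the OpenAI-compatible `/v1/models` endpoint that vLLM exposes; the host, port, and served-model name are taken from the serve command above:

```python
import json
import urllib.request

def models_url(host: str = "localhost", port: int = 8000) -> str:
    """URL of the OpenAI-compatible model-list endpoint served by vLLM."""
    return f"http://{host}:{port}/v1/models"

def server_ready(host: str = "localhost", port: int = 8000,
                 timeout: float = 2.0) -> bool:
    """True once the server answers /v1/models and lists baichuan-m3."""
    try:
        with urllib.request.urlopen(models_url(host, port), timeout=timeout) as r:
            data = json.load(r).get("data", [])
            return any(m.get("id") == "baichuan-m3" for m in data)
    except OSError:  # connection refused / timeout while still loading
        return False

print(models_url())  # http://localhost:8000/v1/models
```

Polling `server_ready()` in a loop is one simple way to gate downstream clients on model load completion.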
Once started, the server can be accessed as follows:

@@ -187,11 +135,10 @@ curl http://localhost:8000/v1/chat/completions \
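The body of the curl request is elided in this view. Purely as an illustration, a minimal Python sketch of an equivalent request payload for the OpenAI-compatible `/v1/chat/completions` endpoint, assuming the `baichuan-m3` served-model name from the serve command:

```python
import json

def chat_payload(prompt: str, model: str = "baichuan-m3") -> str:
    """JSON body for POST /v1/chat/completions (OpenAI-compatible API)."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

print(chat_payload("你好"))
```

The string returned by `chat_payload()` can be sent with curl's `-d` flag alongside `-H "Content-Type: application/json"`.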
## Pretrained weights

This commit lowers the minimum card requirement from 16 to 8.

| Model | Weight size | DCU model | Min. cards required | Download |
|:-----:|:----------:|:---------:|:-------------------:|:--------:|
| Baichuan-M3-235B | 235B | BW1000 | 8 | [Modelscope](https://modelscope.cn/models/baichuan-inc/Baichuan-M3-235B) |
## Source repository and issue reporting

- https://developer.sourcefind.cn/codes/modelzoo/baichuan-m3-235b_vllm

## References

- https://www.baichuan-ai.com/blog/baichuan-M3