ModelZoo / Mixtral_vllm

Commit 858a6317, authored Dec 20, 2024 by laibao. No commit message. Parent: 1811370a.
1 changed file: README.md (+19 additions, -21 deletions)
```
# <Host Path>: path on the host
# <Container Path>: mount path inside the container
# To map ports between host and container, remove the --network host flag
docker run -it --name mixtral_vllm --privileged --shm-size=64G --device=/dev/kfd --device=/dev/dri/ --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --ulimit memlock=-1:-1 --ipc=host --network host --group-add video -v /opt/hyhal:/opt/hyhal -v <Host Path>:<Container Path> <Image ID> /bin/bash
```
`Tips: On K100/Z100L, use the custom image: docker pull image.sourcefind.cn:5000/dcu/admin/base/custom:vllm0.5.0-dtk24.04.1-ubuntu20.04-py310-zk-v1. K100/Z100L does not support AWQ quantization.`
```
# <Host Path>: path on the host
# <Container Path>: mount path inside the container
docker build -t mixtral:latest .
docker run -it --name mixtral_vllm --privileged --shm-size=64G --device=/dev/kfd --device=/dev/dri/ --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --ulimit memlock=-1:-1 --ipc=host --network host --group-add video -v /opt/hyhal:/opt/hyhal:ro -v <Host Path>:<Container Path> mixtral:latest /bin/bash
```
### Anaconda (Method 3)
```
conda create -n mixtral_vllm python=3.10
```
The special deep learning libraries this project requires for DCU GPUs can be downloaded from the [光合](https://developer.hpccube.com/tool/) developer community.
### Model Download
| Base Model |
| -------------------------------------------------------------------------------------------------- |
| [Mixtral-8x7B-Instruct-v0.1](http://113.200.138.88:18080/aimodels/Mixtral-8x7B-Instruct-v0.1) |
| [Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1) |
### Offline Batch Inference
```bash
python examples/offline_inference.py
```
1. Specify input and output lengths
```bash
python benchmarks/benchmark_throughput.py --num-prompts 1 --input-len 32 --output-len 128 --model mixtral/Mixtral-8x7B-Instruct-v0.1 -tp 1 --trust-remote-code --enforce-eager --dtype float16
```
Here `--num-prompts` is the batch size, `--input-len` the input sequence length, `--output-len` the output token length, `--model` the model path, `-tp` the number of GPUs used, and `--dtype float16` the inference data type; if the model weights are in bfloat16, change this to float16 for inference. Setting `--output-len 1` measures first-token latency. `-q gptq` runs inference with a GPTQ-quantized model.
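The CLI flags above map directly onto vLLM's Python API. A minimal offline-inference sketch, assuming a DCU machine with vllm installed and the same model path as the benchmark commands:

```python
# Sketch of offline batch inference with vLLM's Python API.
# The model path mirrors the benchmark commands above and is an assumption.

def llm_kwargs(model, tp=1, dtype="float16"):
    """Translate the benchmark CLI flags (-tp, --dtype, ...) into LLM constructor kwargs."""
    return dict(model=model, tensor_parallel_size=tp, dtype=dtype,
                trust_remote_code=True, enforce_eager=True)

def run():
    # Heavy import kept local: requires vllm installed in the container or conda env.
    from vllm import LLM, SamplingParams
    llm = LLM(**llm_kwargs("mixtral/Mixtral-8x7B-Instruct-v0.1"))
    sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)
    for out in llm.generate(["晚上睡不着怎么办"], sampling):
        print(f"Prompt: {out.prompt!r}, Generated text: {out.outputs[0].text!r}")
```

Call `run()` from inside the environment set up above; the import is deferred so the helper can be inspected without vllm present.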
```bash
python benchmarks/benchmark_throughput.py --num-prompts 1 --model mixtral/Mixtral-8x7B-Instruct-v0.1 --dataset ShareGPT_V3_unfiltered_cleaned_split.json -tp 1 --trust-remote-code --enforce-eager --dtype float16
```
Here `--num-prompts` is the batch size, `--model` the model path, `--dataset` the dataset to use, `-tp` the number of GPUs used, and `--dtype float16` the inference data type; if the model weights are in bfloat16, change this to float16 for inference. `-q gptq` runs inference with a GPTQ-quantized model.
1. Start the server:
```bash
python -m vllm.entrypoints.openai.api_server --model mixtral/Mixtral-8x7B-Instruct-v0.1 --dtype float16 --enforce-eager -tp 1
```
2. Start the client:
```bash
python benchmarks/benchmark_serving.py --model mixtral/Mixtral-8x7B-Instruct-v0.1 --dataset ShareGPT_V3_unfiltered_cleaned_split.json --num-prompts 1 --trust-remote-code
```
The parameters are the same as for the dataset-based offline batch inference benchmark; see [benchmarks/benchmark_serving.py](benchmarks/benchmark_serving.py) for details.
Start the service:
```bash
vllm serve mixtral/Mixtral-8x7B-Instruct-v0.1 --enforce-eager --dtype float16 --trust-remote-code --port 8000
```
The path after `serve` is the model to load, and `--dtype` is the data type (float16). By default the predefined chat template from the tokenizer is used; `--chat-template` can supply a new template that overrides the default. `-q gptq` runs inference with a GPTQ-quantized model.
```bash
curl http://localhost:8000/v1/models
```
```bash
curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "mixtral/Mixtral-8x7B-Instruct-v0.1",
        "prompt": "晚上睡不着怎么办",
        "max_tokens": 7,
        "temperature": 0
    }'
```
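The same completion request can be issued from Python with only the standard library. A sketch, assuming the server started above is reachable at localhost:8000:

```python
# Sketch: call the OpenAI-compatible /v1/completions endpoint using only the
# standard library. URL and model path match the curl example above.
import json
from urllib import request

def build_request(model, prompt, max_tokens=7, temperature=0):
    """Build the JSON body for the /v1/completions endpoint."""
    return {"model": model, "prompt": prompt,
            "max_tokens": max_tokens, "temperature": temperature}

def complete(url="http://localhost:8000/v1/completions"):
    body = json.dumps(build_request("mixtral/Mixtral-8x7B-Instruct-v0.1",
                                    "晚上睡不着怎么办"), ensure_ascii=False)
    req = request.Request(url, data=body.encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # requires the vllm server to be running
        return json.loads(resp.read())["choices"][0]["text"]
```

`complete()` performs the network call, so run it only after starting the server.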
```bash
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "mixtral/Mixtral-8x7B-Instruct-v0.1",
        "messages": [
            {"role": "system", "content": "晚上睡不着怎么办"},
            {"role": "user", "content": "晚上睡不着怎么办"}
        ]
    }'
```
```
pip install gradio
```
2.1 Start the gradio service and follow the prompts
```
python gradio_openai_chatbot_webserver.py --model "mixtral/Mixtral-8x7B-Instruct-v0.1" --model-url http://localhost:8000/v1 --temp 0.8 --stop-token-ids ""
```
2.2 Change the file permissions
3. Start the OpenAI-compatible service
```
vllm serve mixtral/Mixtral-8x7B-Instruct-v0.1 --enforce-eager --dtype float16 --trust-remote-code --port 8000
```
4. Start the gradio service
```
python gradio_openai_chatbot_webserver.py --model "mixtral/Mixtral-8x7B-Instruct-v0.1" --model-url http://localhost:8000/v1 --temp 0.8 --stop-token-ids "" --host "0.0.0.0" --port 8001
```
5. Use the chat service
## Source Repository and Issue Feedback
* [https://developer.hpccube.com/codes/modelzoo/mixtral_vllm](https://developer.hpccube.com/codes/modelzoo/mixtral_vllm)
## References
* [https://github.com/vllm-project/vllm](https://github.com/vllm-project/vllm)
* [https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)