"vscode:/vscode.git/clone" did not exist on "5de4051bcf88c51d7d74752caf33029363a7bfaa"
Commit 6afe24b1 authored by zhouxiang

Add Qwen-72B support

parent f3f9a9a3
[submodule "lmdeploy"] [submodule "lmdeploy"]
path = lmdeploy path = lmdeploy
url = http://developer.hpccube.com/codes/aicomponent/lmdeploy.git url = http://developer.hpccube.com/codes/aicomponent/lmdeploy.git
tag = dtk23.04-v0.0.13 branch = dtk23.10-v0.0.13-qwen
\ No newline at end of file \ No newline at end of file
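The submodule now tracks the dtk23.10-v0.0.13-qwen branch instead of the old dtk23.04 tag. A minimal sketch of bringing an existing checkout in line with the new configuration, assuming it is run from the superproject root after pulling this commit:
```
# Re-read .gitmodules and update the lmdeploy submodule to the commit
# pinned by the superproject.
git submodule sync lmdeploy
git submodule update --init --recursive lmdeploy
```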
@@ -55,6 +55,8 @@ cd .. && python3 setup.py install
 [Qwen-14B-chat](https://huggingface.co/Qwen/Qwen-14B-Chat/tree/main)
+[Qwen-72B-Chat](https://huggingface.co/Qwen/Qwen-72B-Chat)
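The Qwen-72B-Chat checkpoint is very large, so the weights are usually fetched ahead of the conversion step. One possible way to download them, assuming git-lfs is installed and using the same /path/to/model placeholder as the commands below:
```
# Download the Qwen-72B-Chat weights from Hugging Face (requires git-lfs).
git lfs install
git clone https://huggingface.co/Qwen/Qwen-72B-Chat /path/to/model
```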
 ### Run Qwen-7B-chat
 ```
 # Model conversion
@@ -89,19 +91,37 @@ lmdeploy serve gradio --model_path_or_server ./workspace_qwe7b --server_name {ip
 ### Run Qwen-14B-chat
 ```
 # Model conversion
-lmdeploy convert --model_name qwen-7b --model_path /path/to/model --model_format qwen --tokenizer_path None --dst_path ./workspace_qwe7b --tp 2 --quant_path None --group_size 0
+lmdeploy convert --model_name qwen-14b --model_path /path/to/model --model_format qwen --tokenizer_path None --dst_path ./workspace_qwen14b --tp 2 --quant_path None --group_size 0
 # Run from the command line
-lmdeploy chat turbomind --model_path ./workspace_qwe7b --tp 2
+lmdeploy chat turbomind --model_path ./workspace_qwen14b --tp 2
 # Serve a web UI
 Run in the shell:
-lmdeploy serve gradio --model_path_or_server ./workspace_qwe7b --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False
+lmdeploy serve gradio --model_path_or_server ./workspace_qwen14b --server_name {ip} --server_port {port} --batch_size 32 --tp 2 --restful_api False
+Open {ip}:{port} in a browser to chat
+```
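The conversion and serving commands above shard the model across devices with --tp 2 (and --tp 8 for 72B below), so the number of visible accelerators must be at least the requested tensor-parallel degree. A rough check, assuming the PyTorch build shipped with the DTK stack reports devices through the usual torch.cuda interface:
```
# Print the number of visible devices; it must be >= the --tp value used above.
python3 -c "import torch; print(torch.cuda.device_count())"
```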
+### Run Qwen-72B-chat
+```
+# Model conversion
+lmdeploy convert --model_name qwen-72b --model_path /path/to/model --model_format qwen --tokenizer_path None --dst_path ./workspace_qwen72b --tp 8 --quant_path None --group_size 0
+# Run from the command line
+lmdeploy chat turbomind --model_path ./workspace_qwen72b --tp 8
+# Serve a web UI
+Run in the shell:
+lmdeploy serve gradio --model_path_or_server ./workspace_qwen72b --server_name {ip} --server_port {port} --batch_size 32 --tp 8 --restful_api False
 Open {ip}:{port} in a browser to chat
 ```
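For a long-running 72B deployment it can be convenient to keep the gradio server alive after the shell exits. A minimal sketch using nohup, reusing the workspace path and placeholders from the command above; the log file name qwen72b_serve.log is arbitrary:
```
# Keep the gradio server running in the background and capture its output.
nohup lmdeploy serve gradio --model_path_or_server ./workspace_qwen72b \
    --server_name {ip} --server_port {port} --batch_size 32 --tp 8 --restful_api False \
    > qwen72b_serve.log 2>&1 &
```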
 ## result
 ![Qwen inference](docs/qwen推理.gif)
 ### Accuracy
...
lmdeploy @ 0189f17c
-Subproject commit e432dbb0e56caaf319b9c9d7b79eb8106852dc91
+Subproject commit 0189f17c859b879781235bd57163aae1b00f1e72
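After running the submodule update sketched earlier, the lmdeploy submodule should be pinned at the new commit; a quick way to verify:
```
# Show the commit currently checked out for the lmdeploy submodule;
# it should start with 0189f17c.
git submodule status lmdeploy
```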