ModelZoo / Qwen_lmdeploy · Commits

Commit eafad7d3, authored Jan 31, 2024 by zhouxiang

Improve the README

Parent: 6dbf6277
Showing 1 changed file with 6 additions and 8 deletions: README.md (+6 −8)
README.md (view file @ eafad7d3)
@@ -66,16 +66,14 @@ cd .. && python3 setup.py install
 ### Run Qwen-7B-chat
 ```
 # Model conversion
-# <model_name>  name of the model ('base', 'llama', 'internlm', 'vicuna', 'internlm-chat-7b', 'internlm-chat', 'internlm-chat-7b-8k', 'internlm-chat-20b', 'internlm-20b', 'baichuan-7b', 'baichuan2-7b', 'puyu', 'llama2', 'qwen-7b', 'qwen-14b', 'codellama', 'solar')
+# <model_name>  name of the model ('llama', 'internlm', 'vicuna', 'wizardlM', 'internlm-chat-7b', 'internlm-chat', 'internlm-chat-7b-8k', 'internlm-chat-20b', 'internlm-20b', 'baichuan-7b', 'baichuan2-7b', 'puyu', 'llama2', 'qwen-7b', 'qwen-14b', 'qwen-72b', 'codellama', 'solar', 'ultralm', 'ultracm', 'yi')
 # <model_path>  path to the model
-# <model_format>  format of the model ('llama', 'hf', 'qwen')
+# <model_format>  format of the model ('llama', 'hf', None); may be omitted, defaults to None, in which case the code picks the format based on the model
-# <tokenizer_path>  path to the tokenizer model
+# <tokenizer_path>  path to the tokenizer model (defaults to None, in which case it is looked up inside model_path: 'tokenizer.model' for most models, 'qwen.tiktoken' for Qwen)
 # <dst_path>  destination path for the converted output (defaults to ./workspace)
 # <tp>  number of GPUs for tensor parallelism; should be a power of two (2^n)
-# <quant_path>  path to the quantized model, may be None (for int4 quantization, use the default None)
-# <group_size>  AWQ parameter for quantizing fp16 weights to 4 bits (for int4 quantization, use the default '0')
-lmdeploy convert --model_name qwen-7b --model_path /path/to/model --model_format qwen --tokenizer_path None --dst_path ./workspace_qwe7b --tp 1 --quant_path None --group_size 0
+lmdeploy convert --model_name qwen-7b --model_path /path/to/model --dst_path ./workspace_qwe7b --tp 1
 # Run in the bash CLI
 lmdeploy chat turbomind --model_path ./workspace_qwe7b --tp 1  # after entering a question, press Enter twice to run inference
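The `tokenizer_path` default described above (when it is None, the file is looked up inside `model_path`: `tokenizer.model` for most models, `qwen.tiktoken` for Qwen) can be sketched as a small shell helper. This is an illustrative sketch of the lookup rule, not part of lmdeploy; the function name is hypothetical.

```shell
# Hypothetical helper mirroring the README's described default:
# with no explicit tokenizer_path, pick the tokenizer file by model name.
resolve_tokenizer_path() {
  model_path=$1
  model_name=$2
  case $model_name in
    qwen-*) echo "$model_path/qwen.tiktoken" ;;   # Qwen models ship a tiktoken file
    *)      echo "$model_path/tokenizer.model" ;; # other models use tokenizer.model
  esac
}

resolve_tokenizer_path /path/to/model qwen-7b
```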
@@ -97,7 +95,7 @@ lmdeploy serve gradio --model_path_or_server ./workspace_qwe7b --server_name {ip
 ### Run Qwen-14B-chat
 ```
 # Model conversion
-lmdeploy convert --model_name qwen-14b --model_path /path/to/model --model_format qwen --tokenizer_path None --dst_path ./workspace_qwen14b --tp 2 --quant_path None --group_size 0
+lmdeploy convert --model_name qwen-14b --model_path /path/to/model --dst_path ./workspace_qwen14b --tp 2
 # Run in the bash CLI
 lmdeploy chat turbomind --model_path ./workspace_qwen14b --tp 2
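The `--tp` values used in these commands (1 for 7B, 2 for 14B, 8 for 72B) follow the power-of-two rule from the conversion notes. A quick sanity check can be written as a shell sketch; `is_pow2` is a hypothetical helper, not an lmdeploy command.

```shell
# Sketch: verify a tensor-parallel degree is 2^n before passing it as --tp.
is_pow2() {
  [ "$1" -ge 1 ] 2>/dev/null && [ $(( $1 & ($1 - 1) )) -eq 0 ]
}

is_pow2 2 && echo "tp=2 is valid"
is_pow2 6 || echo "tp=6 is not a power of two"
```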
@@ -113,7 +111,7 @@ lmdeploy serve gradio --model_path_or_server ./workspace_qwen14b --server_name {
 ```
 # Model conversion
-lmdeploy convert --model_name qwen-72b --model_path /path/to/model --model_format qwen --tokenizer_path None --dst_path ./workspace_qwen72b --tp 8 --quant_path None --group_size 0
+lmdeploy convert --model_name qwen-72b --model_path /path/to/model --dst_path ./workspace_qwen72b --tp 8
 # Run in the bash CLI
 lmdeploy chat turbomind --model_path ./workspace_qwen72b --tp 8
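The hunk headers above show the Gradio serving step (`lmdeploy serve gradio --model_path_or_server ... --server_name ...`), but the diff truncates the server name value. Spelled out for the 72B workspace, with `SERVER_NAME` as an assumed placeholder for your host or IP:

```shell
# Serving step from the hunk headers above, written out.
# SERVER_NAME is a placeholder; the original value is truncated in the diff.
WORKSPACE=./workspace_qwen72b
SERVER_NAME=0.0.0.0   # assumption: bind on all interfaces
CMD="lmdeploy serve gradio --model_path_or_server $WORKSPACE --server_name $SERVER_NAME"
echo "$CMD"
```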