ModelZoo / Qwen_lmdeploy · Commit 243ff4a4
Update README.md
Authored Nov 22, 2023 by xiabo (parent 594fa515)
Showing 1 changed file (README.md) with 4 additions and 4 deletions.
@@ -61,7 +61,7 @@ cd .. && python3 setup.py install
 # <model_name> model name ('base', 'llama', 'internlm', 'vicuna', 'internlm-chat-7b', 'internlm-chat', 'internlm-chat-7b-8k', 'internlm-chat-20b', 'internlm-20b', 'baichuan-7b', 'baichuan2-7b', 'puyu', 'llama2', 'qwen-7b', 'qwen-14b', 'codellama', 'solar')
 # <model_path> path to the model
 # <model_format> model format ('llama', 'hf', 'qwen')
-# <tokenizer_path> path to the tokenizer model
+# <tokenizer_path> path to the tokenizer model (defaults to None, in which case qwen.tiktoken is looked up inside model_path)
 # <dst_path> destination path for the converted output (defaults to ./workspace)
 # <tp> number of GPUs used for tensor parallelism; should be a power of two (2^n)
 # <quant_path> path to the quantized model; may be None (used for int4 quantization; default None)
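The parameters above can be wired together in a small shell sketch. All names and paths below are hypothetical placeholders, and the command is echoed rather than executed so the snippet runs without lmdeploy installed; drop the `echo` to perform a real conversion.

```shell
# Hypothetical placeholder values; substitute your own model and paths.
MODEL_NAME=qwen-7b
MODEL_PATH=/path/to/Qwen-7B-Chat
DST_PATH=./workspace_qwen7b
TP=2  # tensor-parallel GPU count; must be a power of two per the note above

# Guard the power-of-two constraint on <tp> before converting:
if [ "$TP" -le 0 ] || [ $(( TP & (TP - 1) )) -ne 0 ]; then
  echo "tp=$TP is not a power of two" >&2
  exit 1
fi

# Echoed so this sketch is runnable without lmdeploy; remove 'echo' to convert.
echo lmdeploy convert \
  --model_name "$MODEL_NAME" \
  --model_path "$MODEL_PATH" \
  --model_format qwen \
  --tokenizer_path None \
  --dst_path "$DST_PATH" \
  --tp "$TP" \
  --quant_path None \
  --group_size 0
```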
@@ -89,15 +89,15 @@ lmdeploy serve gradio --model_path_or_server ./workspace_qwe7b --server_name {ip
 ### Running Qwen-14B-chat
 ```
 # Model conversion
-lmdeploy convert --model_name qwen-7b --model_path /path/to/model --model_format qwen --tokenizer_path None --dst_path ./workspace_qwe7b --tp 2 --quant_path None --group_size 0
+lmdeploy convert --model_name qwen-14b --model_path /path/to/model --model_format qwen --tokenizer_path None --dst_path ./workspace_qwe14b --tp 2 --quant_path None --group_size 0
 # Run in a bash terminal
-lmdeploy chat turbomind --model_path ./workspace_qwe7b --tp 2
+lmdeploy chat turbomind --model_path ./workspace_qwe14b --tp 2
 # Run the web UI on a server
 In a bash terminal, run:
-lmdeploy serve gradio --model_path_or_server ./workspace_qwe7b --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False
+lmdeploy serve gradio --model_path_or_server ./workspace_qwe14b --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False
 Open {ip}:{port} in a browser to chat with the model
 ```
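The serve step above can likewise be sketched as a parameterized shell snippet. The IP, port, and workspace values are placeholders, and the command is echoed so the snippet runs without lmdeploy installed; drop the `echo` to actually start the Gradio server.

```shell
# Placeholder address/port; use the host's real IP and a free port.
IP=0.0.0.0
PORT=7860
WORKSPACE=./workspace_qwe14b

# Echoed so this sketch is runnable without lmdeploy; remove 'echo' to serve.
echo lmdeploy serve gradio \
  --model_path_or_server "$WORKSPACE" \
  --server_name "$IP" \
  --server_port "$PORT" \
  --batch_size 32 \
  --tp 1 \
  --restful_api False

# The chat page is then reachable at:
URL="http://$IP:$PORT"
echo "Open $URL in a browser"
```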
...