ModelZoo / InternLM_lmdeploy · Commit 027a2e75
Authored Aug 28, 2024 by xuxzh1

update

Parent: 07438a8b
Showing 3 changed files with 7 additions and 12 deletions.

.gitmodules  +2 -2
README.md    +4 -9
lmdeploy     +1 -1
.gitmodules @ 027a2e75

```diff
 [submodule "lmdeploy"]
 	path = lmdeploy
-	url = http://developer.hpccube.com/codes/aicomponent/lmdeploy.git
-	tag = dtk23.04-v0.0.13
+	url = https://developer.hpccube.com/codes/OpenDAS/lmdeploy
+	branch = dtk24.04-v0.2.6
\ No newline at end of file
```
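Changing `.gitmodules` alone does not retarget existing clones: git copies the submodule URL into `.git/config` at init time, so consumers must re-sync after pulling this commit. A minimal sketch of how git reads these keys and how the change is typically propagated (the temp-file path and the demo workflow are illustrative, not part of the commit):

```shell
# Write a throwaway copy of the new .gitmodules content (illustrative path).
cat > /tmp/demo_gitmodules <<'EOF'
[submodule "lmdeploy"]
	path = lmdeploy
	url = https://developer.hpccube.com/codes/OpenDAS/lmdeploy
	branch = dtk24.04-v0.2.6
EOF

# git config can read submodule keys straight from the file:
git config -f /tmp/demo_gitmodules submodule.lmdeploy.url
git config -f /tmp/demo_gitmodules submodule.lmdeploy.branch

# In an actual checkout one would then run (not executed here):
#   git submodule sync lmdeploy            # copy the new URL into .git/config
#   git submodule update --init --remote   # fetch the tracked branch's commit
```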
README.md @ 027a2e75

````diff
@@ -71,10 +71,8 @@ cd .. && python3 setup.py install
 # <model_format> destination path for the converted output (default ./workspace)
 # <tp> number of GPUs used for tensor parallelism; should be a power of 2 (2^n)
 lmdeploy convert --model_name internlm-chat-7b --model_path /path/to/model --model_format hf --tokenizer_path None --dst_path ./workspace_interlm7b --tp 1
 # run from the bash terminal
-lmdeploy chat turbomind --model_path ./workspace_interlm7b --tp 1  # after typing a question, press Enter twice to run inference
+lmdeploy chat turbomind ./workspace_interlm7b --tp 1  # after typing a question, press Enter twice to run inference
 # run via the server web UI
@@ -86,23 +84,20 @@ lmdeploy chat turbomind --model_path ./workspace_interlm7b --tp 1 # 输入
 # <tp> number of GPUs used for tensor parallelism; should be a power of 2 (2^n, matching the value used for model conversion)
 # <restful_api> flag for modelpath_or_server (default False)
-lmdeploy serve gradio --model_path_or_server ./workspace_interlm7b --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False
+lmdeploy serve gradio ./workspace_interlm7b --server-name {ip} --server-port {port} --batch_size 32 --tp 1 --restful_api False
 Open {ip}:{port} in a browser to start chatting
 ```
 ### Running internlm-chat-20b
 ```bash
 # model conversion
 lmdeploy convert --model_name internlm-chat-20b --model_path /path/to/model --model_format hf --tokenizer_path None --dst_path ./workspace_interlm20b --tp 4
 # run from the bash terminal
-lmdeploy chat turbomind --model_path ./workspace_interlm20b --tp 4
+lmdeploy chat turbomind ./workspace_interlm20b --tp 4
 # run via the server web UI
 Run in the bash terminal:
-lmdeploy serve gradio --model_path_or_server ./workspace_interlm20b --server_name {ip} --server_port {port} --batch_size 32 --tp 4 --restful_api False
+lmdeploy serve gradio ./workspace_interlm20b --server-name {ip} --server-port {port} --batch_size 32 --tp 4 --restful_api False
 Open {ip}:{port} in a browser to start chatting
 ```
````
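The README edits track a CLI change in the updated lmdeploy version: the workspace path becomes a positional argument (replacing `--model_path` / `--model_path_or_server`) and the server flags switch from underscores to hyphens. A small sketch that assembles the post-change `serve gradio` invocation (the `gradio_cmd` helper and the `0.0.0.0`/`6006` placeholders are hypothetical, standing in for `{ip}`/`{port}`):

```python
# Sketch: build the post-change `lmdeploy serve gradio` argument list.
# The workspace path is positional; --server-name/--server-port use hyphens.
def gradio_cmd(workspace, ip, port, batch_size=32, tp=1):
    return [
        "lmdeploy", "serve", "gradio", workspace,
        "--server-name", ip,
        "--server-port", str(port),
        "--batch_size", str(batch_size),
        "--tp", str(tp),
        "--restful_api", "False",
    ]

cmd = gradio_cmd("./workspace_interlm7b", "0.0.0.0", 6006)
print(" ".join(cmd))
```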
lmdeploy @ 98d217bf

```diff
-Subproject commit e432dbb0e56caaf319b9c9d7b79eb8106852dc91
+Subproject commit 98d217bf91a55fd0a48b5476a55d6399fd65cfd0
```