ModelZoo / Baichuan_lmdeploy, commit 7a7c3829 ("update readme"): authored Aug 23, 2024 by xuxzh1, parent 477559fb. 1 changed file (README.md) with 24 additions and 11 deletions.
## Environment Setup

Docker images for inference can be pulled from the SourceFind (光源) registry:

```bash
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-ubuntu20.04-dtk24.04.1-py3.10  # recommended
docker pull image.sourcefind.cn:5000/dcu/admin/base/custom:lmdeploy0.0.13_dtk23.04_torch1.13_py38

# <Image ID>: replace with the ID of the image pulled above
# <Host Path>: path on the host to mount into the container
docker run -it --name baichuan --shm-size=1024G --device=/dev/kfd --device=/dev/dri/ --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --ulimit memlock=-1:-1 --ipc=host --network host --group-add video -v <Host Path>:<Container Path> <Image ID> /bin/bash
```
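Since the `<Host Path>`, `<Container Path>`, and `<Image ID>` placeholders must be filled in by hand, it can help to build the command from variables and dry-run it first. A minimal sketch (the path and image values below are examples, not from this repository):

```shell
#!/bin/sh
# Build the docker run command from variables so the placeholders are filled
# in one place. HOST_PATH, CONTAINER_PATH, and IMAGE_ID are example values.
HOST_PATH=/data/models
CONTAINER_PATH=/workspace
IMAGE_ID=image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-ubuntu20.04-dtk24.04.1-py3.10

CMD="docker run -it --name baichuan --shm-size=1024G \
--device=/dev/kfd --device=/dev/dri/ --cap-add=SYS_PTRACE \
--security-opt seccomp=unconfined --ulimit memlock=-1:-1 \
--ipc=host --network host --group-add video \
-v ${HOST_PATH}:${CONTAINER_PATH} ${IMAGE_ID} /bin/bash"

# Dry run: print the command instead of executing it; run it with `eval "$CMD"`.
echo "$CMD"
```

Printing before executing makes it easy to confirm the volume mount and image tag are what you intended.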
Image version dependencies:

* DTK driver: 24.04.1
* PyTorch: 2.1.0
* Python: 3.10
> [!NOTE]
>
> When using lmdeploy0.0.13_dtk23.04_torch1.13_py38, you may hit `ImportError: libgemm_multiB_int4.so: cannot open shared object file: No such file or directory`.
>
> Workaround:
>
> ```bash
> rm /usr/local/lib/python3.8/site-packages/_turbomind.cpython-38-x86_64-linux-gnu.so
> ```
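The workaround above deletes a stale extension module. A slightly safer variant only removes the file when it is actually present; the sketch below demonstrates the logic in a throwaway temp directory so it is safe to run anywhere (in the real container, `SITE_PKG` would be `/usr/local/lib/python3.8/site-packages`):

```shell
#!/bin/sh
# Demo of the workaround in an isolated temp directory: remove the stale
# _turbomind shared object only if it exists.
SITE_PKG=$(mktemp -d)
SO="$SITE_PKG/_turbomind.cpython-38-x86_64-linux-gnu.so"
touch "$SO"          # stand-in for the stale shared object

if [ -f "$SO" ]; then
    rm "$SO"
    echo "stale module removed"
else
    echo "nothing to do"
fi
```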
## Dataset

None
## Inference

### Build and Install from Source

```bash
# If you use the SourceFind image, you can skip the source build; lmdeploy is preinstalled in it.
git clone http://developer.hpccube.com/codes/modelzoo/llama_lmdeploy.git
cd llama_lmdeploy
...
cd .. && python3 setup.py install
```
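Since the image already ships lmdeploy, one way to decide whether the source build is needed is to test whether the package imports. A minimal sketch, assuming `python3` is on the PATH:

```shell
#!/bin/sh
# Check whether lmdeploy is already importable; if so, the source build
# above can be skipped.
if python3 -c 'import lmdeploy' 2>/dev/null; then
    NEED_BUILD=no
else
    NEED_BUILD=yes
fi
echo "need source build: $NEED_BUILD"
```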
### Run baichuan-7b-chat

```bash
# Model conversion
# <model_name>: model name ('llama', 'internlm', 'vicuna', 'internlm-chat-7b', 'internlm-chat', 'internlm-chat-7b-8k', 'internlm-chat-20b', 'internlm-20b', 'baichuan-7b', 'baichuan2-7b', 'llama2', 'qwen-7b', 'qwen-14b')
# <model_path>: path to the model
...
lmdeploy chat turbomind --model_path ./workspace_baichuan7b --tp 1  # input ...
...
# <tp>: number of GPUs for tensor parallelism; must be a power of two (2^n) and match the value used during model conversion
# <restful_api>: flag for model_path_or_server (default: False)
lmdeploy serve gradio --model_path_or_server ./workspace_baichuan7b --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False
# Open {ip}:{port} in a browser to start chatting
```
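The conversion step for baichuan-7b is elided above; mirroring the baichuan2-7b example further down, the same `<tp>` value has to appear in both the convert and chat invocations. A dry-run sketch (paths are placeholders, and the flag set is assumed to match the baichuan2-7b command):

```shell
#!/bin/sh
# Build the convert and chat commands for baichuan-7b from shared variables,
# so <tp> cannot drift between the two steps. MODEL_PATH is a placeholder.
MODEL_NAME=baichuan-7b
MODEL_PATH=/path/to/model
DST_PATH=./workspace_baichuan7b
TP=1

CONVERT="lmdeploy convert --model_name $MODEL_NAME --model_path $MODEL_PATH --dst_path $DST_PATH --tp $TP"
CHAT="lmdeploy chat turbomind --model_path $DST_PATH --tp $TP"

# Dry run: print both commands for review before executing.
echo "$CONVERT"
echo "$CHAT"
```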
### Run baichuan2-7b

```bash
# Model conversion
lmdeploy convert --model_name baichuan2-7b --model_path /path/to/model --model_format hf --tokenizer_path None --dst_path ./workspace_baichuan2-7b --tp 1
...
lmdeploy chat turbomind --model_path ./workspace_baichuan2-7b --tp 1
...
# Serve the web UI
# Run in a bash shell:
lmdeploy serve gradio --model_path_or_server ./workspace_baichuan2-7b --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False
# Open {ip}:{port} in a browser to start chatting
```
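The doc requires `<tp>` to be a power of two (2^n). A small POSIX-shell check along those lines, useful in a launch script before calling `lmdeploy`:

```shell
#!/bin/sh
# Validate that a tensor-parallel degree is a power of two, as required
# for the --tp flag, using only POSIX shell arithmetic.
is_pow2() {
    n=$1
    [ "$n" -ge 1 ] || return 1
    while [ "$n" -gt 1 ]; do
        [ $((n % 2)) -eq 0 ] || return 1
        n=$((n / 2))
    done
    return 0
}

is_pow2 4 && echo "tp=4 ok"
is_pow2 3 || echo "tp=3 rejected"
```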
## result

...