# <div align="center"><strong>LMDeploy</strong></div>
## Introduction
LMDeploy, developed jointly by the [MMDeploy](https://github.com/open-mmlab/mmdeploy) and [MMRazor](https://github.com/open-mmlab/mmrazor) teams, is a full suite of lightweight compression, deployment, and serving solutions for LLM tasks.
This toolbox provides the following core features:

- **Efficient inference engine (TurboMind)**: built on [FasterTransformer](https://github.com/NVIDIA/FasterTransformer), the TurboMind engine supports inference of models such as InternLM, LLaMA, and Vicuna on NVIDIA GPUs.

- **Interactive inference**: by caching the attention k/v of multi-turn conversations, the engine remembers the dialogue history and avoids re-processing it on every turn.

- **Multi-GPU deployment and quantization**: comprehensive support for model deployment and quantization, verified across different model scales.

- **Persistent batch inference**: further improves model execution efficiency.

Official LMDeploy GitHub repository: [https://github.com/InternLM/lmdeploy](https://github.com/InternLM/lmdeploy)
## Supported Models
|    Model     | Model Parallelism | FP16 | KV INT8 |
| :----------: | :---------------: | :--: | :-----: |
|    Llama     |        Yes        | Yes  |   Yes   |
|    Llama2    |        Yes        | Yes  |   Yes   |
| InternLM-7B  |        Yes        | Yes  |   Yes   |
| InternLM-20B |        Yes        | Yes  |   Yes   |
|   QWen-7B    |        Yes        | Yes  |   Yes   |
|   QWen-14B   |        Yes        | Yes  |   Yes   |
| Baichuan-7B  |        Yes        | Yes  |   Yes   |
| Baichuan2-7B |        Yes        | Yes  |   No    |
## Installation

### Install by Building from Source

#### Preparing the Build Environment
Three ways to prepare the environment are provided:

1. Use the prebuilt image; source compilation can be skipped and inference run directly:
```
docker pull image.sourcefind.cn:5000/dcu/admin/base/custom:lmdeploy0.0.13_dtk23.04_torch1.13_py38
```
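To start a container from this image, something along the following lines can be used. This is a minimal sketch, assuming the usual DCU device mappings (`/dev/kfd`, `/dev/dri`); the mount path is a placeholder, so adjust everything to your system:
```shell
# Minimal sketch: launch the prebuilt image with the DCU devices exposed.
# The device flags and the mount path are assumptions; adjust to your system.
docker run -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --security-opt seccomp=unconfined \
  -v /path/to/models:/workspace/models \
  image.sourcefind.cn:5000/dcu/admin/base/custom:lmdeploy0.0.13_dtk23.04_torch1.13_py38 \
  bash
```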

2. Use the SourceFind pytorch base image. Download it from [https://sourcefind.cn/#/image/dcu/pytorch](https://sourcefind.cn/#/image/dcu/pytorch), choosing the image version that matches your pytorch, python, DTK, and OS. (If installation is slow, add a mirror: `pip3 install xxx -i https://pypi.tuna.tsinghua.edu.cn/simple/`.)
```shell
pip3 install -r requirements.txt
pip3 install transformers==4.33.2
pip3 install urllib3==1.24
pip3 install wheel
yum install rapidjson

# Load the DTK environment variables
source {DTK_PATH}/env.sh
source {DTK_PATH}/cuda/env.sh
# Upgrade GCC to version 9
yum install -y centos-release-scl
yum install -y devtoolset-9
scl enable devtoolset-9 bash
```

3. Use an existing python environment. Install pytorch from the whl packages at [https://cancon.hpccube.com:65024/4/main/pytorch/dtk23.04](https://cancon.hpccube.com:65024/4/main/pytorch/dtk23.04), choosing the whl that matches your python and DTK versions. Then install as follows:
```shell
pip3 install torch*  # the downloaded torch whl package
pip3 install -r requirements.txt
pip3 install transformers==4.33.2
pip3 install urllib3==1.24
pip3 install wheel
yum install rapidjson

# Load the DTK environment variables
source {DTK_PATH}/env.sh
source {DTK_PATH}/cuda/env.sh
# Upgrade GCC to version 9
yum install -y centos-release-scl
yum install -y devtoolset-9
scl enable devtoolset-9 bash
```
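Before building, it may be worth confirming that the installed pytorch can see the DCU. This is a quick sanity check, assuming (as on the DTK stack generally) that torch exposes the device through its CUDA API; it should print `True`:
```shell
# Sanity check: the DTK build of pytorch reports the DCU through torch.cuda
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```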
Note: GCC version >= 9.0 is required.
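A quick way to confirm which GCC the current shell is using:
```shell
# Confirm the active compiler version (should report 9.0 or newer)
gcc --version | head -n 1
```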
#### Building and Installing
- Download the code
```shell
git clone http://10.0.54.20/xiabo/lmdeploy.git  # switch branches as needed for the build; the default is the develop branch
```
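If a branch other than the default is needed, check it out before building (`develop` is the default branch named above; any other branch name here is a placeholder):
```shell
cd lmdeploy
git checkout develop  # or the branch required for your build
```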
- Two ways to build from source are provided (run inside the lmdeploy directory):
```
1. Build and install directly
mkdir build && cd build
sh ../generate.sh
make -j 32 && make install
cd .. && python3 setup.py install

2. Build a whl package and install it
mkdir build && cd build
sh ../generate.sh
make -j 32 && make install
cd .. && python3 setup.py bdist_wheel
cd dist && pip3 install lmdeploy*
```
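Either way, the installation can be verified by importing the package and printing its version (the same check used in the Version Query section below):
```shell
# Verify the installation; prints the version number, e.g. 0.0.6
python3 -c "import lmdeploy; print(lmdeploy.__version__)"
```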
## Model Serving

### Serving [LLaMA](https://huggingface.co/huggyllama)
Download the llama model from [here](https://huggingface.co/huggyllama), then deploy the service with the following commands, using 7B as an example:
```
1. Convert the model
# <model_name>     model name ('llama', 'internlm', 'vicuna', 'internlm-chat-7b', 'internlm-chat', 'internlm-chat-7b-8k', 'internlm-chat-20b', 'internlm-20b', 'baichuan-7b', 'baichuan2-7b', 'llama2', 'qwen-7b', 'qwen-14b')
# <model_path>     path to the model
# <model_format>   model format ('llama', 'hf', 'qwen')
# <tokenizer_path> path to the tokenizer model (defaults to None, in which case qwen.tiktoken is looked up inside model_path)
# <dst_path>       destination path for the output (defaults to ./workspace)
# <tp>             number of GPUs used for tensor parallelism; must be a power of 2

lmdeploy convert --model_name llama --model_path /path/to/model --model_format hf --tokenizer_path None --dst_path ./workspace_llama --tp 1

2. Run
# Run in a bash terminal
lmdeploy chat turbomind --model_path ./workspace_llama --tp 1     # after typing a question, press Enter twice to run inference

# Run as a web service (from a bash terminal):
# <model_path_or_server> path of the deployed model, or a tritonserver URL, or a RESTful API URL. A path serves the model directly with gradio; a URL runs against tritonserver by default. If the URL is a RESTful API, also enable the restful_api flag.
# <server_name>          IP address of the gradio server
# <server_port>          port of the gradio server
# <batch_size>           batch size when running TurboMind directly (default 32)
# <tp>                   number of GPUs used for tensor parallelism; must be a power of 2 (keep consistent with the value used for conversion)
# <restful_api>          flag describing model_path_or_server (default False)

lmdeploy serve gradio --model_path_or_server ./workspace_llama --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False

Open {ip}:{port} in a browser to chat.
```
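As the `model_path_or_server` notes above describe, the gradio front end can also attach to an already-running backend instead of a local workspace. A hedged sketch (the addresses and ports are placeholders, and it assumes a RESTful API service is already listening there):
```shell
# Attach the gradio front end to a running RESTful API service
# (placeholder address; a RESTful API service must already be listening there)
lmdeploy serve gradio --model_path_or_server http://{api_ip}:{api_port} --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api True
```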
### Serving [llama2](https://huggingface.co/meta-llama)
Download the llama2 model from [here](https://huggingface.co/meta-llama), then deploy the service with the following commands, using 7B as an example:
```
1. Convert the model
lmdeploy convert --model_name llama2 --model_path /path/to/model --model_format hf --tokenizer_path None --dst_path ./workspace_llama2 --tp 1

2. Run
# Run in a bash terminal
lmdeploy chat turbomind --model_path ./workspace_llama2 --tp 1

# Run as a web service (from a bash terminal):
lmdeploy serve gradio --model_path_or_server ./workspace_llama2 --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False

Open {ip}:{port} in a browser to chat.
```
### Serving [internlm](https://huggingface.co/internlm/)
Download the internlm model from [here](https://huggingface.co/internlm), then deploy the service with the following commands, using 7B as an example:
```
1. Convert the model
lmdeploy convert --model_name model_name --model_path /path/to/model --model_format hf --tokenizer_path None --dst_path ./workspace_intern --tp 1  # set model_name to internlm-chat or internlm according to the model type

2. Run
# Run in a bash terminal
lmdeploy chat turbomind --model_path ./workspace_intern --tp 1

# Run as a web service (from a bash terminal):
lmdeploy serve gradio --model_path_or_server ./workspace_intern --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False

Open {ip}:{port} in a browser to chat.
```
### Serving [baichuan](https://huggingface.co/baichuan-inc)
Download the baichuan model from [here](https://huggingface.co/baichuan-inc), then deploy the service with the following commands, using 7B as an example:
```
1. Convert the model
lmdeploy convert --model_name baichuan-7b --model_path /path/to/model --model_format hf --tokenizer_path None --dst_path ./workspace_baichuan --tp 1

2. Run
# Run in a bash terminal
lmdeploy chat turbomind --model_path ./workspace_baichuan --tp 1

# Run as a web service (from a bash terminal):
lmdeploy serve gradio --model_path_or_server ./workspace_baichuan --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False

Open {ip}:{port} in a browser to chat.
```

### Serving [baichuan2](https://huggingface.co/baichuan-inc)
Download the baichuan2 model from [here](https://huggingface.co/baichuan-inc), then deploy the service with the following commands, using 7B as an example:
```
1. Convert the model
lmdeploy convert --model_name baichuan2-7b --model_path /path/to/model --model_format hf --tokenizer_path None --dst_path ./workspace_baichuan2 --tp 1

2. Run
# Run in a bash terminal
lmdeploy chat turbomind --model_path ./workspace_baichuan2 --tp 1

# Run as a web service (from a bash terminal):
lmdeploy serve gradio --model_path_or_server ./workspace_baichuan2 --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False

Open {ip}:{port} in a browser to chat.
```

### Serving [qwen](https://huggingface.co/Qwen)
Download the qwen model from [here](https://huggingface.co/Qwen), then deploy the service with the following commands, using 7B as an example:
```
1. Convert the model
lmdeploy convert --model_name qwen-7b --model_path /path/to/model --model_format qwen --tokenizer_path None --dst_path ./workspace_qwen --tp 1

2. Run
# Run in a bash terminal
lmdeploy chat turbomind --model_path ./workspace_qwen --tp 1

# Run as a web service (from a bash terminal):
lmdeploy serve gradio --model_path_or_server ./workspace_qwen --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False

Open {ip}:{port} in a browser to chat.
```

## Result
![qwen inference](docs/dcu/qwen推理.gif)

### For more details, see the [docs](./docs/zh_cn/serving.md)
## Version Query
- `python -c "import lmdeploy; print(lmdeploy.__version__)"` queries the version of this software (e.g. 0.0.6); the version number is kept in sync with official releases.

## Known Issues
-

## Note
+ If pip install downloads are slow, add the Tsinghua PyPI mirror: `-i https://pypi.tuna.tsinghua.edu.cn/simple/`

## Other References
- [README_origin](README_origin.md)
- [README_zh-CN](README_zh-CN.md)