<div align="center">
  <img src="resources/lmdeploy-logo.png" width="450"/>
  <div>&nbsp;</div>
  <div align="center">
    <b><font size="5">OpenMMLab website</font></b>
    <sup>
        <a href="https://openmmlab.com">
        <i><font size="4">HOT</font></i>
      </a>
    </sup>
    &nbsp;&nbsp;&nbsp;&nbsp;
    <b><font size="5">OpenMMLab platform</font></b>
    <sup>
      <a href="https://platform.openmmlab.com">
        <i><font size="4">TRY IT OUT</font></i>
      </a>
    </sup>
  </div>
  <div>&nbsp;</div>

[![docs](https://img.shields.io/badge/docs-latest-blue)](https://lmdeploy.readthedocs.io/en/latest/)
[![codecov](https://codecov.io/gh/open-mmlab/lmdeploy/branch/main/graph/badge.svg)](https://codecov.io/gh/open-mmlab/lmdeploy)
[![license](https://img.shields.io/github/license/open-mmlab/lmdeploy.svg)](https://github.com/open-mmlab/lmdeploy/tree/main/LICENSE)
[![issue resolution](https://img.shields.io/github/issues-closed-raw/open-mmlab/lmdeploy)](https://github.com/open-mmlab/lmdeploy/issues)
[![open issues](https://img.shields.io/github/issues-raw/open-mmlab/lmdeploy)](https://github.com/open-mmlab/lmdeploy/issues)

[English](README.md) | 简体中文

</div>

<div align="center">
  <a href="https://openmmlab.medium.com/" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/25839884/219255827-67c1a27f-f8c5-46a9-811d-5e57448c61d1.png" width="3%" alt="" /></a>
  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
  <a href="https://discord.com/channels/1037617289144569886/1046608014234370059" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/25839884/218347213-c080267f-cbb6-443e-8532-8e1ed9a58ea9.png" width="3%" alt="" /></a>
  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
  <a href="https://twitter.com/OpenMMLab" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/25839884/218346637-d30c8a0f-3eba-4699-8131-512fb06d46db.png" width="3%" alt="" /></a>
  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
  <a href="https://www.youtube.com/openmmlab" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/25839884/218346691-ceb2116a-465a-40af-8424-9f30d2348ca9.png" width="3%" alt="" /></a>
  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
  <a href="https://space.bilibili.com/1293512903" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/25839884/219026751-d7d14cce-a7c9-4e82-9942-8375fca65b99.png" width="3%" alt="" /></a>
  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
  <a href="https://www.zhihu.com/people/openmmlab" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/25839884/219026120-ba71e48b-6e94-4bd4-b4e9-b7d175b5e362.png" width="3%" alt="" /></a>
</div>

## Introduction

LMDeploy is a toolkit for compressing, deploying, and serving LLMs, jointly developed by the [MMRazor](https://github.com/open-mmlab/mmrazor) and [MMDeploy](https://github.com/open-mmlab/mmdeploy) teams. Its core features are:

- **TurboMind**, an efficient inference engine built on [FasterTransformer](https://github.com/NVIDIA/FasterTransformer), supporting inference of LLaMA and its variant models on NVIDIA devices

- Interactive inference mode: by caching the attention k/v during multi-turn conversations, the engine remembers the dialogue history and avoids re-decoding the historical session

<div align="center">
  <img src="https://github.com/NVIDIA/FasterTransformer/blob/main/docs/images/gpt/gpt_interactive_generation.2.png?raw=true" width="600"/>
</div>

- Support for persistent batch inference

  TODO: gif to show what persistent batch is

## Quick Start

### Installation

```shell
conda create -n open-mmlab python=3.8
conda activate open-mmlab
git clone https://github.com/open-mmlab/lmdeploy.git
cd lmdeploy
pip install -e .
```
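
As a quick, optional sanity check, the editable install above should make the package importable:

```shell
# verify that the lmdeploy package is importable
python3 -c "import lmdeploy"
```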

### Build

Pull the docker image `openmmlab/lmdeploy:latest`, mount the lmdeploy source directory as a volume, and start a container.
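
A minimal sketch of launching such a container, assuming the repository was cloned to the current directory (the mount path and flags are illustrative and may need adapting):

```shell
# expose the GPUs and mount the lmdeploy source tree into the container
docker run --gpus all -it --rm \
    -v "$(pwd)":/workspace/lmdeploy \
    -w /workspace/lmdeploy \
    openmmlab/lmdeploy:latest
```

Inside the container, run: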

```shell
mkdir build && cd build
../generate.sh   # configure the build
make -j$(nproc) && make install
```

### Serving [LLaMA](https://github.com/facebookresearch/llama)

Please fill out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform) to obtain the LLaMA model weights.

Run the following commands to deploy the LLaMA model on an NVIDIA GPU server:

<details open>
<summary><b>7B</b></summary>

```shell
# convert the weights to TurboMind's format; the output goes to ./workspace
python3 lmdeploy/serve/turbomind/deploy.py llama-7B /path/to/llama-7b llama \
    --tokenizer_path /path/to/tokenizer/model
bash workspace/service_docker_up.sh --lib-dir $(pwd)/build/install/backends/fastertransformer
```

</details>

<details open>
<summary><b>13B</b></summary>

```shell
python3 lmdeploy/serve/turbomind/deploy.py llama-13B /path/to/llama-13b llama \
    --tokenizer_path /path/to/tokenizer/model --tp 2   # --tp 2: tensor parallelism over two GPUs
bash workspace/service_docker_up.sh --lib-dir $(pwd)/build/install/backends/fastertransformer
```

</details>

### Serving [Vicuna](https://lmsys.org/blog/2023-03-30-vicuna/)

<details open>
<summary><b>7B</b></summary>

```shell
python3 -m pip install fschat
# reconstruct the Vicuna weights by applying the delta to the base LLaMA weights
python3 -m fastchat.model.apply_delta \
  --base-model-path /path/to/llama-7b \
  --target-model-path /path/to/vicuna-7b \
  --delta-path lmsys/vicuna-7b-delta-v1.1

python3 lmdeploy/serve/turbomind/deploy.py vicuna-7B /path/to/vicuna-7b hf   # "hf": weights in huggingface format
bash workspace/service_docker_up.sh --lib-dir $(pwd)/build/install/backends/fastertransformer
```

</details>

<details>
<summary><b>13B</b></summary>

```shell
python3 -m pip install fschat
python3 -m fastchat.model.apply_delta \
  --base-model-path /path/to/llama-13b \
  --target-model-path /path/to/vicuna-13b \
  --delta-path lmsys/vicuna-13b-delta-v1.1

python3 lmdeploy/serve/turbomind/deploy.py vicuna-13B /path/to/vicuna-13b hf
bash workspace/service_docker_up.sh --lib-dir $(pwd)/build/install/backends/fastertransformer
```

</details>

## Inference via the Command Line

```shell
python3 lmdeploy/serve/client.py {server_ip_address}:33337
```

## Inference in the Browser

```shell
python3 lmdeploy/app.py {server_ip_address}:33337 {model_name}
```

## Quantized Deployment

In fp16 mode, kv_cache int8 quantization can be enabled so that a single GPU can serve more users.
First, run the quantization script; the quantization parameters are stored in the weight directory produced by the `deploy.py` conversion.
Then adjust `config.ini` (a shell sketch of these edits follows the list):

- Change `use_context_fmha` to 0, which disables context FMHA
- Set `quant_policy` to 4. This parameter defaults to 0, meaning quantization is off
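
As a minimal sketch, both settings can be changed with `sed`, assuming `config.ini` sits in the weight directory produced by `deploy.py` (the path below is a placeholder):

```shell
CONFIG=/path/to/weight/dir/config.ini                            # placeholder path
sed -i 's/^use_context_fmha.*/use_context_fmha = 0/' "$CONFIG"   # disable context FMHA
sed -i 's/^quant_policy.*/quant_policy = 4/' "$CONFIG"           # enable int8 kv_cache
```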

## Contributing

We appreciate the efforts of all contributors to improve LMDeploy. Please refer to the [contributing guide](.github/CONTRIBUTING.md) for instructions on participating in the project.

## Acknowledgements

- [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)

## License

This project is released under the [Apache 2.0 license](LICENSE).