<div align="center">
  <img src="resources/lmdeploy-logo.svg" width="450"/>

[![docs](https://img.shields.io/badge/docs-latest-blue)](https://lmdeploy.readthedocs.io/en/latest/)
[![badge](https://github.com/InternLM/lmdeploy/workflows/lint/badge.svg)](https://github.com/InternLM/lmdeploy/actions)
[![PyPI](https://img.shields.io/pypi/v/lmdeploy)](https://pypi.org/project/lmdeploy)
[![license](https://img.shields.io/github/license/InternLM/lmdeploy.svg)](https://github.com/InternLM/lmdeploy/tree/main/LICENSE)
[![issue resolution](https://img.shields.io/github/issues-closed-raw/InternLM/lmdeploy)](https://github.com/InternLM/lmdeploy/issues)
[![open issues](https://img.shields.io/github/issues-raw/InternLM/lmdeploy)](https://github.com/InternLM/lmdeploy/issues)

English | [简体中文](README_zh-CN.md)

</div>

<p align="center">
    👋 join us on <a href="https://twitter.com/intern_lm" target="_blank">Twitter</a>, <a href="https://discord.gg/xa29JuW87d" target="_blank">Discord</a> and <a href="https://r.vansin.top/?r=internwx" target="_blank">WeChat</a>
</p>

______________________________________________________________________

## News 🎉

- \[2023/11\] TurboMind major upgrades, including: Paged Attention, faster attention kernels without sequence length limitation, 2x faster KV8 kernels, Split-K decoding (Flash Decoding), and W4A16 inference for sm_75
- \[2023/09\] TurboMind supports Qwen-14B
- \[2023/09\] TurboMind supports InternLM-20B
- \[2023/09\] TurboMind supports all features of Code Llama: code completion, infilling, chat / instruct, and Python specialist. Click [here](./docs/en/supported_models/codellama.md) for the deployment guide
- \[2023/09\] TurboMind supports Baichuan2-7B
- \[2023/08\] TurboMind supports FlashAttention-2.
- \[2023/08\] TurboMind supports Qwen-7B, dynamic NTK-RoPE scaling and dynamic logN scaling
- \[2023/08\] TurboMind supports Windows (tp=1)
- \[2023/08\] TurboMind supports 4-bit inference, 2.4x faster than FP16, the fastest open-source implementation🚀. Check [this](./docs/en/w4a16.md) guide for detailed info
- \[2023/08\] LMDeploy is available on the [Hugging Face Hub](https://huggingface.co/lmdeploy), providing ready-to-use 4-bit models.
- \[2023/08\] LMDeploy supports 4-bit quantization using the [AWQ](https://arxiv.org/abs/2306.00978) algorithm.
- \[2023/07\] TurboMind supports Llama-2 70B with GQA.
- \[2023/07\] TurboMind supports Llama-2 7B/13B.
- \[2023/07\] TurboMind supports tensor-parallel inference of InternLM.

______________________________________________________________________

## Introduction

LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the [MMRazor](https://github.com/open-mmlab/mmrazor) and [MMDeploy](https://github.com/open-mmlab/mmdeploy) teams. It has the following core features:

- **Efficient Inference Engine (TurboMind)**: Based on [FasterTransformer](https://github.com/NVIDIA/FasterTransformer), we have implemented an efficient inference engine, TurboMind, which supports inference of LLaMA and its variants on NVIDIA GPUs.

- **Interactive Inference Mode**: By caching the attention k/v during multi-round dialogues, the engine remembers dialogue history and avoids re-processing historical sessions.

- **Multi-GPU Model Deployment and Quantization**: We provide comprehensive model deployment and quantization support, validated at different model scales.

- **Persistent Batch Inference**: Requests are batched continuously during execution, further optimizing model execution efficiency.

![PersistentBatchInference](https://github.com/InternLM/lmdeploy/assets/67539920/e3876167-0671-44fc-ac52-5a0f9382493e)

## Supported Models

`LMDeploy` has two inference backends, `PyTorch` and `TurboMind`. You can run `lmdeploy list` to check the supported model names, for example:
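
```shell
lmdeploy list
```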

### TurboMind

> **Note**<br />
> W4A16 inference requires an NVIDIA GPU with Ampere architecture or above (support for sm_75/Turing was added in the 2023/11 TurboMind upgrade).

|    Models    | Tensor Parallel | FP16 | KV INT8 | W4A16 | W8A8 |
| :----------: | :-------------: | :--: | :-----: | :---: | :--: |
|    Llama     |       Yes       | Yes  |   Yes   |  Yes  |  No  |
|    Llama2    |       Yes       | Yes  |   Yes   |  Yes  |  No  |
|    SOLAR     |       Yes       | Yes  |   Yes   |  Yes  |  No  |
| InternLM-7B  |       Yes       | Yes  |   Yes   |  Yes  |  No  |
| InternLM-20B |       Yes       | Yes  |   Yes   |  Yes  |  No  |
|   QWen-7B    |       Yes       | Yes  |   Yes   |  Yes  |  No  |
|   QWen-14B   |       Yes       | Yes  |   Yes   |  Yes  |  No  |
| Baichuan-7B  |       Yes       | Yes  |   Yes   |  Yes  |  No  |
| Baichuan2-7B |       Yes       | Yes  |   Yes   |  Yes  |  No  |
|  Code Llama  |       Yes       | Yes  |   No    |  No   |  No  |

### PyTorch

|   Models    | Tensor Parallel | FP16 | KV INT8 | W4A16 | W8A8 |
| :---------: | :-------------: | :--: | :-----: | :---: | :--: |
|    Llama    |       Yes       | Yes  |   No    |  No   |  No  |
|   Llama2    |       Yes       | Yes  |   No    |  No   |  No  |
| InternLM-7B |       Yes       | Yes  |   No    |  No   |  No  |

## Performance

**Case I**: output token throughput with a fixed number of input and output tokens (1 input token, 2048 output tokens)

**Case II**: request throughput with real conversation data

Test Setting: LLaMA-7B, NVIDIA A100 (80G)

The output token throughput of TurboMind exceeds 2000 tokens/s, which is about 5% - 15% higher than DeepSpeed overall and up to 2.3x higher than Hugging Face Transformers.
The request throughput of TurboMind is 30% higher than vLLM's.

![benchmark](https://github.com/InternLM/lmdeploy/assets/4560679/7775c518-608e-4e5b-be73-7645a444e774)

## Quick Start

### Installation

Install lmdeploy with pip (Python 3.8+) or [from source](./docs/en/build.md):

```shell
pip install lmdeploy
```

> **Note**<br />
> `pip install lmdeploy` installs only the packages required at runtime. To run code from modules such as `lmdeploy.lite` and `lmdeploy.serve`, install the corresponding extra dependencies.
> For instance, `pip install lmdeploy[lite]` installs the extra dependencies for the `lmdeploy.lite` module.
>
> - `all`: Install lmdeploy with all dependencies in `requirements.txt`
> - `lite`: Install lmdeploy with extra dependencies in `requirements/lite.txt`
> - `serve`: Install lmdeploy with extra dependencies in `requirements/serve.txt`
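
The extras map directly onto the optional modules listed above; for example:

```shell
pip install lmdeploy[all]    # everything in requirements.txt
pip install lmdeploy[lite]   # extras for the lmdeploy.lite module
pip install lmdeploy[serve]  # extras for the lmdeploy.serve module
```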

### Deploy InternLM

#### Get InternLM model

```shell
# 1. Download InternLM model

# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/internlm/internlm-chat-7b-v1_1 /path/to/internlm-chat-7b

# if you want to clone without large files – just their pointers
# prepend your git clone with the following env var:
GIT_LFS_SKIP_SMUDGE=1

# 2. Convert InternLM model to turbomind's format, which will be in "./workspace" by default
lmdeploy convert internlm-chat-7b /path/to/internlm-chat-7b
```

#### Inference by TurboMind

```shell
lmdeploy chat turbomind ./workspace
```

> **Note**<br />
> When inferring with FP16 precision, the InternLM-7B model requires at least 15.7 GB of GPU memory on TurboMind. <br />
> It is recommended to use NVIDIA cards such as the 3090, V100, or A100.
> Disabling GPU ECC can free up about 10% of memory; try `sudo nvidia-smi --ecc-config=0` and reboot the system.

> **Note**<br />
> Tensor parallelism is available for inference on multiple GPUs. Add `--tp=<num_gpu>` to the `chat` command to enable runtime TP.
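
A minimal example, assuming two GPUs are available:

```shell
lmdeploy chat turbomind ./workspace --tp=2
```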

#### Serving with Gradio

```shell
# install lmdeploy with extra dependencies
pip install lmdeploy[serve]

lmdeploy serve gradio ./workspace
```

![](https://github.com/InternLM/lmdeploy/assets/67539920/08d1e6f2-3767-44d5-8654-c85767cec2ab)

#### Serving with RESTful API

Launch the inference server with:

```shell
# install lmdeploy with extra dependencies
pip install lmdeploy[serve]

lmdeploy serve api_server ./workspace --instance_num 32 --tp 1
```

Then, you can communicate with it via the command line,

```shell
# api_server_url is what api_server prints, e.g. http://localhost:23333
lmdeploy serve api_client api_server_url
```

or via the web UI,

```shell
# api_server_url is what api_server prints, e.g. http://localhost:23333
# server_ip and server_port here are for gradio ui
# example: lmdeploy serve gradio http://localhost:23333 --server_name localhost --server_port 6006
lmdeploy serve gradio api_server_url --server_name ${gradio_ui_ip} --server_port ${gradio_ui_port}
```
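
You can also call the HTTP interface directly. Below is a minimal `curl` sketch, assuming the server exposes an OpenAI-style `/v1/chat/completions` route; the route and payload fields are assumptions, so see [restful_api.md](docs/en/restful_api.md) for the exact API of your version:

```shell
# illustrative request against a locally running api_server
curl -X POST http://localhost:23333/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "internlm-chat-7b",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```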

Refer to [restful_api.md](docs/en/restful_api.md) for more details.

#### Serving with Triton Inference Server

Launch the inference server with:

```shell
bash workspace/service_docker_up.sh
```

Then, you can communicate with the inference server via the command line,

```shell
python3 -m pip install tritonclient[grpc]
lmdeploy serve triton_client {server_ip_address}:33337
```

or via the web UI,

```shell
lmdeploy serve gradio {server_ip_address}:33337
```

For the deployment of other supported models, such as LLaMA, LLaMA-2, and Vicuna, you can find the guide [here](docs/en/serving.md).
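
The flow mirrors the InternLM example above; here is a sketch for a LLaMA-2 model, where the model name and local path are illustrative (run `lmdeploy list` for valid names):

```shell
# convert the Hugging Face model to TurboMind's format, then chat with it
lmdeploy convert llama2 /path/to/llama-2-7b-chat
lmdeploy chat turbomind ./workspace
```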

### Inference with PyTorch

For detailed instructions on inference with PyTorch models, see [here](docs/en/pytorch.md).

#### Single GPU

```shell
lmdeploy chat torch $NAME_OR_PATH_TO_HF_MODEL \
    --max_new_tokens 64 \
    --temperature 0.8 \
    --top_p 0.95 \
    --seed 0
```

#### Tensor Parallel with DeepSpeed

```shell
deepspeed --module --num_gpus 2 lmdeploy.pytorch.chat \
    $NAME_OR_PATH_TO_HF_MODEL \
    --max_new_tokens 64 \
    --temperature 0.8 \
    --top_p 0.95 \
    --seed 0
```

You need to install deepspeed first to use this feature:

```shell
pip install deepspeed
```

## Quantization

### Weight INT4 Quantization

LMDeploy uses the [AWQ](https://arxiv.org/abs/2306.00978) algorithm for model weight quantization.

[Click here](./docs/en/w4a16.md) to view the test results for weight int4 usage.
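
As a sketch, deploying a pre-quantized 4-bit model follows the same convert-and-chat flow with AWQ-specific options; the model path and flags below are illustrative, so see [w4a16.md](./docs/en/w4a16.md) for the authoritative steps:

```shell
# convert an AWQ-quantized 4-bit model to TurboMind's format, then chat with it
lmdeploy convert llama2 /path/to/llama-2-7b-chat-w4 \
    --model-format awq \
    --group-size 128
lmdeploy chat turbomind ./workspace
```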

### KV Cache INT8 Quantization

[Click here](./docs/en/kv_int8.md) to view the usage method, implementation formula, and test results for kv int8.
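
A rough sketch of the workflow follows; the subcommands and flags are recollections and may differ across versions, so treat [kv_int8.md](./docs/en/kv_int8.md) as authoritative:

```shell
# 1. collect calibration statistics, then export KV scale/zero-point parameters
#    into the converted TurboMind workspace (illustrative flags)
lmdeploy lite calibrate --model /path/to/internlm-chat-7b --work_dir ./calib
lmdeploy lite kv_qparams --work_dir ./calib \
    --turbomind_dir ./workspace/triton_models/weights

# 2. enable KV INT8 in the TurboMind config (quant_policy = 4, per kv_int8.md),
#    then chat as usual
lmdeploy chat turbomind ./workspace
```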

> **Warning**<br />
> Runtime tensor parallelism for quantized models is not available. Please set `--tp` on `deploy` to enable static TP.

## Contributing

We appreciate all contributions to LMDeploy. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guidelines.

## Acknowledgement

- [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)
- [llm-awq](https://github.com/mit-han-lab/llm-awq)

## License

This project is released under the [Apache 2.0 license](LICENSE).