<div align="center">
  <img src="resources/lmdeploy-logo.png" width="450"/>
  <div>&nbsp;</div>
  <div align="center">
    <b><font size="5">OpenMMLab website</font></b>
    <sup>
        <a href="https://openmmlab.com">
        <i><font size="4">HOT</font></i>
      </a>
    </sup>
    &nbsp;&nbsp;&nbsp;&nbsp;
    <b><font size="5">OpenMMLab platform</font></b>
    <sup>
      <a href="https://platform.openmmlab.com">
        <i><font size="4">TRY IT OUT</font></i>
      </a>
    </sup>
  </div>
  <div>&nbsp;</div>

[![docs](https://img.shields.io/badge/docs-latest-blue)](https://lmdeploy.readthedocs.io/en/latest/)
[![codecov](https://codecov.io/gh/open-mmlab/lmdeploy/branch/main/graph/badge.svg)](https://codecov.io/gh/open-mmlab/lmdeploy)
[![license](https://img.shields.io/github/license/open-mmlab/lmdeploy.svg)](https://github.com/open-mmlab/lmdeploy/tree/main/LICENSE)
[![issue resolution](https://img.shields.io/github/issues-closed-raw/open-mmlab/lmdeploy)](https://github.com/open-mmlab/lmdeploy/issues)
[![open issues](https://img.shields.io/github/issues-raw/open-mmlab/lmdeploy)](https://github.com/open-mmlab/lmdeploy/issues)

English | [简体中文](README_zh-CN.md)

</div>

<div align="center">
  <a href="https://openmmlab.medium.com/" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/25839884/219255827-67c1a27f-f8c5-46a9-811d-5e57448c61d1.png" width="3%" alt="" /></a>
  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
  <a href="https://discord.com/channels/1037617289144569886/1046608014234370059" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/25839884/218347213-c080267f-cbb6-443e-8532-8e1ed9a58ea9.png" width="3%" alt="" /></a>
  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
  <a href="https://twitter.com/OpenMMLab" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/25839884/218346637-d30c8a0f-3eba-4699-8131-512fb06d46db.png" width="3%" alt="" /></a>
  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
  <a href="https://www.youtube.com/openmmlab" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/25839884/218346691-ceb2116a-465a-40af-8424-9f30d2348ca9.png" width="3%" alt="" /></a>
  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
  <a href="https://space.bilibili.com/1293512903" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/25839884/219026751-d7d14cce-a7c9-4e82-9942-8375fca65b99.png" width="3%" alt="" /></a>
  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
  <a href="https://www.zhihu.com/people/openmmlab" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/25839884/219026120-ba71e48b-6e94-4bd4-b4e9-b7d175b5e362.png" width="3%" alt="" /></a>
</div>

## Introduction

LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the [MMRazor](https://github.com/open-mmlab/mmrazor) and [MMDeploy](https://github.com/open-mmlab/mmdeploy) teams. It has the following core features:

- A high-throughput inference engine named **TurboMind**, based on [FasterTransformer](https://github.com/NVIDIA/FasterTransformer), for LLaMA-family models

- Interactive generation: in multi-turn dialogues, LMDeploy caches the attention k/v of the conversation history, avoiding repeated decoding of earlier turns.

<div align="center">
  <img src="https://github.com/NVIDIA/FasterTransformer/blob/main/docs/images/gpt/gpt_interactive_generation.2.png?raw=true" width="600"/>
</div>

- Support for persistent-batch inference

  TODO: gif to show what persistent batch is

## Quick Start

### Installation

Below are quick steps for installation:

```shell
conda create -n open-mmlab python=3.8
conda activate open-mmlab
git clone https://github.com/open-mmlab/lmdeploy.git
cd lmdeploy
pip install -e .
```
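
To sanity-check the installation (a minimal check, assuming the package is importable as `lmdeploy`):

```shell
python3 -c "import lmdeploy"
```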

### Build

Pull the docker image `openmmlab/lmdeploy:latest` and build the lmdeploy libraries inside a container launched from it:

```shell
mkdir build && cd build
../generate.sh
make -j$(nproc) && make install
```
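
A successful build installs the backend libraries under `build/install`; the serving commands below point `--lib-dir` at this location. To confirm the build output is in place (run from the repository root):

```shell
ls build/install/backends/fastertransformer
```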

### Serving [LLaMA](https://github.com/facebookresearch/llama)

Weights for the LLaMA models can be obtained by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form).

Run one of the following commands to serve a LLaMA model on an NVIDIA GPU server:

<details open>
<summary><b>7B</b></summary>

```shell
python3 lmdeploy/serve/turbomind/deploy.py llama-7B /path/to/llama-7b llama \
    --tokenizer_path /path/to/tokenizer/model
bash workspace/service_docker_up.sh --lib-dir $(pwd)/build/install/backends/fastertransformer
```

</details>

<details open>
<summary><b>13B</b></summary>

```shell
python3 lmdeploy/serve/turbomind/deploy.py llama-13B /path/to/llama-13b llama \
    --tokenizer_path /path/to/tokenizer/model --tp 2
bash workspace/service_docker_up.sh --lib-dir $(pwd)/build/install/backends/fastertransformer
```

</details>
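
Note that the 13B command passes `--tp 2` to shard the model across two GPUs with tensor parallelism, so at least two GPUs must be visible before launching. A quick check:

```shell
nvidia-smi --list-gpus  # expect two or more entries when using --tp 2
```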

### Serving [Vicuna](https://lmsys.org/blog/2023-03-30-vicuna/)

<details open>
<summary><b>7B</b></summary>

```shell
python3 -m pip install fschat
python3 -m fastchat.model.apply_delta \
  --base-model-path /path/to/llama-7b \
  --target-model-path /path/to/vicuna-7b \
  --delta-path lmsys/vicuna-7b-delta-v1.1

python3 lmdeploy/serve/turbomind/deploy.py vicuna-7B /path/to/vicuna-7b hf
bash workspace/service_docker_up.sh --lib-dir $(pwd)/build/install/backends/fastertransformer
146
```

</details>

<details>
<summary><b>13B</b></summary>

```shell
python3 -m pip install fschat
python3 -m fastchat.model.apply_delta \
  --base-model-path /path/to/llama-13b \
  --target-model-path /path/to/vicuna-13b \
  --delta-path lmsys/vicuna-13b-delta-v1.1

python3 lmdeploy/serve/turbomind/deploy.py vicuna-13B /path/to/vicuna-13b hf
bash workspace/service_docker_up.sh --lib-dir $(pwd)/build/install/backends/fastertransformer
```

</details>

## Inference with Command Line Interface

```shell
python3 lmdeploy/serve/client.py {server_ip_address}:33337
```
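
For example, when the client runs on the machine hosting the server (assuming the default port mapping from `service_docker_up.sh`, with the service reachable on port 33337):

```shell
python3 lmdeploy/serve/client.py localhost:33337
```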

## Inference with Web UI

```shell
python3 lmdeploy/app.py {server_ip_address}:33337 {model_name}
```
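
For example, for the Vicuna-7B deployment above (an illustrative invocation; it assumes `{model_name}` matches the model name passed to `deploy.py`):

```shell
python3 lmdeploy/app.py localhost:33337 vicuna-7B
```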

## User Guide

## Quantization

In fp16 mode, int8 quantization of the kv_cache can be enabled so that a single card can serve more users.
First run the quantization script; it stores the quantization parameters in the weight directory produced by `deploy.py`.
Then adjust `config.ini` as follows (a sketch of the edits appears after this list):

- change `use_context_fmha` to 0, which turns it off
- set `quant_policy` to 4; this parameter defaults to 0, which means quantization is disabled
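
A minimal sketch of these two edits, assuming the converted weights and their `config.ini` live under `workspace/triton_models/weights` (a hypothetical path; use the directory `deploy.py` actually produced):

```shell
# Hypothetical path: point CONFIG at the config.ini inside your deploy.py output directory
CONFIG=workspace/triton_models/weights/config.ini
sed -i 's/^use_context_fmha.*/use_context_fmha = 0/' "$CONFIG"  # turn off context FMHA
sed -i 's/^quant_policy.*/quant_policy = 4/' "$CONFIG"          # enable kv_cache int8
```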

## Contributing

We appreciate all contributions to LMDeploy. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guidelines.

## Acknowledgement

- [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)

## License

This project is released under the [Apache 2.0 license](LICENSE).