<div align="center">
  <img src="resources/lmdeploy-logo.png" width="450"/>

[English](README.md) | 简体中文

</div>

<p align="center">
    👋 join us on <a href="https://discord.gg/xa29JuW87d" target="_blank">Discord</a> and <a href="https://r.vansin.top/?r=internwx" target="_blank">WeChat</a>
</p>

______________________________________________________________________

## News 🎉

- \[2023/07\] TurboMind supports the Llama-2 70B model with GQA
- \[2023/07\] TurboMind supports the Llama-2 7B/13B models
- \[2023/07\] TurboMind supports tensor parallel inference of InternLM

______________________________________________________________________

## Introduction

LMDeploy is a full suite of lightweight compression, deployment, and serving solutions for LLMs, jointly developed by the [MMDeploy](https://github.com/open-mmlab/mmdeploy) and [MMRazor](https://github.com/open-mmlab/mmrazor) teams.
This powerful toolbox provides the following core features:

- **Efficient Inference Engine (TurboMind)**: Based on [FasterTransformer](https://github.com/NVIDIA/FasterTransformer), we have implemented the efficient inference engine TurboMind, which supports inference of InternLM, LLaMA, vicuna and other models on NVIDIA GPUs.

- **Interactive Inference Mode**: By caching the k/v of attention during multi-turn dialogues, the engine remembers the dialogue history and thereby avoids reprocessing historical sessions.

- **Multi-GPU Deployment and Quantization**: We provide comprehensive support for model deployment and quantization, validated at different scales.

- **Persistent Batch Inference**: Further optimizes model execution efficiency.

  ![PersistentBatchInference](https://github.com/InternLM/lmdeploy/assets/67539920/e3876167-0671-44fc-ac52-5a0f9382493e)

## Performance

As shown in the figure below, we compared the token generation speed of facebookresearch/llama, HuggingFace Transformers, and DeepSpeed on a 7B model.

Test device: NVIDIA A100 (80G)

Test metric: throughput (token/s)

Test data: 1 input token, 2048 generated tokens

TurboMind's throughput exceeds 2000 token/s, about 5% - 15% higher than DeepSpeed overall and 2.3x the throughput of huggingface transformers.

![benchmark](https://user-images.githubusercontent.com/12756472/251422522-e94a3db9-eb16-432a-8d8c-078945e7b99a.png)

## Quick Start

### Installation

```shell
conda create -n lmdeploy python=3.10 -y
conda activate lmdeploy
git clone https://github.com/InternLM/lmdeploy.git
cd lmdeploy
pip install -e .
```
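
After installation, a quick import check confirms the package is on your path (a minimal sanity check; it does not exercise the inference engines):

```shell
python3 -c "import lmdeploy"
```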

### Deploy InternLM

#### Get the InternLM model

```shell
# 1. Download the InternLM model

# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/internlm/internlm-chat-7b /path/to/internlm-chat-7b

# if you want to clone without large files – just their pointers
# prepend your git clone with the following env var:
GIT_LFS_SKIP_SMUDGE=1

# 2. Convert the model to the format required by turbomind. The default destination path is ./workspace
python3 -m lmdeploy.serve.turbomind.deploy internlm-chat-7b /path/to/internlm-chat-7b

```
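
Assuming the default output path, a quick way to confirm the conversion succeeded is to list the workspace; the names below are the artifacts referenced by the serving and quantization steps later in this README:

```shell
ls workspace
# expect triton_models/ (with weights under triton_models/weights) and service_docker_up.sh
```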

#### Inference with turbomind

```shell
docker run --gpus all --rm -v $(pwd)/workspace:/workspace -it openmmlab/lmdeploy:latest \
    python3 -m lmdeploy.turbomind.chat /workspace
```
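
If lmdeploy is installed locally as described in the Installation section, the same chat module can presumably also be run without Docker (assuming the local build supports your GPU):

```shell
python3 -m lmdeploy.turbomind.chat ./workspace
```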

```{note}
When inferring the InternLM-7B model at FP16 precision, turbomind requires at least 15.7 GB of GPU memory. GPUs such as the 3090, V100, and A100 are recommended.
```

#### Deploy the inference service

Launch the inference service with the following command:

```shell
bash workspace/service_docker_up.sh
```

You can chat with the inference service from the command line:

```shell
python3 -m lmdeploy.serve.client {server_ip_address}:33337
```
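
For example, if the service is running on the same machine, `localhost` can stand in for the placeholder:

```shell
python3 -m lmdeploy.serve.client localhost:33337
```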

You can also chat with it through a WebUI:

```shell
python3 -m lmdeploy.app {server_ip_address}:33337
```

![](https://github.com/InternLM/lmdeploy/assets/67539920/08d1e6f2-3767-44d5-8654-c85767cec2ab)

For deploying other models, such as LLaMA, LLaMA-2, vicuna and so on, please refer to [this guide](docs/zh_cn/serving.md).

### Inference with PyTorch

You must make sure deepspeed is installed in your environment:

```shell
pip install deepspeed
```

#### Single GPU

```shell
python3 -m lmdeploy.pytorch.chat $NAME_OR_PATH_TO_HF_MODEL \
    --max_new_tokens 64 \
    --temperature 0.8 \
    --top_p 0.95 \
    --seed 0
```

#### Tensor parallel with DeepSpeed

```shell
deepspeed --module --num_gpus 2 lmdeploy.pytorch.chat \
    $NAME_OR_PATH_TO_HF_MODEL \
    --max_new_tokens 64 \
    --temperature 0.8 \
    --top_p 0.95 \
    --seed 0
```

## Quantized Deployment

In fp16 mode, kv_cache int8 quantization can be enabled, allowing a single card to serve more users.
First, run the quantization script; the quantization parameters are stored in the `workspace/triton_models/weights` directory generated by `deploy.py`.

```shell
# --symmetry : symmetric or asymmetric quantization, defaults to True
# --offload  : keep the model on CPU and load only the needed modules onto the GPU during inference, defaults to False
# --num_tp   : number of GPUs used for tensor parallelism; keep it consistent with deploy.py
python3 -m lmdeploy.lite.apis.kv_qparams \
  --model $HF_MODEL \
  --output_dir $DEPLOY_WEIGHT_DIR \
  --symmetry True \
  --offload False \
  --num_tp 1
```
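
For instance, with the InternLM model downloaded earlier and the default workspace layout, the invocation might look like this (the paths are illustrative, and `--num_tp 1` assumes `deploy.py` was also run with a single GPU):

```shell
python3 -m lmdeploy.lite.apis.kv_qparams \
  --model /path/to/internlm-chat-7b \
  --output_dir workspace/triton_models/weights \
  --symmetry True \
  --offload False \
  --num_tp 1
```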

Then adjust `workspace/triton_models/weights/config.ini`:

- Change `use_context_fmha` to 0, which disables it
- Set `quant_policy` to 4. This parameter defaults to 0, which means quantization is not enabled (the resulting entries are sketched below)
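
After the edits, the two entries in `config.ini` would read as follows (a sketch showing only the two keys named above; all other fields stay unchanged):

```ini
use_context_fmha = 0
quant_policy = 4
```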

Here are the [quantization test results](./docs/zh_cn/quantization.md).

## Contributing

We appreciate all contributors for their efforts to improve and enhance LMDeploy. Please refer to the [Contributing Guide](.github/CONTRIBUTING.md) to learn how to participate in the project.

## Acknowledgements

- [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)

## License

This project is released under the [Apache 2.0 license](LICENSE).