<div align="center">
  <img src="resources/lmdeploy-logo.png" width="450"/>

[English](README.md) | 简体中文

</div>

<p align="center">
    👋 join us on <a href="https://discord.gg/xa29JuW87d" target="_blank">Discord</a> and <a href="https://r.vansin.top/?r=internwx" target="_blank">WeChat</a>
</p>

______________________________________________________________________

## News 🎉

- \[2023/07\] TurboMind supports tensor parallel inference of InternLM
- \[2023/07\] TurboMind supports the Llama2 7B/13B models

______________________________________________________________________

## Introduction

LMDeploy is developed jointly by the [MMDeploy](https://github.com/open-mmlab/mmdeploy) and [MMRazor](https://github.com/open-mmlab/mmrazor) teams and provides a full suite of lightweight compression, deployment, and serving solutions for LLM tasks.
This powerful toolbox offers the following core features:

- **Efficient inference engine TurboMind**: Based on [FasterTransformer](https://github.com/NVIDIA/FasterTransformer), we have implemented the efficient inference engine TurboMind, which supports inference of InternLM, LLaMA, vicuna, and other models on NVIDIA GPUs.

- **Interactive inference mode**: By caching the attention k/v of multi-turn conversations, the engine remembers the dialogue history and avoids reprocessing historical sessions.

- **Multi-GPU deployment and quantization**: We provide comprehensive model deployment and quantization support, validated at different scales.

- **Persistent batch inference**: Further optimizes model execution efficiency.

  ![PersistentBatchInference](https://github.com/InternLM/lmdeploy/assets/67539920/e3876167-0671-44fc-ac52-5a0f9382493e)

## Performance

As shown in the figure below, we compared the token generation speed of facebookresearch/llama, HuggingFace Transformers, and DeepSpeed on a 7B model.

Test device: NVIDIA A100 (80G)

Test metric: throughput (token/s)

Test data: 1 input token, 2048 output tokens

TurboMind's throughput exceeds 2000 token/s, which is roughly 5%–15% higher than DeepSpeed overall and 2.3 times that of HuggingFace Transformers.

![benchmark](https://user-images.githubusercontent.com/12756472/251422522-e94a3db9-eb16-432a-8d8c-078945e7b99a.png)

## Quick Start

### Installation

```shell
conda create -n lmdeploy python=3.10
conda activate lmdeploy
git clone https://github.com/InternLM/lmdeploy.git
cd lmdeploy
pip install -e .
```
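
A quick way to confirm that the editable install succeeded is to import the package (a minimal sanity check, not a step from the original guide):

```shell
# the import should succeed without errors after `pip install -e .`
python3 -c "import lmdeploy"
```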

### Deploy InternLM

#### Get the InternLM model

```shell
# 1. Download the InternLM model

# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/internlm/internlm-chat-7b /path/to/internlm-chat-7b

# if you want to clone without large files – just their pointers
# prepend your git clone with the following env var:
GIT_LFS_SKIP_SMUDGE=1

# 2. Convert the model to the format required by turbomind. The default output path is ./workspace
python3 -m lmdeploy.serve.turbomind.deploy internlm-7b /path/to/internlm-chat-7b hf

```

#### Inference with turbomind

```shell
docker run --gpus all --rm -v $(pwd)/workspace:/workspace -it openmmlab/lmdeploy:latest \
    python3 -m lmdeploy.turbomind.chat internlm /workspace
```
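
If you have built the turbomind backend in your local environment, the same chat entry point shown in the docker command above should also work directly (a sketch under that assumption; the docker image is the tested path):

```shell
# same module and arguments as inside the container, using the local ./workspace
python3 -m lmdeploy.turbomind.chat internlm ./workspace
```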

```{note}
turbomind requires at least 22.7 GB of GPU memory to run FP16 inference of the InternLM-7B model. GPUs such as the 3090, V100, and A100 are recommended.
```

#### Deploy the inference service

Launch the inference service with the following command:

```shell
bash workspace/service_docker_up.sh
```

You can chat with the inference service through the command line:

```shell
python3 -m lmdeploy.serve.client {server_ip_address}:33337 internlm
```

You can also chat with it through the WebUI:

```shell
python3 -m lmdeploy.app {server_ip_address}:33337 internlm
```

![](https://github.com/InternLM/lmdeploy/assets/67539920/08d1e6f2-3767-44d5-8654-c85767cec2ab)

For how to deploy other models, such as LLaMA and vicuna, please refer to [this guide](docs/zh_cn/serving.md).

### Inference with PyTorch

You must make sure that deepspeed is installed in your environment:

```shell
pip install deepspeed
```

#### Single GPU

```shell
python3 -m lmdeploy.pytorch.chat $NAME_OR_PATH_TO_HF_MODEL \
    --max_new_tokens 64 \
    --temperature 0.8 \
    --top_p 0.95 \
    --seed 0
```

#### Tensor parallelism with DeepSpeed

```shell
deepspeed --module --num_gpus 2 lmdeploy.pytorch.chat \
    $NAME_OR_PATH_TO_HF_MODEL \
    --max_new_tokens 64 \
    --temperature 0.8 \
    --top_p 0.95 \
    --seed 0
```

## Quantized Deployment

In fp16 mode, kv_cache int8 quantization can be enabled so that a single GPU can serve more users.
First, run the quantization script. The quantization parameters are stored in the `workspace/triton_models/weights` directory generated by `deploy.py`.

```shell
# --symmetry: symmetric or asymmetric quantization (default: True)
# --offload:  keep the model on the CPU and load only parts of it onto the GPU
#             during inference (default: False)
# --num_tp:   number of GPUs used for tensor parallelism; keep it consistent
#             with deploy.py
python3 -m lmdeploy.lite.apis.kv_qparams \
  --model $HF_MODEL \
  --output_dir $DEPLOY_WEIGHT_DIR \
  --symmetry True \
  --offload False \
  --num_tp 1
```

Then adjust `workspace/triton_models/weights/config.ini`:

- Change `use_context_fmha` to 0, which disables it
- Set `quant_policy` to 4. This parameter defaults to 0, which means quantization is off; see the sketch below for one way to apply both edits
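
One possible way to apply both edits non-interactively is with `sed` (a sketch that assumes the keys appear in `config.ini` as plain `key = value` lines with their default values; editing the file by hand works just as well):

```shell
# turn off context FMHA and enable the int8 kv_cache quantization policy
sed -i 's/^use_context_fmha = 1$/use_context_fmha = 0/' workspace/triton_models/weights/config.ini
sed -i 's/^quant_policy = 0$/quant_policy = 4/' workspace/triton_models/weights/config.ini
```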

The quantization test results are available [here](./docs/zh_cn/quantization.md).

## Contributing

We appreciate all the contributors for their efforts to improve LMDeploy. Please refer to the [contributing guide](.github/CONTRIBUTING.md) for guidance on how to participate in the project.

## Acknowledgements

- [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)

## License

This project is released under the [Apache 2.0 license](LICENSE).