# Qwen
## Paper

`Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities`

https://arxiv.org/pdf/2308.12966.pdf

## Model Architecture

Tongyi Qianwen (Qwen) is Alibaba Cloud's Tongyi Qianwen large-model series at the 7B and 14B parameter scales. Qwen is a Transformer-based large language model trained on a very large pretraining corpus. The pretraining data is diverse and wide-ranging, covering large amounts of web text, professional books, code, and more. On top of Qwen-7B, alignment techniques were used to build Qwen-7B-Chat, an AI assistant based on the large language model.

This project focuses on optimizing the inference performance of Qwen-Chat on the DCU platform, achieving fast conversational responses on DCU hardware.

![qwen](docs/transformer.jpg)

## Algorithm

Qwen adopts an architecture similar to LLaMA. The main differences from the standard Transformer are: 1) untied input/output embeddings; 2) rotary position embeddings; 3) no biases except in the attention QKV projections; 4) RMSNorm instead of LayerNorm; 5) SwiGLU instead of ReLU; and 6) flash attention to accelerate training. The model has 32 layers, an embedding dimension of 4096, and 32 attention heads.
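Two of these components, RMSNorm and the SwiGLU feed-forward block, can be illustrated with a minimal numpy sketch (toy shapes and random weights for illustration only, not the model's actual parameters):

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm: rescale by the root mean square only; unlike LayerNorm,
    # there is no mean subtraction and no bias term.
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return x / rms * weight

def swiglu_ffn(x, w_gate, w_up, w_down):
    # SwiGLU feed-forward block: a SiLU-gated product replaces the ReLU MLP.
    gate = x @ w_gate
    silu = gate / (1.0 + np.exp(-gate))  # SiLU(z) = z * sigmoid(z)
    return (silu * (x @ w_up)) @ w_down

rng = np.random.default_rng(0)
d_model, d_ff = 8, 16  # toy sizes; Qwen-7B's embedding dimension is 4096
x = rng.standard_normal((2, d_model))
h = rms_norm(x, np.ones(d_model))
y = swiglu_ffn(h,
               rng.standard_normal((d_model, d_ff)),
               rng.standard_normal((d_model, d_ff)),
               rng.standard_normal((d_ff, d_model)))
print(h.shape, y.shape)  # (2, 8) (2, 8)
```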

![qwen](docs/qwen.png)
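The rotary position embedding mentioned above can be sketched in a few lines (toy dimensions; the channel-pairing convention varies between implementations). Its key property is that query-key dot products depend only on the relative offset between positions:

```python
import numpy as np

def rope(x, positions, base=10000.0):
    # Rotary position embedding: rotate channel pairs (x_i, x_{i+d/2})
    # by a position-dependent angle.
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)     # one frequency per pair
    angles = positions[:, None] * freqs[None, :]  # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

rng = np.random.default_rng(0)
q, k = rng.standard_normal((1, 8)), rng.standard_normal((1, 8))
# Same relative offset (3) at different absolute positions -> same score.
s1 = rope(q, np.array([0]))[0] @ rope(k, np.array([3]))[0]
s2 = rope(q, np.array([5]))[0] @ rope(k, np.array([8]))[0]
print(np.isclose(s1, s2))  # True
```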

## Environment Setup

Pull the inference Docker image from [光源 (SourceFind)](https://www.sourcefind.cn/#/service-details):
```
docker pull image.sourcefind.cn:5000/dcu/admin/base/custom:lmdeploy0.0.13_dtk23.04_torch1.13_py38
# <Image ID>: replace with the ID of the Docker image pulled above
# <Host Path>: path on the host
# <Container Path>: mount path inside the container
docker run -it --name qwen --shm-size=1024G  --device=/dev/kfd --device=/dev/dri/ --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --ulimit memlock=-1:-1 --ipc=host --network host --group-add video -v <Host Path>:<Container Path> <Image ID> /bin/bash
```
Image version dependencies:
* DTK driver: dtk23.04
* PyTorch: 1.13
* Python: 3.8

## Dataset


## Inference

### Build and Install from Source
```
# If you use the SourceFind image, there is no need to build from source: lmdeploy is already installed in the image, so this step can be skipped
# If you want to use the qwen-72b model, you must rebuild and install lmdeploy from source
git clone http://developer.hpccube.com/codes/modelzoo/Qwen_lmdeploy.git
cd Qwen_lmdeploy
git submodule init && git submodule update
cd lmdeploy
mkdir build && cd build
sh ../generate.sh
make -j 32
make install
cd .. && python3 setup.py install

```

### Model Download

[Qwen-7B-chat](https://huggingface.co/Qwen/Qwen-7B-Chat/tree/main)

[Qwen-14B-chat](https://huggingface.co/Qwen/Qwen-14B-Chat/tree/main)

[Qwen-72B-Chat](https://huggingface.co/Qwen/Qwen-72B-Chat)

### Run Qwen-7B-Chat
```
# Model conversion
# <model_name>: name of the model ('base', 'llama', 'internlm', 'vicuna', 'internlm-chat-7b', 'internlm-chat', 'internlm-chat-7b-8k', 'internlm-chat-20b', 'internlm-20b', 'baichuan-7b', 'baichuan2-7b', 'puyu', 'llama2', 'qwen-7b', 'qwen-14b', 'codellama', 'solar')
# <model_path>: path to the model
# <model_format>: format of the model ('llama', 'hf', 'qwen')
# <tokenizer_path>: path to the tokenizer model
# <dst_path>: destination path for the converted output (default ./workspace)
# <tp>: number of GPUs for tensor parallelism; should be a power of two (2^n)
# <quant_path>: path to a quantized model; may be None (for int4 quantization; use the default None)
# <group_size>: AWQ group size for quantizing fp16 weights to 4 bits (for int4 quantization; use the default 0)

lmdeploy convert --model_name qwen-7b --model_path /path/to/model --model_format qwen --tokenizer_path None --dst_path ./workspace_qwe7b --tp 1 --quant_path None --group_size 0

# Run in the terminal
lmdeploy chat turbomind --model_path ./workspace_qwe7b --tp 1     # after typing a question, press Enter twice to run inference

# Run as a web service

# Run in the terminal:
# <model_path_or_server>: path to the deployed model, a tritonserver URL, or a RESTful API URL; the first runs the service directly with gradio, the second runs via tritonserver by default, and if the URL is a RESTful API the additional flag 'restful_api' must be enabled
# <server_name>: IP address of the gradio server
# <server_port>: port of the gradio server
# <batch_size>: batch size for running TurboMind directly (default 32)
# <tp>: number of GPUs for tensor parallelism; should be a power of two (2^n), matching the value used for model conversion
# <restful_api>: flag for model_path_or_server (default False)

lmdeploy serve gradio --model_path_or_server ./workspace_qwe7b --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False

# Open {ip}:{port} in a browser to start chatting
```
### Run Qwen-14B-Chat
```
# Model conversion
lmdeploy convert --model_name qwen-14b --model_path /path/to/model --model_format qwen --tokenizer_path None --dst_path ./workspace_qwen14b --tp 2 --quant_path None --group_size 0

# Run in the terminal
lmdeploy chat turbomind --model_path ./workspace_qwen14b --tp 2

# Run as a web service

# Run in the terminal:
lmdeploy serve gradio --model_path_or_server ./workspace_qwen14b --server_name {ip} --server_port {port} --batch_size 32 --tp 2 --restful_api False

# Open {ip}:{port} in a browser to start chatting
```
### Run Qwen-72B-Chat

```
# Model conversion
lmdeploy convert --model_name qwen-72b --model_path /path/to/model --model_format qwen --tokenizer_path None --dst_path ./workspace_qwen72b --tp 8 --quant_path None --group_size 0

# Run in the terminal
lmdeploy chat turbomind --model_path ./workspace_qwen72b --tp 8

# Run as a web service

# Run in the terminal:
lmdeploy serve gradio --model_path_or_server ./workspace_qwen72b --server_name {ip} --server_port {port} --batch_size 32 --tp 8 --restful_api False

# Open {ip}:{port} in a browser to start chatting
```

## Result

![qwen inference](docs/qwen推理.gif)

### Accuracy



## Application Scenarios

### Algorithm Category

`Conversational Q&A`


### Key Application Industries

`Healthcare, scientific research, finance, education`


## Source Repository and Issue Feedback
https://developer.hpccube.com/codes/modelzoo/qwen_lmdeploy

## References
https://github.com/InternLM/LMDeploy