# LLama
## Paper
- [https://arxiv.org/pdf/2302.13971.pdf](https://arxiv.org/pdf/2302.13971.pdf)

## Model Architecture
The LLama network is based on the Transformer architecture, incorporating improvements proposed in various subsequent models such as PaLM. The main differences from the original architecture are:
- Pre-normalization. To improve training stability, the input of each Transformer sub-layer is normalized instead of the output, using the RMSNorm normalization function.
- SwiGLU activation function [PaLM]. The ReLU non-linearity is replaced with the SwiGLU activation function to improve performance, with a hidden dimension of 2/3 · 4d instead of the 4d used in PaLM.
- Rotary embeddings. Absolute positional embeddings are removed; instead, rotary positional embeddings (RoPE) are added at every layer of the network.

![img](./docs/llama.png)
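
The three modifications above can be sketched in minimal NumPy to make them concrete. This is only an illustrative sketch, not the actual LLama implementation; the shapes and the RoPE base of 10000 follow common convention:

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # Pre-normalization with RMSNorm: scale by the root mean square of the
    # activations; unlike LayerNorm, no mean is subtracted.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return x / rms * weight

def swiglu(x, W, V):
    # SwiGLU feed-forward gate: SiLU(x @ W) * (x @ V).
    a = x @ W
    return (a * (1.0 / (1.0 + np.exp(-a)))) * (x @ V)

def rope(x, base=10000.0):
    # Rotary positional embedding: rotate each (even, odd) feature pair at
    # position p by an angle p * base**(-2i/d), applied at every layer.
    seq, d = x.shape
    ang = np.arange(seq)[:, None] * base ** (-np.arange(0, d, 2) / d)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * np.cos(ang) - x2 * np.sin(ang)
    out[:, 1::2] = x1 * np.sin(ang) + x2 * np.cos(ang)
    return out
```

Since RoPE is a pure rotation of feature pairs, it preserves vector norms; relative-position information enters through the dot products between rotated queries and keys.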

## Algorithm Overview
LLama is a collection of foundation language models ranging from 7B to 65B parameters. The models are trained on trillions of tokens, demonstrating that state-of-the-art models can be trained exclusively on publicly available datasets, without relying on proprietary and inaccessible data.

![img](./docs/llama_1.png)

## Environment Setup

A Docker image for inference can be pulled from SourceFind (光源):
```bash
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-ubuntu20.04-dtk24.04.1-py3.10
# <Image ID>: replace with the ID of the image pulled above
# <Host Path>: path on the host
# <Container Path>: mapped path inside the container
docker run -it --network=host --name=llama_lmdeploy --privileged --device=/dev/kfd --device=/dev/dri --ipc=host --shm-size=1024G --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root --ulimit stack=-1:-1 --ulimit memlock=-1:-1 -v /opt/hyhal:/opt/hyhal:ro -v <Host Path>:<Container Path> <Image ID> /bin/bash

# After starting the container, install the software dependencies
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
```
Image version dependencies:

* DTK driver: dtk24.04.1
* PyTorch: 2.1.0
* Python: 3.10

## Dataset


## Inference

### Building and Installing from Source
```bash
# If you use the SourceFind image, you can skip building from source; lmdeploy is preinstalled in the image
git clone http://developer.hpccube.com/codes/modelzoo/llama_lmdeploy.git
cd llama_lmdeploy
git submodule init && git submodule update
cd lmdeploy
mkdir build && cd build
sh ../generate.sh
make -j 32
make install
cd .. && python3 setup.py install
```
### Model Download

[LLama](https://huggingface.co/meta-llama)

[LLama-7B](http://113.200.138.88:18080/aimodels/llama-7b-hf)

[LLama-13B](http://113.200.138.88:18080/aimodels/llama-13b-hf)

[LLama-33B](http://113.200.138.88:18080/aimodels/llama-30b-hf)

[LLama-65B](http://113.200.138.88:18080/aimodels/llama-65b-hf)

[LLama2-7B](http://113.200.138.88:18080/aimodels/Llama-2-7b-hf)

[LLama2-13B](http://113.200.138.88:18080/aimodels/Llama-2-13b-hf)

[LLama2-70B](http://113.200.138.88:18080/aimodels/Llama-2-70b-hf)
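
Before conversion, it can help to verify that a downloaded checkpoint directory is complete. A minimal sketch; the expected file names below follow the usual HuggingFace conventions and are assumptions, not requirements stated by this repo:

```python
from pathlib import Path

# Files a HuggingFace-format LLama checkpoint typically contains (assumed).
EXPECTED = ["config.json", "tokenizer_config.json", "tokenizer.model"]

def missing_files(model_dir):
    """Return the expected checkpoint files that are absent from model_dir."""
    d = Path(model_dir)
    return [name for name in EXPECTED if not (d / name).exists()]
```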

Supported models: LLama-7B, LLama-13B, LLama-30B, LLama-65B, LLama2-7B, LLama2-13B, LLama2-70B

> [!CAUTION]
>
> When running LLama-1 models with the latest lmdeploy:
>
> 1. LLama-13B: in tokenizer_config.json, add an "unk_token" entry with the value "\<unk\>"
>
> 2. LLama-65B: in config.json, change the "architectures" value from ["LlAmaForCausalLM"] to ["LlamaForCausalLM"]
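
The config fixes above can be applied with a small script; a sketch assuming a HuggingFace-format checkpoint directory (the path below is a placeholder):

```python
import json
from pathlib import Path

def patch_llama_configs(model_dir):
    """Apply the config fixes noted above to a downloaded checkpoint directory."""
    model_dir = Path(model_dir)

    # LLama-13B: add the missing "unk_token" entry to tokenizer_config.json.
    tok_cfg = model_dir / "tokenizer_config.json"
    if tok_cfg.exists():
        cfg = json.loads(tok_cfg.read_text())
        cfg.setdefault("unk_token", "<unk>")
        tok_cfg.write_text(json.dumps(cfg, indent=2, ensure_ascii=False))

    # LLama-65B: correct the misspelled "LlAmaForCausalLM" architecture name.
    mdl_cfg = model_dir / "config.json"
    if mdl_cfg.exists():
        cfg = json.loads(mdl_cfg.read_text())
        cfg["architectures"] = ["LlamaForCausalLM" if a == "LlAmaForCausalLM" else a
                                for a in cfg.get("architectures", [])]
        mdl_cfg.write_text(json.dumps(cfg, indent=2))

# patch_llama_configs("./llama-13b-hf")  # hypothetical checkpoint path
```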

### Running LLama-7b
```bash
# <model_name> name of the model ('llama', 'internlm', 'vicuna', 'internlm-chat-7b', 'internlm-chat', 'internlm-chat-7b-8k', 'internlm-chat-20b', 'internlm-20b', 'baichuan-7b', 'baichuan2-7b', 'llama2', 'qwen-7b', 'qwen-14b')
# <model_path> path to the model
# <model_format> format of the model ('llama', 'hf', 'qwen')
# <tokenizer_path> path to the tokenizer model (default None; tokenizer.model is then looked up under model_path)
# <dst_path> target path for the saved output (default ./workspace)
# <tp> number of GPUs for tensor parallelism; should be a power of two (2^n)

# Run from the bash CLI
lmdeploy chat turbomind --model_path ./workspace_llama7b --tp 1     # type a question, then press Enter twice to run inference

# Run as a web service

# Run in the shell:
# <model_path_or_server> path of the deployed model, a tritonserver URL, or a restful api URL. A path runs the service directly with gradio; a URL is treated as a tritonserver by default. If the URL is a restful api, also enable the "restful_api" flag.
# <server_name> IP address of the gradio server
# <server_port> port of the gradio server
# <batch_size> batch size for running turbomind directly (default 32)
# <tp> number of GPUs for tensor parallelism; should be a power of two (must match the value used at model conversion)
# <restful_api> flag qualifying model_path_or_server (default False)

lmdeploy serve gradio --model_path_or_server ./workspace_llama7b --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False 

# Open {ip}:{port} in a browser to start chatting
```
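
The `./workspace_llama7b` directory used above must first be produced from a downloaded checkpoint via lmdeploy's convert step, which the parameter comments above describe. A hypothetical invocation (paths are placeholders; verify the exact flag names against `lmdeploy convert --help` in this fork):

```bash
# Hypothetical: convert a HuggingFace-format checkpoint into a turbomind
# workspace before running `lmdeploy chat turbomind`.
lmdeploy convert --model_name llama --model_path ./llama-7b-hf \
    --model_format hf --dst_path ./workspace_llama7b --tp 1
```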

### Running LLama-13b
```bash
# Run from the bash CLI
lmdeploy chat turbomind --model_path ./workspace_llama13b --tp 1

# Run as a web service

# Run in the shell:
lmdeploy serve gradio --model_path_or_server ./workspace_llama13b --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False 

# Open {ip}:{port} in a browser to start chatting
```
### Running LLama-33b
```bash
# Run from the bash CLI
lmdeploy chat turbomind --model_path ./workspace_llama33b --tp 4

# Run as a web service

# Run in the shell:
lmdeploy serve gradio --model_path_or_server ./workspace_llama33b --server_name {ip} --server_port {port} --batch_size 32 --tp 4 --restful_api False 

# Open {ip}:{port} in a browser to start chatting
```

### Running LLama-65b
```bash
# Run from the bash CLI
lmdeploy chat turbomind --model_path ./workspace_llama65b --tp 8

# Run as a web service

# Run in the shell:
lmdeploy serve gradio --model_path_or_server ./workspace_llama65b --server_name {ip} --server_port {port} --batch_size 32 --tp 8 --restful_api False 

# Open {ip}:{port} in a browser to start chatting
```

### Running LLama2-7b
```bash
# Run from the bash CLI
lmdeploy chat turbomind --model_path ./workspace_llama2-7b --tp 1

# Run as a web service

# Run in the shell:
lmdeploy serve gradio --model_path_or_server ./workspace_llama2-7b --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False 

# Open {ip}:{port} in a browser to start chatting
```

### Running LLama2-13b
```bash
# Run from the bash CLI
lmdeploy chat turbomind --model_path ./workspace_llama2-13b --tp 1

# Run as a web service

# Run in the shell:
lmdeploy serve gradio --model_path_or_server ./workspace_llama2-13b --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False 

# Open {ip}:{port} in a browser to start chatting
```

### Running LLama2-70b
```bash
# Run from the bash CLI
lmdeploy chat turbomind --model_path ./workspace_llama2-70b --tp 8

# Run as a web service

# Run in the shell:
lmdeploy serve gradio --model_path_or_server ./workspace_llama2-70b --server_name {ip} --server_port {port} --batch_size 32 --tp 8 --restful_api False 

# Open {ip}:{port} in a browser to start chatting
```

## Result
![llama](docs/llama.gif)

### Accuracy



## Application Scenarios

### Algorithm Category

`Dialogue Q&A`


### Key Application Industries

`Finance, Research, Education`

## Pretrained Weights

Quick download center for pretrained weights: [SCNet AIModels](http://113.200.138.88:18080/aimodels)

The pretrained weights used in this project can be downloaded via the quick download channel:

[llama-7b-hf](http://113.200.138.88:18080/aimodels/llama-7b-hf)

[llama-13b-hf](http://113.200.138.88:18080/aimodels/llama-13b-hf)

[LLama-33B](http://113.200.138.88:18080/aimodels/llama-30b-hf)

[LLama-65B](http://113.200.138.88:18080/aimodels/llama-65b-hf)

[LLama2-7B](http://113.200.138.88:18080/aimodels/Llama-2-7b-hf)

[LLama2-13B](http://113.200.138.88:18080/aimodels/Llama-2-13b-hf)

[LLama2-70B](http://113.200.138.88:18080/aimodels/Llama-2-70b-hf)


## Source Repository & Issue Reporting
https://developer.hpccube.com/codes/modelzoo/llama_lmdeploy

## References
https://github.com/InternLM/LMDeploy