# Qwen-7B-chat

## Paper

The Qwen-7B-chat language model currently has only a technical report, available at:

https://github.com/QwenLM/Qwen-7B/blob/main/tech_memo.md

Qwen-VL is obtained by adding a visual encoder on top of Qwen-7B; paper title and link:

`Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities`

https://arxiv.org/pdf/2308.12966.pdf

## Model Structure

![qwen](qwen.jpg)

```
Tongyi Qianwen-7B (Qwen-7B) is the 7-billion-parameter model of the Tongyi Qianwen large-model series developed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model trained on very large-scale pre-training data. The pre-training data is diverse and wide-ranging, including large amounts of web text, professional books, and code.
```

## Algorithm Principles

![qwen](qwen.png)

```
Model architecture: Qwen-7B adopts an architecture similar to LLaMA. The main differences from the standard Transformer are: 1) untied embeddings; 2) rotary position embeddings (RoPE); 3) no biases except for QKV in attention; 4) RMSNorm instead of LayerNorm; 5) SwiGLU instead of ReLU; and 6) flash attention to accelerate training. The model has 32 layers, an embedding dimension of 4096, and 32 attention heads.
```
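Two of the components named above, RMSNorm and SwiGLU, are easy to see in isolation. A minimal NumPy sketch (illustrative only, with toy shapes — not the actual Qwen implementation, which uses hidden size 4096):

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm: rescale by the root-mean-square of the activations;
    # unlike LayerNorm there is no mean subtraction and no bias.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return x / rms * weight

def swiglu(x, w_gate, w_up):
    # SwiGLU: a SiLU-gated linear unit used in place of ReLU in the MLP.
    gate = x @ w_gate
    silu = gate / (1.0 + np.exp(-gate))  # SiLU (swish) activation
    return silu * (x @ w_up)

hidden = 8  # toy hidden size for illustration
x = np.random.randn(2, hidden)
y = rms_norm(x, np.ones(hidden))
h = swiglu(x, np.random.randn(hidden, 16), np.random.randn(hidden, 16))
print(y.shape, h.shape)
```

With a unit `weight`, each row of `y` comes out with RMS ≈ 1, which is the whole point of the normalization.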

## Environment Setup

Running in Docker is recommended; a Docker image can be pulled from [SourceFind (光源)](https://www.sourcefind.cn/#/main-page):

```
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:1.13.1-centos7.6-dtk-23.04-py39-latest

docker run -dit --network=host --name=qwen_pytorch --privileged --device=/dev/kfd --device=/dev/dri --ipc=host --shm-size=16G --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root --ulimit stack=-1:-1 --ulimit memlock=-1:-1 image.sourcefind.cn:5000/dcu/admin/base/pytorch:1.13.1-centos7.6-dtk-23.04-py39-latest
```

Enter the container and install the dependencies:

```
pip install -r requirements.txt  -i https://mirrors.aliyun.com/pypi/simple/  --trusted-host mirrors.aliyun.com
```

Note that apex, torch, and deepspeed must be downloaded in the matching versions from the [developer community](https://cancon.hpccube.com:65024/4/main/).

## Dataset

```
The alpaca_gpt4_zh dataset is used. It is already included in the data directory, in the file alpaca_gpt4_data_zh.json.
```

```
# dataset directory tree
data
├── alpaca_gpt4_data_en.json
└── alpaca_gpt4_data_zh.json
```
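These files follow the standard Alpaca JSON format: a list of records with `instruction`, `input`, and `output` fields (an assumption based on the upstream alpaca_gpt4 release; check the file itself to confirm). A minimal sketch of parsing one record and turning it into a prompt string:

```python
import json

# A one-record sample in the assumed Alpaca format (illustrative,
# not taken verbatim from alpaca_gpt4_data_zh.json).
sample = '''[
  {"instruction": "Translate the following sentence into English.",
   "input": "通义千问是阿里云研发的大语言模型。",
   "output": "Tongyi Qianwen is a large language model developed by Alibaba Cloud."}
]'''
records = json.loads(sample)

def to_prompt(rec):
    # Concatenate the instruction with the optional input field.
    if rec.get("input"):
        return f"{rec['instruction']}\n{rec['input']}"
    return rec["instruction"]

# In practice the whole file would be loaded the same way:
# records = json.load(open("data/alpaca_gpt4_data_zh.json", encoding="utf-8"))
print(to_prompt(records[0]))
```

The `output` field serves as the supervision target during fine-tuning.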


## Model Download

[Qwen model download](https://huggingface.co/Qwen/Qwen-7B-Chat/tree/main)

## Qwen Training

### Training (single node)

```
bash run-node.sh
```

### Training (multi-node)

```
# Edit the node names, activate the matching virtual environment, set the model path, etc.; change hostfile to list the nodes you are using.
sh mpirun-nodes.sh
```

## Results

ZeRO-3 training on the Wuzhen cluster with two nodes and eight accelerator cards:

|          train           |  loss  |
| :----------------------: | :----: |
| 1.44 epochs (8780 steps) | 1.3917 |

## Application Scenarios

### Algorithm Category

`Natural Language Processing`

### Key Application Industries

`NLP, intelligent chat assistants`

## Source Repository and Issue Feedback

https://developer.hpccube.com/codes/modelzoo/qwen-torch

## References

https://github.com/hiyouga/LLaMA-Efficient-Tuning/tree/main