# InternLM
## Paper


## Model Architecture
InternLM is an open-source, lightweight training framework designed to support model pre-training without heavy dependencies. It deeply integrates high-performance operators such as Flash-Attention and Apex to improve training efficiency, and its Hybrid Zero technique efficiently overlaps computation with communication, substantially reducing cross-node communication traffic during training.

![img](./docs/interlm.jpg)

## Algorithm Principles
InternLM is a family of foundation language models ranging from 7B to 20B parameters. The models are trained on trillions of tokens and show that state-of-the-art models can be trained exclusively on publicly available datasets, without relying on proprietary or inaccessible data.

![img](./docs/interlm.png)

## Environment Setup

A Docker image for inference can be pulled from the SourceFind (光源) registry:
```
docker pull image.sourcefind.cn:5000/dcu/admin/base/custom:lmdeploy0.0.13_dtk23.04_torch1.13_py38
# <Image ID>: replace with the ID of the image pulled above
# <Host Path>: path on the host
# <Container Path>: mount path inside the container
docker run -it --name interlm --shm-size=1024G  --device=/dev/kfd --device=/dev/dri/ --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --ulimit memlock=-1:-1 --ipc=host --network host --group-add video -v <Host Path>:<Container Path> <Image ID> /bin/bash
```
Image version dependencies:
* DTK driver: dtk23.04
* PyTorch: 1.13
* Python: 3.8
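
A quick sanity check of the stack inside the container can save debugging time later. A minimal sketch, assuming the image ships the versions listed above:
```
# Verify interpreter and framework versions inside the container
python3 --version                                      # expect Python 3.8.x
python3 -c "import torch; print(torch.__version__)"    # expect 1.13.x
# The image is described as shipping lmdeploy preinstalled; confirm the import works
python3 -c "import lmdeploy"
```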

## Dataset


## Inference

### Build and Install from Source
```
# If you are using the SourceFind image, skip this build step; lmdeploy is already installed in the image.
git clone http://developer.hpccube.com/codes/modelzoo/llama_lmdeploy.git
cd llama_lmdeploy
git submodule init && git submodule update
cd lmdeploy
mkdir build && cd build
sh ../generate.sh
make -j 32
make install
cd .. && python3 setup.py install
```
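
After the build, it is worth confirming that `import lmdeploy` resolves to the freshly installed copy rather than some other checkout; a minimal sketch:
```
# Print where the imported package lives; it should point at the installed location
python3 -c "import lmdeploy; print(lmdeploy.__file__)"
```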

### Run internlm-chat-7b
```
# Model conversion
# <model_name>: model name ('llama', 'internlm', 'vicuna', 'internlm-chat-7b', 'internlm-chat', 'internlm-chat-7b-8k', 'internlm-chat-20b', 'internlm-20b', 'baichuan-7b', 'baichuan2-7b', 'llama2', 'qwen-7b', 'qwen-14b')
# <model_path>: path to the model
# <model_format>: model format ('llama', 'hf', 'qwen')
# <tokenizer_path>: path to the tokenizer model (default None, in which case tokenizer.model is looked up under model_path)
# <dst_path>: destination path for the converted output (default ./workspace)
# <tp>: number of GPUs for tensor parallelism; should be a power of two (2^n)

lmdeploy convert --model_name internlm-chat-7b --model_path /path/to/model --model_format hf --tokenizer_path None --dst_path ./workspace_interlm7b --tp 1

# Run in the terminal
lmdeploy chat turbomind --model_path ./workspace_interlm7b --tp 1     # type a question, then press Enter twice to run inference

# Run as a web service

# In the terminal, run:
# <model_path_or_server>: path of the deployed model, a tritonserver URL, or a RESTful API URL. A local path serves the model directly through gradio; a URL runs against tritonserver by default. If the URL points to a RESTful API, also enable the 'restful_api' flag.
# <server_name>: IP address of the gradio server
# <server_port>: port of the gradio server
# <batch_size>: batch size when running TurboMind directly (default 32)
# <tp>: number of GPUs for tensor parallelism; should be a power of two (2^n) and must match the value used during model conversion
# <restful_api>: flag qualifying model_path_or_server (default False)

lmdeploy serve gradio --model_path_or_server ./workspace_interlm7b --server_name {ip} --server_port {port} --batch_size 32 --tp 1 --restful_api False

# Open {ip}:{port} in a browser to start a conversation
```
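
Before starting a chat session, it is worth confirming that the conversion actually wrote the TurboMind workspace. A minimal sketch; the `triton_models` layout mentioned in the comment is typical `lmdeploy convert` output and may differ across versions:
```
# List the converted workspace; a triton_models/ directory holding the converted
# weights and tokenizer is the typical layout (may vary between lmdeploy versions)
ls ./workspace_interlm7b
```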

### Run internlm-chat-20b
```
# Model conversion
lmdeploy convert --model_name internlm-chat-20b --model_path /path/to/model --model_format hf --tokenizer_path None --dst_path ./workspace_interlm20b --tp 4

# Run in the terminal
lmdeploy chat turbomind --model_path ./workspace_interlm20b --tp 4

# Run as a web service

# In the terminal, run:
lmdeploy serve gradio --model_path_or_server ./workspace_interlm20b --server_name {ip} --server_port {port} --batch_size 32 --tp 4 --restful_api False

# Open {ip}:{port} in a browser to start a conversation
```
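
Since the 20B model is sharded with `--tp 4`, at least four accelerators must be visible to the process. A minimal check, assuming the PyTorch build in the image exposes the DCUs through the standard CUDA device API, as ROCm-derived builds usually do:
```
# Should print a device count >= the --tp value used above
python3 -c "import torch; print(torch.cuda.device_count())"
```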


## Result
![interlm](docs/interlm.gif)

### Accuracy



## Application Scenarios

### Algorithm Category

`Conversational Q&A`


### Key Application Industries

`Finance, Research, Education`


## Pretrained Weights


[internlm-chat-7b](https://huggingface.co/internlm/internlm-chat-7b)

[internlm-chat-20b](https://huggingface.co/internlm/internlm-chat-20b)
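
If you prefer fetching the weights directly from Hugging Face, one common route is git-lfs; a minimal sketch (assumes git-lfs is installed and huggingface.co is reachable):
```
# Fetch the 7B chat weights; swap in internlm-chat-20b for the larger model
git lfs install
git clone https://huggingface.co/internlm/internlm-chat-7b
```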


Fast download center for pretrained weights: [SCNet AIModels](http://113.200.138.88:18080/aimodels)

The pretrained weights used in this project can be downloaded through the fast channel: [internlm-chat-7b](http://113.200.138.88:18080/aimodels/internlm-chat-7b), [internlm-chat-20b](http://113.200.138.88:18080/aimodels/internlm-chat-20b)

## Source Repository and Issue Feedback
https://developer.hpccube.com/codes/modelzoo/internlm_lmdeploy

## References
https://github.com/InternLM/LMDeploy