# InternLM
## Paper


## Model Architecture

InternLM is an open-source, lightweight training framework designed to support model pre-training without heavy dependencies. It deeply integrates high-performance operators such as Flash-Attention and Apex to improve training efficiency, and its Hybrid ZeRO technique efficiently overlaps computation with communication, greatly reducing cross-node communication traffic during training. A configuration sketch illustrating these options is given after the figure below.

![img](./docs/interlm.jpg)
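
As a rough illustration of the Hybrid ZeRO idea, the upstream InternLM training repository configures parallelism and communication/computation overlap through a Python config file. The sketch below is an assumption based on upstream example configs; it is not part of this deployment repo, and key names may differ between versions:

```python
# Hypothetical excerpt of an InternLM training config (upstream-style Python config).
# Key names follow upstream InternLM examples and are NOT defined by this repo.
parallel = dict(
    zero1=8,          # shard optimizer states across groups of 8 ranks (Hybrid ZeRO)
    tensor=1,         # tensor-parallel size
    pipeline=dict(size=1, interleaved_overlap=True),
    sequence_parallel=False,
)

hybrid_zero_optimizer = dict(
    overlap_sync_grad=True,    # overlap gradient reduction with backward computation
    overlap_sync_param=True,   # overlap parameter sync with forward computation
    reduce_bucket_size=512 * 1024 * 1024,
    clip_grad_norm=1.0,
)
```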

## Algorithm

InternLM is a family of foundation language models ranging from 7B to 20B parameters. The models are trained on trillions of tokens, demonstrating that state-of-the-art models can be trained exclusively on publicly available datasets, without relying on proprietary or inaccessible data.

![img](./docs/interlm.png)

## Environment Setup

* A Docker image for inference can be pulled from the SourceFind (光源) registry:

  ```bash
  # Recommended image
  docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.1.0-ubuntu20.04-dtk24.04.1-py3.10
  # <Image ID>       replace with the ID of the image pulled above
  # <Host Path>      path on the host
  # <Container Path> mount path inside the container
  docker run -it --name internlm --shm-size=1024G --device=/dev/kfd --device=/dev/dri/ --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v /opt/hyhal:/opt/hyhal:ro --ulimit memlock=-1:-1 --ipc=host --network host --group-add video -v <Host Path>:<Container Path> <Image ID> /bin/bash
  ```

  Image version dependencies (a quick check follows below):

  * DTK driver: 24.04.1
  * PyTorch: 2.1.0
  * Python: 3.10
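
  A quick sanity check to run inside the container, assuming the image's PyTorch build exposes the usual `torch.cuda` interface (DCU/ROCm builds generally do):

  ```python
  # Verify the Python/PyTorch stack inside the container.
  import sys
  import torch

  print("Python :", sys.version.split()[0])   # expect 3.10.x
  print("PyTorch:", torch.__version__)        # expect 2.1.0
  # On DCU (ROCm-based) builds, HIP devices are reported through the torch.cuda API.
  print("Device available:", torch.cuda.is_available())
  if torch.cuda.is_available():
      print("Device name:", torch.cuda.get_device_name(0))
  ```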


## Dataset


## Inference

### Build and Install from Source
```bash
# If you use the SourceFind image, you can skip the source build; lmdeploy is pre-installed in the image.
git clone http://developer.hpccube.com/codes/modelzoo/llama_lmdeploy.git
cd llama_lmdeploy
git submodule init && git submodule update
cd lmdeploy
mkdir build && cd build
sh ../generate.sh
make -j 32
make install
cd .. && python3 setup.py install
```
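
After the build, a minimal check (assuming the fork keeps upstream lmdeploy's package layout, including its `__version__` attribute) can confirm that Python picks up the freshly installed package:

```python
# Confirm that the source-built lmdeploy package is the one being imported.
import lmdeploy

print("lmdeploy version :", getattr(lmdeploy, "__version__", "unknown"))
print("installed from   :", lmdeploy.__file__)
```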

### Before Running

```bash
# Step 1: install the Python dependencies
cd lmdeploy
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
# Step 2: set up the DTK runtime environment
source /opt/dtk/cuda/env.sh
```

### Run internlm-chat-7b

```bash
# Model conversion
# <tp>: number of GPUs used for tensor parallelism; it should be a power of two (2^n)

# Run from the bash terminal
lmdeploy chat turbomind ./path_to_interlm7b --tp 1     # after typing a question, press Enter twice to run inference


# Run as a web service (gradio)

# Run in the bash terminal:
# <server-name>: IP address of the gradio server
# <server-port>: port of the gradio server
# <tp>: number of GPUs for tensor parallelism, 2^n (keep it consistent with model conversion)

lmdeploy serve gradio ./path_to_interlm7b --server-name {ip} --server-port {port}

# Then open {ip}:{port} in a web browser to start chatting
```
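
Besides the CLI, upstream lmdeploy also exposes an offline `pipeline` API in Python. If this fork keeps the same interface (an assumption; the Python API is not shown in this repo), an equivalent call would look roughly like the sketch below:

```python
# Minimal sketch of offline inference via the lmdeploy Python API.
# Assumes the fork provides upstream lmdeploy's `pipeline` and
# `TurbomindEngineConfig`; the model path is the same placeholder
# used in the CLI examples above.
from lmdeploy import pipeline, TurbomindEngineConfig

pipe = pipeline(
    "./path_to_interlm7b",
    backend_config=TurbomindEngineConfig(tp=1),  # keep tp consistent with the CLI run
)
responses = pipe(["Please give a short introduction to InternLM."])
print(responses[0].text)
```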

### Run internlm-chat-20b
```bash
# Run from the bash terminal
lmdeploy chat turbomind ./path_to_interlm20b --tp 4


# Run as a web service (gradio)

# Run in the bash terminal:
lmdeploy serve gradio ./path_to_interlm20b --server-name {ip} --server-port {port} --tp 4

# Then open {ip}:{port} in a web browser to start chatting
```


## Result
![interlm](docs/interlm.gif)

### Accuracy



## Application Scenarios

### Algorithm Category

`Conversational Q&A`


### Key Application Industries

`Finance, Research, Education`


## Pretrained Weights

[internlm-chat-7b](https://huggingface.co/internlm/internlm-chat-7b)

[internlm-chat-20b](https://huggingface.co/internlm/internlm-chat-20b)
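
One convenient way to fetch the weights (assuming `huggingface_hub` is available; it is not a dependency declared by this repo) is `snapshot_download`, pointing `local_dir` at the placeholder path used in the inference commands above:

```python
# Download internlm-chat-7b to the local path referenced by the examples above.
# huggingface_hub is an assumed, optional dependency here.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="internlm/internlm-chat-7b",
    local_dir="./path_to_interlm7b",
)
```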

## Source Repository and Issue Feedback
https://developer.hpccube.com/codes/modelzoo/internlm_lmdeploy

## References
https://github.com/InternLM/LMDeploy