# Colossal-AI
<div id="top" align="center">

   [![logo](https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/Colossal-AI_logo.png)](https://www.colossalai.org/)

   An integrated large-scale model training system with efficient parallelization techniques.

   <h3> <a href="https://arxiv.org/abs/2110.14883"> Paper </a> | 
   <a href="https://www.colossalai.org/"> Documentation </a> | 
   <a href="https://github.com/hpcaitech/ColossalAI-Examples"> Examples </a> |   
   <a href="https://github.com/hpcaitech/ColossalAI/discussions"> Forum </a> | 
   <a href="https://medium.com/@hpcaitech"> Blog </a></h3>

   [![Build](https://github.com/hpcaitech/ColossalAI/actions/workflows/build.yml/badge.svg)](https://github.com/hpcaitech/ColossalAI/actions/workflows/build.yml)
   [![Documentation](https://readthedocs.org/projects/colossalai/badge/?version=latest)](https://colossalai.readthedocs.io/en/latest/?badge=latest)
   [![CodeFactor](https://www.codefactor.io/repository/github/hpcaitech/colossalai/badge)](https://www.codefactor.io/repository/github/hpcaitech/colossalai)
   [![HuggingFace badge](https://img.shields.io/badge/%F0%9F%A4%97HuggingFace-Join-yellow)](https://huggingface.co/hpcai-tech)
   [![slack badge](https://img.shields.io/badge/Slack-join-blueviolet?logo=slack)](https://join.slack.com/t/colossalaiworkspace/shared_invite/zt-z7b26eeb-CBp7jouvu~r0~lcFzX832w)
   [![WeChat badge](https://img.shields.io/badge/微信-加入-green?logo=wechat)](https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/WeChat.png)
   

   | [English](README.md) | [中文](README-zh-Hans.md) |

</div>

## Table of Contents
<ul>
 <li><a href="#Why-Colossal-AI">Why Colossal-AI</a> </li>
 <li><a href="#Features">Features</a> </li>
 <li>
   <a href="#Demo">Demo</a> 
   <ul>
     <li><a href="#ViT">ViT</a></li>
     <li><a href="#GPT-3">GPT-3</a></li>
     <li><a href="#GPT-2">GPT-2</a></li>
     <li><a href="#BERT">BERT</a></li>
     <li><a href="#PaLM">PaLM</a></li>
   </ul>
 </li>

 <li>
   <a href="#Installation">Installation</a>
   <ul>
     <li><a href="#PyPI">PyPI</a></li>
     <li><a href="#Install-From-Source">Install From Source</a></li>
   </ul>
 </li>
 <li><a href="#Use-Docker">Use Docker</a></li>
 <li><a href="#Community">Community</a></li>
 <li><a href="#contributing">Contributing</a></li>
 <li><a href="#Quick-View">Quick View</a></li>
   <ul>
     <li><a href="#Start-Distributed-Training-in-Lines">Start Distributed Training in Lines</a></li>
     <li><a href="#Write-a-Simple-2D-Parallel-Model">Write a Simple 2D Parallel Model</a></li>
   </ul>
 <li><a href="#Cite-Us">Cite Us</a></li>
</ul>

## Why Colossal-AI
<div align="center">
   <a href="https://youtu.be/KnXSfjqkKN0">
   <img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/JamesDemmel_Colossal-AI.png" width="600" />
   </a>

   Prof. James Demmel (UC Berkeley): Colossal-AI makes distributed training efficient, easy and scalable.
</div>

<p align="right">(<a href="#top">back to top</a>)</p>

## Features

Colossal-AI provides a collection of parallel training components. Our goal is to let you write distributed deep learning models just as you would write a model on your laptop, with user-friendly tools to kickstart distributed training in a few lines.

- Parallelism strategies
  - Data Parallelism
  - Pipeline Parallelism
  - 1D, [2D](https://arxiv.org/abs/2104.05343), [2.5D](https://arxiv.org/abs/2105.14500), [3D](https://arxiv.org/abs/2105.14450) Tensor Parallelism
  - [Sequence Parallelism](https://arxiv.org/abs/2105.13120)
  - [Zero Redundancy Optimizer (ZeRO)](https://arxiv.org/abs/1910.02054)

- Heterogeneous Memory Management
  - [PatrickStar](https://arxiv.org/abs/2108.05818)

- Friendly Usage
  - Parallelism based on a configuration file (see the sketch below)
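
A configuration file is just a plain Python file of declared variables that you hand to `colossalai.launch`. A minimal sketch is shown below (the exact keys depend on which features you enable; see the documentation for the full schema):

```python
# config.py -- a minimal sketch of a Colossal-AI configuration file;
# the exact keys depend on which features you enable
parallel = dict(
    pipeline=2,                      # split the model into 2 pipeline stages
    tensor=dict(size=4, mode='2d'),  # 2D tensor parallelism across 4 GPUs
)
```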

<p align="right">(<a href="#top">back to top</a>)</p>

## Demo
### ViT
<p align="center">
<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/ViT.png" width="450" />
</p>

- 14x larger batch size and 5x faster training with Tensor Parallelism = 64

### GPT-3
<p align="center">
<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/GPT3.png" width=700/>
</p>

- Save 50% GPU resources, and 10.7% acceleration

### GPT-2
<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/GPT2.png" width=800/>

- 11x lower GPU memory consumption, and superlinear scaling efficiency with Tensor Parallelism

<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/(updated)GPT-2.png" width=800>

- 24x larger model size on the same hardware
- over 3x acceleration
### BERT
<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/BERT.png" width=800/>

- 2x faster training, or 50% longer sequence length

### PaLM
- [PaLM-colossalai](https://github.com/hpcaitech/PaLM-colossalai): Scalable implementation of Google's Pathways Language Model ([PaLM](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html)).

Please visit our [documentation and tutorials](https://www.colossalai.org/) for more details.

<p align="right">(<a href="#top">back to top</a>)</p>

## Installation

### PyPI

```bash
pip install colossalai
```
This command installs the CUDA extensions automatically if CUDA, NVCC, and PyTorch are already installed on your machine.

If you don't want to install the CUDA extensions, add `--global-option="--no_cuda_ext"`:
```bash
pip install colossalai --global-option="--no_cuda_ext"
```
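
You can quickly verify the installation by importing the package and printing its version (assuming the release you installed exposes `colossalai.__version__`):

```bash
python -c "import colossalai; print(colossalai.__version__)"
```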

### Install From Source

> The version of Colossal-AI will be in line with the main branch of the repository. Feel free to create an issue if you encounter any problems. :-)

```shell
git clone https://github.com/hpcaitech/ColossalAI.git
cd ColossalAI
# install dependency
pip install -r requirements/requirements.txt

# install colossalai
pip install .
```

If you don't want to install and enable CUDA kernel fusion (note that it is compulsory when using a fused optimizer):

```shell
pip install --global-option="--no_cuda_ext" .
```

<p align="right">(<a href="#top">back to top</a>)</p>

## Use Docker

Run the following command to build a Docker image from the provided Dockerfile.

```bash
cd ColossalAI
docker build -t colossalai ./docker
```

Run the following command to start the Docker container in interactive mode.

```bash
docker run -ti --gpus all --rm --ipc=host colossalai bash
```
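
If you want your code and datasets to be visible inside the container, you can mount a host directory with Docker's standard `-v` flag; the `/workspace` mount point here is just an example:

```bash
docker run -ti --gpus all --rm --ipc=host -v $PWD:/workspace colossalai bash
```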

<p align="right">(<a href="#top">back to top</a>)</p>

## Community

Join the Colossal-AI community on [Forum](https://github.com/hpcaitech/ColossalAI/discussions),
[Slack](https://join.slack.com/t/colossalaiworkspace/shared_invite/zt-z7b26eeb-CBp7jouvu~r0~lcFzX832w),
and [WeChat](https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/WeChat.png "qrcode") to share your suggestions, feedback, and questions with our engineering team.

## Contributing

If you wish to contribute to this project, please follow the guidelines in [Contributing](./CONTRIBUTING.md).

Thanks so much to all of our amazing contributors!

<a href="https://github.com/hpcaitech/ColossalAI/graphs/contributors"><img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/contributor_avatar.png" width="800px"></a>

*The order of contributor avatars is randomly shuffled.*

<p align="right">(<a href="#top">back to top</a>)</p>

## Quick View

### Start Distributed Training in Lines

```python
import colossalai
from colossalai.utils import get_dataloader


# my_config can be a path to a config file or a dictionary object.
# 'localhost' only works for single-node training; specify the
# node name when using multiple nodes.
colossalai.launch(
    config=my_config,
    rank=rank,
    world_size=world_size,
    backend='nccl',
    port=29500,
    host='localhost'
)

# build your model
model = ...

# build your dataset; the dataloader will use a distributed
# data sampler by default
train_dataset = ...
train_dataloader = get_dataloader(dataset=train_dataset,
                                  shuffle=True)


# build your optimizer
optimizer = ...

# build your loss function
criterion = ...

# initialize colossalai
engine, train_dataloader, _, _ = colossalai.initialize(
    model=model,
    optimizer=optimizer,
    criterion=criterion,
    train_dataloader=train_dataloader
)

# start training
engine.train()
for epoch in range(NUM_EPOCHS):
    for data, label in train_dataloader:
        engine.zero_grad()
        output = engine(data)
        loss = engine.criterion(output, label)
        engine.backward(loss)
        engine.step()

```
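
The snippet above passes `rank`, `world_size`, `host`, and `port` to `colossalai.launch` explicitly. If you start the script with the PyTorch distributed launcher, these values can instead be picked up from the environment via `colossalai.launch_from_torch` (a minimal sketch; the launcher's flags vary across PyTorch versions):

```python
import colossalai

# rank, world size, host and port are read from the environment
# variables set by the PyTorch distributed launcher
colossalai.launch_from_torch(config='./config.py')
```

For example, a single-node run on 4 GPUs could then be started with `python -m torch.distributed.launch --nproc_per_node=4 train.py`.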

### Write a Simple 2D Parallel Model

Let's say we have a huge MLP model whose very large hidden size makes it difficult to fit into a single GPU. You can distribute the model weights across GPUs in a 2D mesh while still writing your model in a familiar way.

```python
from colossalai.nn import Linear2D
import torch.nn as nn


class MLP_2D(nn.Module):

    def __init__(self):
        super().__init__()
        self.linear_1 = Linear2D(in_features=1024, out_features=16384)
        self.linear_2 = Linear2D(in_features=16384, out_features=1024)

    def forward(self, x):
        x = self.linear_1(x)
        x = self.linear_2(x)
        return x

```
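
Note that `Linear2D` expects Colossal-AI to have been launched with a 2D tensor-parallel process group, which is set up through the configuration file. A minimal sketch, assuming 4 GPUs arranged as a 2 x 2 device mesh:

```python
# config.py -- the '2d' mode shards both dimensions of each weight
# matrix across a 2 x 2 device mesh (4 GPUs in total)
parallel = dict(
    tensor=dict(size=4, mode='2d'),
)
```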

<p align="right">(<a href="#top">back to top</a>)</p>

## Cite Us

```
@article{bian2021colossal,
  title={Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training},
  author={Bian, Zhengda and Liu, Hongxin and Wang, Boxiang and Huang, Haichen and Li, Yongbin and Wang, Chuanrui and Cui, Fan and You, Yang},
  journal={arXiv preprint arXiv:2110.14883},
  year={2021}
}
```

<p align="right">(<a href="#top">back to top</a>)</p>