# Colossal-AI

[![logo](./docs/images/Colossal-AI_logo.png)](https://www.colossalai.org/)

<div align="center">
   <h3> <a href="https://arxiv.org/abs/2110.14883"> Paper </a> | 
   <a href="https://www.colossalai.org/"> Documentation </a> | 
   <a href="https://github.com/hpcaitech/ColossalAI-Examples"> Examples </a> |   
   <a href="https://github.com/hpcaitech/ColossalAI/discussions"> Forum </a> | 
   <a href="https://medium.com/@hpcaitech"> Blog </a></h3> 
   <br/>

   [![Build](https://github.com/hpcaitech/ColossalAI/actions/workflows/PR_CI.yml/badge.svg)](https://github.com/hpcaitech/ColossalAI/actions/workflows/PR_CI.yml)
   [![Documentation](https://readthedocs.org/projects/colossalai/badge/?version=latest)](https://colossalai.readthedocs.io/en/latest/?badge=latest)
   [![codebeat badge](https://codebeat.co/badges/bfe8f98b-5d61-4256-8ad2-ccd34d9cc156)](https://codebeat.co/projects/github-com-hpcaitech-colossalai-main)
   [![slack badge](https://img.shields.io/badge/Slack-join-blueviolet?logo=slack&amp)](https://join.slack.com/t/colossalaiworkspace/shared_invite/zt-z7b26eeb-CBp7jouvu~r0~lcFzX832w)
   [![WeChat badge](https://img.shields.io/badge/微信-加入-green?logo=wechat&amp)](./docs/images/WeChat.png)

   | [English](README.md) | [中文](README-zh-Hans.md) |
</div>
An integrated large-scale model training system with efficient parallelization techniques.


## Features

Colossal-AI provides a collection of parallel training components. We aim to let you write your distributed deep learning models just as you would write a single-GPU model, and we provide friendly tools to kickstart distributed training in a few lines (a minimal configuration sketch follows the feature list).

- Data parallelism
- Pipeline parallelism
- 1D, 2D, 2.5D, 3D tensor parallelism
- Sequence parallelism
- Friendly trainer and engine
- Extensible for new parallelism methods
- Mixed precision training
- Zero Redundancy Optimizer (ZeRO)
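
Most of these features are switched on through a configuration file rather than through changes to your model code. Below is a minimal, hypothetical configuration sketch combining pipeline parallelism, 2D tensor parallelism, and mixed precision; the exact keys and import paths may differ across versions, so treat it as an illustration and check the [documentation](https://www.colossalai.org/) for the authoritative options.

```python
# config.py -- a minimal, hypothetical configuration sketch.
# Exact keys may differ between Colossal-AI versions; consult the docs.
from colossalai.amp import AMP_TYPE

# Hybrid parallelism: 2 pipeline stages and 4-way 2D tensor parallelism,
# which assumes 8 GPUs in total (2 x 4).
parallel = dict(
    pipeline=2,
    tensor=dict(size=4, mode='2d'),
)

# Mixed precision training with PyTorch native AMP.
fp16 = dict(mode=AMP_TYPE.TORCH)
```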

## Examples
### ViT
<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/ViT.png" width="450" />

- 14x larger batch size, and 5x faster training with tensor parallelism = 64

### GPT-3
<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/GPT3.png" width=700/>

- Save 50% of GPU resources, or gain 10.7% acceleration

### GPT-2
<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/GPT2.png" width=800/>

- 11x lower GPU memory consumption, or superlinear scaling

### BERT
<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/BERT.png" width=800/>

- 2x faster training, or 50% longer sequence length

Please visit our [documentation and tutorials](https://www.colossalai.org/) for more details.


## Installation

### PyPI

```bash
pip install colossalai
```
This command will install the CUDA extension if you have already installed CUDA, NVCC, and PyTorch.

If you don't want to install the CUDA extension, add `--global-option="--no_cuda_ext"`:
```bash
pip install colossalai --global-option="--no_cuda_ext"
```

If you want to use `ZeRO`, you can run:
```bash
pip install colossalai[zero]
```

### Install From Source

> The version of Colossal-AI installed from source is in line with the main branch of the repository. Feel free to raise an issue if you encounter any problems. :)

```shell
git clone https://github.com/hpcaitech/ColossalAI.git
cd ColossalAI
# install dependency
pip install -r requirements/requirements.txt

# install colossalai
pip install .
```

If you don't want to install and enable CUDA kernel fusion (installation is compulsory when using a fused optimizer):

```shell
pip install --global-option="--no_cuda_ext" .
```

## Use Docker

Run the following command to build a Docker image from the Dockerfile provided.

```bash
cd ColossalAI
docker build -t colossalai ./docker
```

Run the following command to start the Docker container in interactive mode.

```bash
docker run -ti --gpus all --rm --ipc=host colossalai bash
```


## Community

Join the Colossal-AI community on [Forum](https://github.com/hpcaitech/ColossalAI/discussions),
[Slack](https://join.slack.com/t/colossalaiworkspace/shared_invite/zt-z7b26eeb-CBp7jouvu~r0~lcFzX832w),
and [WeChat](./docs/images/WeChat.png "qrcode") to share your suggestions, advice, and questions with our engineering team.



## Contributing

If you wish to contribute to this project, please follow the guidelines in [Contributing](./CONTRIBUTING.md).

Thanks so much to all of our amazing contributors!

<a href="https://github.com/hpcaitech/ColossalAI/graphs/contributors"><img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/contributor_avatar.png" width="800px"></a>

*The order of contributor avatars is randomly shuffled.*

## Quick View

### Start Distributed Training in a Few Lines

```python
import colossalai
from colossalai.utils import get_dataloader


# my_config can be a path to a config file or a dictionary object
# 'localhost' works only for a single node; specify the node name
# when using multiple nodes
colossalai.launch(
    config=my_config,
    rank=rank,
    world_size=world_size,
    backend='nccl',
    port=29500,
    host='localhost'
)

# build your model
model = ...

# build your dataset, the dataloader will have distributed data
# sampler by default
train_dataset = ...
train_dataloader = get_dataloader(dataset=train_dataset, shuffle=True)


# build your optimizer
optimizer = ...

# build your loss function
criterion = ...

# initialize colossalai
engine, train_dataloader, _, _ = colossalai.initialize(
    model=model,
    optimizer=optimizer,
    criterion=criterion,
    train_dataloader=train_dataloader
)

# start training
engine.train()
for epoch in range(NUM_EPOCHS):
    for data, label in train_dataloader:
        engine.zero_grad()
        output = engine(data)
        loss = engine.criterion(output, label)
        engine.backward(loss)
        engine.step()

```
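
The snippet above passes `rank` and `world_size` to `colossalai.launch` explicitly. As a minimal sketch (assuming the script is started with a standard PyTorch launcher such as `torchrun`, which exports these values as environment variables), they could be obtained like this; the documentation also describes convenience launch utilities that read them for you.

```python
# A minimal sketch, assuming a standard PyTorch launcher (e.g. torchrun)
# sets RANK and WORLD_SIZE in the environment of every worker process.
import os

rank = int(os.environ['RANK'])
world_size = int(os.environ['WORLD_SIZE'])
```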

### Write a Simple 2D Parallel Model

Let's say we have a huge MLP model whose very large hidden size makes it difficult to fit into a single GPU. We can then distribute the model weights across GPUs in a 2D mesh while still writing the model in a familiar way.

```python
from colossalai.nn import Linear2D
import torch.nn as nn


class MLP_2D(nn.Module):

    def __init__(self):
        super().__init__()
        self.linear_1 = Linear2D(in_features=1024, out_features=16384)
        self.linear_2 = Linear2D(in_features=16384, out_features=1024)

    def forward(self, x):
        x = self.linear_1(x)
        x = self.linear_2(x)
        return x

```
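
For the `Linear2D` layers to actually be partitioned, 2D tensor parallelism must be enabled in the configuration. Below is a minimal, hypothetical sketch (assuming 4 GPUs arranged as a 2 x 2 mesh; exact keys may vary by version, see the documentation).

```python
# config.py -- a minimal, hypothetical sketch for 2D tensor parallelism.
# The 2D mode requires the tensor-parallel size to be a perfect square,
# here 4 GPUs arranged as a 2 x 2 mesh.
parallel = dict(
    tensor=dict(size=4, mode='2d'),
)
```

With such a configuration, `MLP_2D` is built and trained just like the single-GPU model in the quick view above, while each GPU holds only a quarter of each weight matrix.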

## Cite Us

```bibtex
@article{bian2021colossal,
  title={Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training},
  author={Bian, Zhengda and Liu, Hongxin and Wang, Boxiang and Huang, Haichen and Li, Yongbin and Wang, Chuanrui and Cui, Fan and You, Yang},
  journal={arXiv preprint arXiv:2110.14883},
  year={2021}
}
```