"README-zh-Hans.md" did not exist on "578ea0583be9b29ddc6ccb4a69ca5f4fbf215346"
README.md 4.56 KB
Newer Older
1
# Colossal-AI

[![logo](./docs/images/Colossal-AI_logo.png)](https://www.colossalai.org/)

<div align="center">
   <h3> <a href="https://arxiv.org/abs/2110.14883"> Paper </a> | <a href="https://www.colossalai.org/"> Documentation </a> | <a href="https://github.com/hpcaitech/ColossalAI/discussions"> Forum </a> | <a href="https://medium.com/@hpcaitech"> Blog </a></h3>
   
   [![Build](https://github.com/hpcaitech/ColossalAI/actions/workflows/PR_CI.yml/badge.svg)](https://github.com/hpcaitech/ColossalAI/actions/workflows/PR_CI.yml)
   [![Documentation](https://readthedocs.org/projects/colossalai/badge/?version=latest)](https://colossalai.readthedocs.io/en/latest/?badge=latest)
</div>
An integrated large-scale model training system with efficient parallelization techniques.

## Installation

### Install From Source (Recommended)

> We **recommend** that you install from source, as Colossal-AI is updated frequently in these early versions. The documentation stays in line with the main branch of the repository. Feel free to raise an issue if you encounter any problems. :)

```shell
git clone https://github.com/hpcaitech/ColossalAI.git
cd ColossalAI
# install dependencies
pip install -r requirements/requirements.txt

# install colossalai
pip install .
```

Install and enable CUDA kernel fusion (required when using fused optimizers):

```shell
pip install -v --no-cache-dir --global-option="--cuda_ext" .
```
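
If the extensions are built, the fused optimizers become available as drop-in replacements for their PyTorch counterparts. A minimal sketch of using one, assuming the `FusedAdam` import path below (an assumption; check the documentation for the exact location):

```python
import torch.nn as nn

from colossalai.nn.optimizer import FusedAdam  # assumed import path; requires the CUDA extensions above

model = nn.Linear(1024, 1024).cuda()
# the fused kernel applies the whole element-wise Adam update in one CUDA launch
optimizer = FusedAdam(model.parameters(), lr=1e-3)
```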

### Install From PyPI

```bash
pip install colossalai
```

## Documentation

- [Documentation](https://www.colossalai.org/)


## Use Docker

Run the following command to build a Docker image from the provided Dockerfile:

```bash
cd ColossalAI
docker build -t colossalai ./docker
```

Run the following command to start a Docker container in interactive mode:

```bash
docker run -ti --gpus all --rm --ipc=host colossalai bash
```

## Quick View

### Start Distributed Training in a Few Lines

```python
import colossalai
from colossalai.utils import get_dataloader


# my_config can be a path to a config file or a dictionary object
# 'localhost' is only for single node; you need to specify
# the node name when using multiple nodes
colossalai.launch(
    config=my_config,
    rank=rank,
    world_size=world_size,
    backend='nccl',
    port=29500,
    host='localhost'
)

# build your model
model = ...

# build your dataset; the dataloader will have a distributed data
# sampler by default
train_dataset = ...
train_dataloader = get_dataloader(dataset=train_dataset,
                                  shuffle=True)


# build your optimizer
optimizer = ...

# build your loss function
criterion = ...

# initialize colossalai; the returned engine wraps your model,
# optimizer and criterion for distributed execution
engine, train_dataloader, _, _ = colossalai.initialize(
    model=model,
    optimizer=optimizer,
    criterion=criterion,
    train_dataloader=train_dataloader
)

# start training
engine.train()
for epoch in range(NUM_EPOCHS):
    for data, label in train_dataloader:
        engine.zero_grad()
        output = engine(data)
        loss = engine.criterion(output, label)
        engine.backward(loss)
        engine.step()

```
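
For reference, `my_config` in the snippet above can be a plain dictionary (or a path to a `.py` config file). A minimal sketch of what it might contain; the exact keys (`fp16`, `parallel`, `mode`) are assumptions based on the documentation and may change between versions:

```python
# hypothetical config; consult the documentation for the exact schema
my_config = dict(
    # mixed precision training (the key names here are assumptions)
    fp16=dict(mode='torch'),
    # shard tensors over 4 GPUs with the 2D mode, no pipeline stages
    parallel=dict(
        pipeline=1,
        tensor=dict(size=4, mode='2d'),
    ),
)
```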

### Write a Simple 2D Parallel Model

Let's say we have a huge MLP model whose very large hidden size makes it difficult to fit into a single GPU. We can
then distribute the model weights across GPUs in a 2D mesh, while you still write your model in a familiar way.

```python
import torch.nn as nn

from colossalai.nn import Linear2D


class MLP_2D(nn.Module):

    def __init__(self):
        super().__init__()
        # each Linear2D layer shards its weight across the 2D device mesh
        self.linear_1 = Linear2D(in_features=1024, out_features=16384)
        self.linear_2 = Linear2D(in_features=16384, out_features=1024)

    def forward(self, x):
        x = self.linear_1(x)
        x = self.linear_2(x)
        return x

```
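
Note that `Linear2D` shards each weight matrix across a square mesh of GPUs, so the tensor-parallel process group must be configured for the 2D mode at launch time. A sketch under the same assumption about config keys as above (4 GPUs forming a 2 x 2 mesh):

```python
# assumed config keys; the 2D mode expects the tensor-parallel size
# to be a perfect square, e.g. 4 GPUs arranged as a 2 x 2 mesh
parallel = dict(
    tensor=dict(size=4, mode='2d'),
)
```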

## Features

Colossal-AI provides a collection of parallel training components for you. We aim to support you in writing your
distributed deep learning models just like you write your single-GPU model. We provide friendly tools to kickstart
distributed training in a few lines.

- Data Parallelism
- Pipeline Parallelism
- 1D, 2D, 2.5D, 3D and sequence parallelism
- Friendly trainer and engine
- Extensible for new parallelism
- Mixed Precision Training
- Zero Redundancy Optimizer (ZeRO)

Please visit our [documentation and tutorials](https://www.colossalai.org/) for more details.

## Cite Us

```bibtex
@article{bian2021colossal,
  title={Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training},
  author={Bian, Zhengda and Liu, Hongxin and Wang, Boxiang and Huang, Haichen and Li, Yongbin and Wang, Chuanrui and Cui, Fan and You, Yang},
  journal={arXiv preprint arXiv:2110.14883},
  year={2021}
}
```