# fairscale
fairscale is a PyTorch extension library for high-performance and large-scale training.

fairscale supports:
* pipeline parallelism (fairscale.nn.Pipe)
* tensor parallelism (fairscale.nn.model_parallel); see the sketch after this list
* optimizer state sharding (fairscale.optim.oss)
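
A rough sketch of the tensor parallel piece, assuming the Megatron-LM style API that fairscale.nn.model_parallel is forked from (names such as `ColumnParallelLinear`, `RowParallelLinear` and `initialize_model_parallel` may differ slightly between versions). It is meant to be launched with one process per GPU, with torch.distributed available:

```python
# Rough sketch only: splits a small MLP column-/row-wise across 2 GPUs.
import torch
import torch.distributed as dist

from fairscale.nn.model_parallel.initialize import initialize_model_parallel
from fairscale.nn.model_parallel.layers import ColumnParallelLinear, RowParallelLinear

# Assumes RANK/WORLD_SIZE/MASTER_ADDR/MASTER_PORT are set, e.g. by torch.distributed.launch
dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
initialize_model_parallel(2)  # shard tensors across a model-parallel group of size 2

model = torch.nn.Sequential(
    ColumnParallelLinear(1024, 4096, gather_output=False),  # weight columns split across ranks
    torch.nn.ReLU(),
    RowParallelLinear(4096, 1024, input_is_parallel=True),   # weight rows split, output reduced
).cuda()
```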

## Examples
### Pipe

Run a 4-layer model on 2 GPUs. The first two layers run on cuda:0 and the next two layers run on cuda:1.

```python
import torch

import fairscale

# four example layers to place across the two GPUs
a, b, c, d = [torch.nn.Linear(10, 10) for _ in range(4)]

model = torch.nn.Sequential(a, b, c, d)
model = fairscale.nn.Pipe(model, balance=[2, 2], devices=[0, 1], chunks=8)
```
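
Pipe expects its input on the device of the first partition (cuda:0 here) and returns its output on the device of the last partition (cuda:1 here), so the target has to be moved there before computing the loss. A minimal sketch of one training step, continuing from the snippet above with random tensors standing in for real data:

```python
# Minimal sketch of a single training step with the pipelined model above.
inputs = torch.randn(32, 10)
targets = torch.randn(32, 10)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

outputs = model(inputs.to("cuda:0"))  # input goes to the first partition's device
loss = torch.nn.functional.mse_loss(outputs, targets.to("cuda:1"))  # output lives on the last device
optimizer.zero_grad()
loss.backward()
optimizer.step()
```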

### Optimizer state sharding (ZeRO)
See a more complete example [here](https://github.com/facebookresearch/fairscale/blob/master/benchmarks/oss.py), but a minimal example could look like the following:

```python
import torch
import torch.multiprocessing as mp

from fairscale.optim.oss import OSS

def train(
    rank: int,
    world_size: int,
    epochs: int):

    # DDP: initialize the torch.distributed process group
    # (dist_init is a user-provided helper; a sketch is given after this example)
    dist_init(rank, world_size)

    # Problem statement
    model = myAwesomeModel()
    dataloader = mySuperFastDataloader()
    loss_fn = myVeryRelevantLoss()
    base_optimizer = torch.optim.SGD  # pick any PyTorch-compliant optimizer here
    base_optimizer_arguments = {}  # pass any optimizer-specific arguments here, or directly below when instantiating OSS

    optimizer = OSS(params=model.parameters(), optim=base_optimizer, **base_optimizer_arguments)

    # Any relevant training loop, nothing specific to OSS. For example:
    model.train()
    for e in range(epochs):
        for batch in dataloader:
            # Train
            model.zero_grad()
            outputs = model(batch["inputs"])
            loss = loss_fn(outputs, batch["label"])
            torch.distributed.all_reduce(loss, op=torch.distributed.ReduceOp.SUM)
            loss /= world_size
            loss.backward()
            optimizer.step()

if __name__ == "__main__":
    # Assuming WORLD_SIZE and EPOCHS are defined elsewhere
    mp.spawn(
        train,
        args=(
            WORLD_SIZE,
            EPOCHS,
        ),
        nprocs=WORLD_SIZE,
        join=True,
    )
```
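
The `dist_init` helper above is not part of fairscale; it just needs to initialize the torch.distributed process group before OSS is constructed. One possible implementation, assuming a single-machine run and an arbitrary free port:

```python
import os

import torch.distributed as dist

def dist_init(rank: int, world_size: int, backend: str = "gloo") -> None:
    # All spawned processes rendezvous at the same address and port.
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29501"
    dist.init_process_group(backend=backend, rank=rank, world_size=world_size)
```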


## Requirements

* PyTorch >= 1.4

## Installation

Normal installation:
```bash
pip install fairscale
```

Development mode:
```bash
cd fairscale
pip install -r requirements.txt
pip install -e .
```
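
Either way, a quick way to check the install (assuming the installed package exposes `__version__`, which recent releases do):

```python
# Sanity check that fairscale is importable after installation.
import fairscale

print(fairscale.__version__)
```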

## Testing

We use CircleCI to test fairscale with PyTorch versions 1.5.1 and 1.6.0 and CUDA 10.1. Please create an [issue](https://github.com/facebookresearch/fairscale/issues) if you are having trouble with installation.

## Contributors

See the [CONTRIBUTING](CONTRIBUTING.md) file for how to help out.

## License

fairscale is licensed under the [BSD-3-Clause License](LICENSE).

fairscale.nn.pipe is forked from [torchgpipe](https://github.com/kakaobrain/torchgpipe), Copyright 2019, Kakao Brain, licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0).

fairscale.nn.model_parallel is forked from [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), Copyright 2020, NVIDIA CORPORATION, licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0).

## References

Here is a list of the authors of the research papers this work is based on:

* torchgpipe: Chiheon Kim, Heungsub Lee, Myungryong Jeong, Woonhyuk Baek, Boogeon Yoon, Ildoo Kim, Sungbin Lim, Sungwoong Kim. [[Paper](https://arxiv.org/pdf/2004.09910.pdf)] [[Code](https://github.com/kakaobrain/torchgpipe)]
* ZeRO: Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He. [[Paper](https://arxiv.org/pdf/1910.02054.pdf)] [[Code](https://github.com/microsoft/DeepSpeed)]
* Megatron-LM: Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, Bryan Catanzaro. [[Paper](https://arxiv.org/pdf/1909.08053.pdf)] [[Code](https://github.com/NVIDIA/Megatron-LM)]