"vscode:/vscode.git/clone" did not exist on "f5843099d895e72c27ffa9d29cc91dd8df7f3832"
README.md 7.96 KB
Newer Older
Vittorio Caggiano's avatar
Vittorio Caggiano committed
1
2
![FairScale Logo](./docs/source/_static/img/fairscale-logo.png)

![PyPI](https://img.shields.io/pypi/v/fairscale)
[![Documentation Status](https://readthedocs.org/projects/fairscale/badge/?version=latest)](https://fairscale.readthedocs.io/en/latest/?badge=latest)
[![CircleCI](https://circleci.com/gh/facebookresearch/fairscale.svg?style=shield)](https://app.circleci.com/pipelines/github/facebookresearch/fairscale/) ![PyPI - License](https://img.shields.io/pypi/l/fairscale) [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/facebookresearch/fairscale/blob/main/CONTRIBUTING.md)
--------------------------------------------------------------------------------

## Description
FairScale is a PyTorch extension library for high performance and large scale training.
This library extends basic PyTorch capabilities while adding new SOTA scaling techniques.
FairScale makes the latest distributed training techniques available in the form of composable
modules and easy-to-use APIs. These APIs are a fundamental part of a researcher's toolbox as
they attempt to scale models with limited resources.

FairScale was designed with the following values in mind:

* **Usability** -  Users should be able to understand and use FairScale APIs with minimum cognitive overload.

* **Modularity** - Users should be able to combine multiple FairScale APIs as part of their training loop seamlessly.

* **Performance** - FairScale APIs provide the best performance in terms of scaling and efficiency.

## What's New:

* September 2021 [`master` branch renamed to `main`](https://github.com/github/renaming).
* September 2021 [fairscale 0.4.1 was released](https://github.com/facebookresearch/fairscale/releases/tag/v0.4.1).

## Installation

To install FairScale, please see the following [instructions](https://github.com/facebookresearch/fairscale/blob/main/docs/source/installation_instructions.rst). You should be able to install it as a pip package or build it directly from source.
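
For example, installing the latest release from PyPI should normally be as simple as:

```
pip install fairscale
```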

## Getting Started
The full [documentation](https://fairscale.readthedocs.io/) contains instructions for getting started, deep dives and tutorials about the various FairScale APIs.

## Examples

Here are a few sample snippets from a subset of FairScale offerings:

### Pipe

Run a 4-layer model on 2 GPUs. The first two layers run on cuda:0 and the next two layers run on cuda:1.

```python
import torch

import fairscale

# a, b, c and d are the four layers of the model; any nn.Module works here.
# Simple Linear layers are used only so that the snippet runs as written.
a, b, c, d = [torch.nn.Linear(10, 10) for _ in range(4)]

model = torch.nn.Sequential(a, b, c, d)
model = fairscale.nn.Pipe(model, balance=[2, 2], devices=[0, 1], chunks=8)
```
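
A rough usage sketch (the input tensor here is hypothetical, sized to match the Linear layers above): Pipe expects the input on the first device, and the output is produced on the last one.

```python
x = torch.randn(8, 10).cuda(0)  # hypothetical batch on the first device
y = model(x)                    # a, b run on cuda:0; c, d on cuda:1; y lands on cuda:1
```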

### Optimizer state sharding (ZeRO)
See a more complete example [here](https://github.com/facebookresearch/fairscale/blob/main/benchmarks/oss.py), but a minimal example could look like the following:

```python
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from fairscale.optim.oss import OSS
from fairscale.nn.data_parallel import ShardedDataParallel as ShardedDDP


def train(
    rank: int,
    world_size: int,
    epochs: int):

    # DDP init example
    dist.init_process_group(backend='nccl', init_method="tcp://localhost:29501", rank=rank, world_size=world_size)

    # Problem statement
    model = myAwesomeModel().to(rank)
    dataloader = mySuperFastDataloader()
    loss_fn = myVeryRelevantLoss()

    base_optimizer = torch.optim.SGD  # pick any pytorch compliant optimizer here
    base_optimizer_arguments = {}  # pass any optimizer specific arguments here, or directly below when instantiating OSS

    # Wrap the optimizer in its state sharding brethren
    optimizer = OSS(params=model.parameters(), optim=base_optimizer, **base_optimizer_arguments)

    # Wrap the model into ShardedDDP, which will reduce gradients to the proper ranks
    model = ShardedDDP(model, optimizer)

    # Any relevant training loop, nothing specific to OSS. For example:
    model.train()
    for e in range(epochs):
        for batch in dataloader:
            # Train
            model.zero_grad()
            outputs = model(batch["inputs"])
            loss = loss_fn(outputs, batch["label"])
            loss.backward()
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    # Supposing that WORLD_SIZE and EPOCHS are somehow defined somewhere
    mp.spawn(
        train,
        args=(
            WORLD_SIZE,
            EPOCHS,
        ),
        nprocs=WORLD_SIZE,
        join=True,
    )
```
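
Because each rank only holds a shard of the optimizer state, checkpointing needs one extra step. A hedged sketch, assuming OSS's `consolidate_state_dict` helper (check the API documentation for the exact signature):

```python
# Gather the sharded optimizer state onto rank 0 before saving (assumed API).
optimizer.consolidate_state_dict(recipient_rank=0)
if rank == 0:
    torch.save({"model": model.state_dict(), "optim": optimizer.state_dict()}, "checkpoint.pt")
```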

### AdaScale SGD

AdaScale can be used to wrap an SGD optimizer, either in DDP (Distributed Data Parallel)
training or in non-DDP training with gradient accumulation. The benefit is being able to re-use the same LR
schedule from a baseline batch size when the effective batch size is bigger.

Note that AdaScale does _not_ help increase per-GPU batch size.

```python
from torch.optim import SGD
from torch.optim.lr_scheduler import LambdaLR  # or your scheduler
from fairscale.optim import AdaScale

...
optim = AdaScale(SGD(model.parameters(), lr=0.1))
scheduler = LambdaLR(optim, ...)
...
# Note: the train loop should be with DDP or with gradient accumulation.
last_epoch = 0
step = 0
done = False
while not done:
    for sample in dataset:
        ...
        step += optim.gain()
        optim.step()
        epoch = step // len(dataset)
        if last_epoch != epoch:
            scheduler.step()
            last_epoch = epoch
        if epoch > max_epoch:
            done = True
```
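
The loop above assumes DDP. For the non-DDP case with gradient accumulation mentioned earlier, a minimal sketch could look like the following (assuming AdaScale's `num_gradients_to_accumulate` argument, and hypothetical `model`, `loss_fn`, and `dataset` objects):

```python
from torch.optim import SGD
from fairscale.optim import AdaScale

# Accumulate 4 micro-batches per optimizer step, i.e. a 4x effective batch size.
optim = AdaScale(SGD(model.parameters(), lr=0.1), num_gradients_to_accumulate=4)

for i, batch in enumerate(dataset):
    loss = loss_fn(model(batch["inputs"]), batch["label"])
    loss.backward()       # AdaScale gathers gradient statistics on each backward
    if (i + 1) % 4 == 0:  # step once every 4 backward passes
        optim.step()
        optim.zero_grad()
```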

The primary goal is to allow scaling to bigger batch sizes without losing model accuracy.
(However, training time might be longer compared to training without AdaScale.)

At a high level, we want ML researchers to:
  * go parallel more easily (i.e. no need to find new learning rate schedules)
  * not worry about losing accuracy
  * potentially get higher GPU efficiency (fewer steps, less networking overhead, etc.)

## Testing

We use CircleCI to test on PyTorch versions 1.6.0, 1.7.1, and 1.8.1. Please create an [issue](https://github.com/facebookresearch/fairscale/issues) if you are having trouble with installation.

## Contributors

We welcome outside contributions! Please see the [CONTRIBUTING](CONTRIBUTING.md) instructions for how you can contribute to FairScale.

## License

FairScale is licensed under the [BSD-3-Clause License](LICENSE).

fairscale.nn.pipe is forked from [torchgpipe](https://github.com/kakaobrain/torchgpipe), Copyright 2019, Kakao Brain, licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0).

fairscale.nn.model_parallel is forked from [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), Copyright 2020, NVIDIA CORPORATION, licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0).

fairscale.optim.adascale is forked from [AdaptDL](https://github.com/petuum/adaptdl), Copyright 2020, Petuum, Inc., licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0).

fairscale.nn.misc.flatten_params_wrapper is forked from [PyTorch-Reparam-Module](https://github.com/SsnL/PyTorch-Reparam-Module), Copyright 2018, Tongzhou Wang, licensed under [MIT License](https://github.com/SsnL/PyTorch-Reparam-Module/blob/master/LICENSE).

## Citing FairScale

If you use FairScale in your publication, please cite it by using the following BibTeX entry.

```BibTeX
@Misc{FairScale2021,
  author =       {Mandeep Baines and Shruti Bhosale and Vittorio Caggiano and Naman Goyal and Siddharth Goyal and Myle Ott and Benjamin Lefaudeux and Vitaliy Liptchinsky and Mike Rabbat and Sam Sheiffer and Anjali Sridhar and Min Xu},
  title =        {FairScale: A general purpose modular PyTorch library for high performance and large scale training},
  howpublished = {\url{https://github.com/facebookresearch/fairscale}},
  year =         {2021}
}
```

## FAQ
1. If you experience an error indicating that a default branch does not exist, it is probably due to the recent update switching the default branch from "master" to "main":
```
error: pathspec 'non-existing-branch' did not match any file(s) known to git
```
Please run the following commands to switch your local clone to the main branch.
```
git branch -m master main
git fetch origin
git branch -u origin/main main
git remote set-head origin -a
```