"tests/vscode:/vscode.git/clone" did not exist on "efed9cee0114e80955f3478175ff45daafd62479"
README.md 9.49 KB
Newer Older
Vittorio Caggiano's avatar
Vittorio Caggiano committed
1
2
![FairScale Logo](./docs/source/_static/img/fairscale-logo.png)

![PyPI](https://img.shields.io/pypi/v/fairscale)
[![Documentation Status](https://readthedocs.org/projects/fairscale/badge/?version=latest)](https://fairscale.readthedocs.io/en/latest/?badge=latest)
[![CircleCI](https://circleci.com/gh/facebookresearch/fairscale.svg?style=shield)](https://app.circleci.com/pipelines/github/facebookresearch/fairscale/) ![PyPI - License](https://img.shields.io/pypi/l/fairscale) [![Downloads](https://pepy.tech/badge/fairscale)](https://pepy.tech/project/fairscale) [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/facebookresearch/fairscale/blob/main/CONTRIBUTING.md)
--------------------------------------------------------------------------------

## Description
FairScale is a PyTorch extension library for high performance and large scale training.
This library extends basic PyTorch capabilities while adding new SOTA scaling techniques.
FairScale makes available the latest distributed training techniques in the form of composable
modules and easy-to-use APIs. These APIs are a fundamental part of a researcher's toolbox when
attempting to scale models with limited resources.

FairScale was designed with the following values in mind:

* **Usability** - Users should be able to understand and use FairScale APIs with minimal cognitive overload.

* **Modularity** - Users should be able to seamlessly combine multiple FairScale APIs in their training loop.

* **Performance** - FairScale APIs provide the best performance in terms of scaling and efficiency.

## Watch Introductory Video

[![Explain Like I’m 5: FairScale](https://img.youtube.com/vi/oDt7ebOwWIc/0.jpg)](https://www.youtube.com/watch?v=oDt7ebOwWIc)

## What's New:

* January 2022 [fairscale 0.4.5 was released](https://github.com/facebookresearch/fairscale/releases/tag/v0.4.5).
  * We have experimental support for layer-wise gradient scaling.
  * We enabled overlapping the reduce_scatter operation with computation in FSDP backward propagation.
* December 2021 [fairscale 0.4.4 was released](https://github.com/facebookresearch/fairscale/releases/tag/v0.4.4).
  * FairScale is tested with the following PyTorch versions (with CUDA 11.2): 1.8.1, 1.10.0 and 1.11.0.dev20211101+cu111.
* November 2021 [fairscale 0.4.3 was released](https://github.com/facebookresearch/fairscale/releases/tag/v0.4.3).
  * We have experimental support for offloading params to disk when using the FSDP API for evaluation workloads.
  * We have an experimental layer that fuses multiple layers together to support training with large vocab sizes.
* November 2021 [fairscale 0.4.2 was released](https://github.com/facebookresearch/fairscale/releases/tag/v0.4.2).
  * We have a new experimental API called the LayerwiseMemoryTracker to help track, visualize and suggest fixes for memory issues occurring during the forward/backward pass of your models.
  * Introducing the SlowMoDistributedDataParallel API, a distributed training wrapper that is useful on clusters with slow network interconnects (e.g. Ethernet).
* September 2021 [`master` branch renamed to `main`](https://github.com/github/renaming).

## Installation

To install FairScale, please see the following [instructions](https://github.com/facebookresearch/fairscale/blob/main/docs/source/installation_instructions.rst).
You can install FairScale as a package with pip or conda, or build it directly from source.
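For example, a typical pip-based install is just the following (assuming a working PyTorch environment is already set up; see the instructions above for version details):
```
pip install fairscale
```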

## Getting Started
The full [documentation](https://fairscale.readthedocs.io/) contains instructions for getting started, deep dives and tutorials about the various FairScale APIs.

## Examples

Here are a few sample snippets from a subset of FairScale offerings:

### Pipe

Run a 4-layer model on 2 GPUs. The first two layers run on cuda:0 and the next two layers run on cuda:1.

```python
import torch
import torch.nn as nn

import fairscale

# Define the four layers; any nn.Module instances work here.
a, b, c, d = nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8), nn.ReLU()

model = nn.Sequential(a, b, c, d)
# balance=[2, 2] places the first two layers on devices[0] and the last two
# on devices[1]; each batch is split into 8 micro-batches for pipelining.
model = fairscale.nn.Pipe(model, balance=[2, 2], devices=[0, 1], chunks=8)
```
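After wrapping, the pipe behaves like a regular module; here is a minimal forward/backward sketch, assuming the toy 8-feature layers above and two available GPUs:

```python
inputs = torch.rand(16, 8).to("cuda:0")  # the batch enters on the first device
outputs = model(inputs)                  # the output is produced on the last device
loss = outputs.sum()                     # placeholder loss, purely for illustration
loss.backward()
```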

### Optimizer state sharding (ZeRO)
See a more complete example [here](https://github.com/facebookresearch/fairscale/blob/main/benchmarks/oss.py), but a minimal example could look like the following:

```python
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

from fairscale.optim.oss import OSS
from fairscale.nn.data_parallel import ShardedDataParallel as ShardedDDP


def train(rank: int, world_size: int, epochs: int):
    # DDP init example
    dist.init_process_group(backend="nccl", init_method="tcp://localhost:29501", rank=rank, world_size=world_size)

    # Problem statement
    model = myAwesomeModel().to(rank)
    dataloader = mySuperFastDataloader()
    loss_fn = myVeryRelevantLoss()

    base_optimizer = torch.optim.SGD  # pick any PyTorch-compliant optimizer here
    base_optimizer_arguments = {}  # pass any optimizer-specific arguments here, or directly below when instantiating OSS

    # Wrap the optimizer in its state-sharding brethren
    optimizer = OSS(params=model.parameters(), optim=base_optimizer, **base_optimizer_arguments)

    # Wrap the model into ShardedDDP, which will reduce gradients to the proper ranks
    model = ShardedDDP(model, optimizer)

    # Any relevant training loop, nothing specific to OSS. For example:
    model.train()
    for e in range(epochs):
        for batch in dataloader:
            # Train
            model.zero_grad()
            outputs = model(batch["inputs"])
            loss = loss_fn(outputs, batch["label"])
            loss.backward()
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    # Supposing that WORLD_SIZE and EPOCHS are somehow defined somewhere
    mp.spawn(
        train,
        args=(
            WORLD_SIZE,
            EPOCHS,
        ),
        nprocs=WORLD_SIZE,
        join=True,
    )
```
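Note that each rank only holds its own shard of the optimizer state. To save a full checkpoint, the shards must first be gathered on one rank; a minimal sketch, assuming OSS's `consolidate_state_dict` helper and the `rank`/`model`/`optimizer` variables from the example above:

```python
# Gather the sharded optimizer state onto rank 0 before saving.
optimizer.consolidate_state_dict(recipient_rank=0)
if rank == 0:
    torch.save(
        {"model": model.state_dict(), "optim": optimizer.state_dict()},
        "checkpoint.pt",
    )
```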

### AdaScale SGD

AdaScale can be used to wrap an SGD optimizer, either in DDP (Distributed Data Parallel)
training or in non-DDP training with gradient accumulation. The benefit is being able to reuse
the same LR schedule from a baseline batch size when the effective batch size is bigger.

Note that AdaScale does _not_ help increase per-GPU batch size.

```python
from torch.optim import SGD
from torch.optim.lr_scheduler import LambdaLR  # or your scheduler
from fairscale.optim import AdaScale

...
optim = AdaScale(SGD(model.parameters(), lr=0.1))
scheduler = LambdaLR(optim, ...)
...
# Note: the train loop should be with DDP or with gradient accumulation.
last_epoch = 0
step = 0
done = False
while not done:
    for sample in dataset:
        ...
        # gain() estimates how many baseline-batch-size steps this step is
        # worth, so `step` (and hence the LR schedule) advances accordingly.
        step += optim.gain()
        optim.step()
        epoch = step // len(dataset)
        if last_epoch != epoch:
            scheduler.step()
            last_epoch = epoch
        if epoch > max_epoch:
            done = True
```
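In the non-DDP case, gradient accumulation emulates a larger batch by stepping only every few backward passes. A minimal sketch, reusing the `model` and `dataset` placeholders above and assuming the `num_gradients_to_accumulate` constructor argument (check the AdaScale API reference for your release):

```python
from torch.optim import SGD
from fairscale.optim import AdaScale

# Accumulate 4 micro-batches per optimizer step, emulating a 4x larger batch.
optim = AdaScale(SGD(model.parameters(), lr=0.1), num_gradients_to_accumulate=4)

for i, sample in enumerate(dataset):
    loss = model(sample).sum()  # placeholder loss, purely for illustration
    loss.backward()
    if (i + 1) % 4 == 0:  # step only after 4 accumulated backward passes
        optim.step()
        optim.zero_grad()
```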

The primary goal is to allow scaling to bigger batch sizes without losing model accuracy.
(However, training time might be longer compared to training without AdaScale.)

At a high level, we want ML researchers to:
  * go parallel more easily (i.e. no need to find new learning rate schedules)
  * not worry about losing accuracy
  * potentially get higher GPU efficiency (fewer steps, less networking overhead, etc.)

## Testing

We use CircleCI to test FairScale with the following PyTorch versions (with CUDA 11.2):
* the latest stable release (1.10.0)
* the latest LTS release (1.8.1)
* a recent nightly release (1.11.0.dev20211101+cu111)

Please create an [issue](https://github.com/facebookresearch/fairscale/issues) if you are having trouble with installation.

## Contributors

We welcome outside contributions! Please see the [CONTRIBUTING](CONTRIBUTING.md) instructions for how you can contribute to FairScale.

## License

FairScale is licensed under the [BSD-3-Clause License](LICENSE).

fairscale.nn.pipe is forked from [torchgpipe](https://github.com/kakaobrain/torchgpipe), Copyright 2019, Kakao Brain, licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0).

fairscale.nn.model_parallel is forked from [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), Copyright 2020, NVIDIA CORPORATION, licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0).

fairscale.optim.adascale is forked from [AdaptDL](https://github.com/petuum/adaptdl), Copyright 2020, Petuum, Inc., licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0).

fairscale.nn.misc.flatten_params_wrapper is forked from [PyTorch-Reparam-Module](https://github.com/SsnL/PyTorch-Reparam-Module), Copyright 2018, Tongzhou Wang, licensed under [MIT License](https://github.com/SsnL/PyTorch-Reparam-Module/blob/master/LICENSE).


## Citing FairScale

If you use FairScale in your publication, please cite it by using the following BibTeX entry.

```BibTeX
@Misc{FairScale2021,
  author =       {Mandeep Baines and Shruti Bhosale and Vittorio Caggiano and Naman Goyal and Siddharth Goyal and Myle Ott and Benjamin Lefaudeux and Vitaliy Liptchinsky and Mike Rabbat and Sam Sheiffer and Anjali Sridhar and Min Xu},
  title =        {FairScale:  A general purpose modular PyTorch library for high performance and large scale training},
  howpublished = {\url{https://github.com/facebookresearch/fairscale}},
  year =         {2021}
}
```

## FAQ
1. If you experience an error indicating that a default branch does not exist, it is probably due to a recent update that switched the default branch from `master` to `main`:
```
error: pathspec 'non-existing-branch' did not match any file(s) known to git
```
Please run the following commands to update to the main branch.
```
git branch -m master main
git fetch origin
git branch -u origin/main main
git remote set-head origin -a
```