![FairScale Logo](./docs/source/_static/img/fairscale-logo.png)

![PyPI](https://img.shields.io/pypi/v/fairscale)
[![Documentation Status](https://readthedocs.org/projects/fairscale/badge/?version=latest)](https://fairscale.readthedocs.io/en/latest/?badge=latest)
[![CircleCI](https://circleci.com/gh/facebookresearch/fairscale.svg?style=shield)](https://app.circleci.com/pipelines/github/facebookresearch/fairscale/) ![PyPI - License](https://img.shields.io/pypi/l/fairscale) [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/facebookresearch/fairscale/blob/main/CONTRIBUTING.md)
--------------------------------------------------------------------------------

## Description
FairScale is a PyTorch extension library for high performance and large scale training.
This library extends basic PyTorch capabilities while adding new SOTA scaling techniques.
FairScale makes the latest distributed training techniques available in the form of composable
modules and easy-to-use APIs. These APIs are a fundamental part of a researcher's toolbox as
they attempt to scale models with limited resources.

FairScale was designed with the following values in mind:

* **Usability** - Users should be able to understand and use FairScale APIs with minimal cognitive overhead.

* **Modularity** - Users should be able to combine multiple FairScale APIs as part of their training loop seamlessly.

* **Performance** - FairScale APIs should provide the best possible performance in terms of scaling and efficiency.

## What's New:

* FairScale is tested with the following PyTorch versions (with CUDA 11.2): 1.8.1, 1.10.0 and 1.11.0.dev20211101+cu111.
* November 2021 [fairscale 0.4.3 was released](https://github.com/facebookresearch/fairscale/releases/tag/v0.4.3).
    * We have experimental support for offloading params to disk when using the FSDP API for evaluation workloads.
    * We have an experimental layer that fuses multiple layers together to support training with large vocabulary sizes.
* November 2021 [fairscale 0.4.2 was released](https://github.com/facebookresearch/fairscale/releases/tag/v0.4.2).
    * We have a new experimental API called the LayerwiseMemoryTracker to help track, visualize, and suggest fixes for memory issues occurring during the forward/backward pass of your models.
    * Introducing the SlowMoDistributedDataParallel API, a distributed training wrapper that is useful on clusters with slow network interconnects (e.g. Ethernet); see the sketch after this list.
* September 2021 [`master` branch renamed to `main`](https://github.com/github/renaming).
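
As a quick taste of SlowMoDistributedDataParallel, here is a minimal sketch. It assumes a process group is already initialized and that `model`, `optimizer`, `loss_fn`, and `dataloader` are defined as in the ZeRO example below; since the API is experimental, the import path and the `perform_slowmo` call may change between releases.

```python
import torch
from fairscale.experimental.nn.data_parallel import SlowMoDistributedDataParallel as SlowMoDDP

# Wrap the model; nprocs_per_node should match the number of GPUs on each machine.
model = SlowMoDDP(model, nprocs_per_node=torch.cuda.device_count())

for batch in dataloader:
    optimizer.zero_grad()
    loss = loss_fn(model(batch["inputs"]), batch["label"])
    loss.backward()
    optimizer.step()
    model.perform_slowmo(optimizer)  # SlowMo's periodic synchronization/averaging step
```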

## Installation

To install FairScale, please see the following [instructions](https://github.com/facebookresearch/fairscale/blob/main/docs/source/installation_instructions.rst).
You can install FairScale with pip or conda, or build it directly from source.
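For example, installing the latest release from PyPI:

```
pip install fairscale
```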

## Getting Started
The full [documentation](https://fairscale.readthedocs.io/) contains instructions for getting started, deep dives and tutorials about the various FairScale APIs.

## Examples

Here are a few sample snippets from a subset of FairScale offerings:

### Pipe

Run a 4-layer model on 2 GPUs. The first two layers run on cuda:0 and the next two layers run on cuda:1.

```python
import torch

import fairscale

# Hypothetical layers standing in for the four stages of a real model.
a, b, c, d = [torch.nn.Linear(10, 10) for _ in range(4)]

model = torch.nn.Sequential(a, b, c, d)
# balance=[2, 2] places the first two layers on devices[0] and the last two on devices[1];
# each mini-batch is split into 8 micro-batches ("chunks") that are pipelined through.
model = fairscale.nn.Pipe(model, balance=[2, 2], devices=[0, 1], chunks=8)
```
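
A minimal forward/backward pass through the wrapped model could then look like the following sketch (assuming two visible GPUs and the toy layers above):

```python
x = torch.randn(32, 10).cuda(0)  # inputs must live on the first pipeline device
y = model(x)                     # the output is returned on the last device (cuda:1)
loss = y.sum()                   # any loss computed on the output's device
loss.backward()
```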

### Optimizer state sharding (ZeRO)
See a more complete example [here](https://github.com/facebookresearch/fairscale/blob/main/benchmarks/oss.py), but a minimal example could look like the following:

```python
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

from fairscale.optim.oss import OSS
from fairscale.nn.data_parallel import ShardedDataParallel as ShardedDDP


def train(rank: int, world_size: int, epochs: int):
    # DDP init example
    dist.init_process_group(backend='nccl', init_method="tcp://localhost:29501", rank=rank, world_size=world_size)

    # Problem statement
    model = myAwesomeModel().to(rank)
    dataloader = mySuperFastDataloader()
    loss_fn = myVeryRelevantLoss()

    base_optimizer = torch.optim.SGD  # pick any pytorch compliant optimizer here
    base_optimizer_arguments = {}  # pass any optimizer specific arguments here, or directly below when instantiating OSS

    # Wrap the optimizer in its state sharding brethren
    optimizer = OSS(params=model.parameters(), optim=base_optimizer, **base_optimizer_arguments)

    # Wrap the model into ShardedDDP, which will reduce gradients to the proper ranks
    model = ShardedDDP(model, optimizer)

    # Any relevant training loop, nothing specific to OSS. For example:
    model.train()
    for e in range(epochs):
        for batch in dataloader:
            # Train
            model.zero_grad()
            outputs = model(batch["inputs"])
            loss = loss_fn(outputs, batch["label"])
            loss.backward()
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    # Supposing that WORLD_SIZE and EPOCHS are somehow defined somewhere
    mp.spawn(
        train,
        args=(
            WORLD_SIZE,
            EPOCHS,
        ),
        nprocs=WORLD_SIZE,
        join=True,
    )
```

### AdaScale SGD

AdaScale can wrap an SGD optimizer for use in DDP (Distributed Data Parallel)
training, or in non-DDP training with gradient accumulation. The benefit is being able to re-use
the same LR schedule from a baseline batch size when the effective batch size is bigger.

Note that AdaScale does _not_ help increase per-GPU batch size.

```python
from torch.optim import SGD
from torch.optim.lr_scheduler import LambdaLR  # or your scheduler
from fairscale.optim import AdaScale

...
optim = AdaScale(SGD(model.parameters(), lr=0.1))
scheduler = LambdaLR(optim, ...)
...
# Note: the train loop should be with DDP or with gradient accumulation.
last_epoch = 0
step = 0
done = False
while not done:
    for sample in dataset:
        ...
        step += optim.gain()  # gain() reports AdaScale's scaling factor, so `step` counts effective baseline steps
        optim.step()
        epoch = step // len(dataset)
        if last_epoch != epoch:
            scheduler.step()  # advance the baseline LR schedule by effective epochs, not actual ones
            last_epoch = epoch
        if epoch > max_epoch:
            done = True
```

The primary goal is to allow scaling to bigger batch sizes without losing model accuracy.
(However, training time might be longer compared to training without AdaScale.)

At a high level, we want ML researchers to:
  * go parallel more easily (i.e. no need to find new learning rate schedules)
  * not worry about losing accuracy
  * potentially achieve higher GPU efficiency (fewer steps, less networking overhead, etc.)

## Testing

We use CircleCI to test FairScale with the following PyTorch versions (with CUDA 11.2):
* the latest stable release (1.10.0)
* the latest LTS release (1.8.1)
* a recent nightly release (1.11.0.dev20211101+cu111)

Please create an [issue](https://github.com/facebookresearch/fairscale/issues) if you are having trouble with installation.

## Contributors

We welcome outside contributions! Please see the [CONTRIBUTING](CONTRIBUTING.md) instructions for how you can contribute to FairScale.

## License

FairScale is licensed under the [BSD-3-Clause License](LICENSE).

fairscale.nn.pipe is forked from [torchgpipe](https://github.com/kakaobrain/torchgpipe), Copyright 2019, Kakao Brain, licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0).

fairscale.nn.model_parallel is forked from [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), Copyright 2020, NVIDIA CORPORATION, licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0).

fairscale.optim.adascale is forked from [AdaptDL](https://github.com/petuum/adaptdl), Copyright 2020, Petuum, Inc., licensed under [Apache License](http://www.apache.org/licenses/LICENSE-2.0).

fairscale.nn.misc.flatten_params_wrapper is forked from [PyTorch-Reparam-Module](https://github.com/SsnL/PyTorch-Reparam-Module), Copyright 2018, Tongzhou Wang, licensed under [MIT License](https://github.com/SsnL/PyTorch-Reparam-Module/blob/master/LICENSE).


## Citing FairScale

If you use FairScale in your publication, please cite it by using the following BibTeX entry.

```BibTeX
@Misc{FairScale2021,
  author =       {Mandeep Baines and Shruti Bhosale and Vittorio Caggiano and Naman Goyal and Siddharth Goyal and Myle Ott and Benjamin Lefaudeux and Vitaliy Liptchinsky and Mike Rabbat and Sam Sheiffer and Anjali Sridhar and Min Xu},
  title =        {FairScale: A general purpose modular PyTorch library for high performance and large scale training},
  howpublished = {\url{https://github.com/facebookresearch/fairscale}},
  year =         {2021}
}
```

## FAQ
1. If you experience an error indicating that a default branch does not exist, it is probably due to the latest update, which switched the default branch from "master" to "main":
```
error: pathspec 'non-existing-branch' did not match any file(s) known to git
```
Please run the following commands to update your local clone to the main branch.
```
git branch -m master main
git fetch origin
git branch -u origin/main main
git remote set-head origin -a
```