.. FairScale documentation master file, created by
   sphinx-quickstart on Tue Sep  8 16:19:17 2020.
   You can adapt this file completely to your liking,
   but it should at least contain the root `toctree`
   directive.

Welcome to FairScale's documentation!
=====================================

*FairScale* is a PyTorch extension library for high performance and
large scale training, optimizing training on a single machine or
across multiple machines/nodes. The library extends basic PyTorch
capabilities while adding new experimental ones.


Components
----------

* Parallelism:
   * `Pipeline parallelism <../../en/latest/api/nn/pipe.html>`_ (usage sketch below)

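For example, ``Pipe`` splits an ``nn.Sequential`` model into partitions that
run on different devices and pipelines micro-batches through them. A minimal
sketch, assuming at least two CUDA devices are visible (the layer sizes are
illustrative):

.. code-block:: python

    import torch
    import torch.nn as nn
    from fairscale.nn import Pipe

    # Pipe expects an nn.Sequential so it can split it into partitions.
    model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))

    # balance=[2, 1]: the first two layers form partition 0, the last
    # layer partition 1; chunks=4 splits each batch into 4 micro-batches
    # that are pipelined through the partitions.
    model = Pipe(model, balance=[2, 1], chunks=4)

    # Inputs go on the device holding the first partition (cuda:0 here).
    out = model(torch.randn(8, 16).cuda(0))
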
* Sharded training:
    * `Optimizer state sharding <../../en/latest/api/optim/oss.html>`_
    * `Sharded grad scaler - automatic mixed precision <../../en/latest/api/optim/grad_scaler.html>`_
    * `Sharded distributed data parallel <../../en/latest/api/nn/sharded_ddp.html>`_ (combined sketch after this list)
    * `Fully Sharded Data Parallel (FSDP) <../../en/latest/api/nn/fsdp.html>`_

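``OSS`` and ``ShardedDataParallel`` are designed to compose: ``OSS`` shards
the optimizer state across data-parallel ranks, and ``ShardedDataParallel``
reduces each gradient only to the rank that owns the matching shard. A
minimal single-process sketch (the one-rank gloo group merely makes the
example self-contained; real jobs launch one process per GPU):

.. code-block:: python

    import os

    import torch
    import torch.distributed as dist
    from fairscale.nn.data_parallel import ShardedDataParallel
    from fairscale.optim.oss import OSS

    # One-rank process group so the sketch runs standalone; real
    # training launches one process per GPU (torchrun, mp.spawn, ...).
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    model = torch.nn.Linear(16, 4)

    # OSS wraps a vanilla optimizer and shards its state across ranks.
    optimizer = OSS(model.parameters(), optim=torch.optim.SGD, lr=0.1)

    # Gradients are reduced to the rank that owns the matching shard.
    model = ShardedDataParallel(model, optimizer)

    model(torch.randn(8, 16)).sum().backward()
    optimizer.step()

``FullyShardedDataParallel`` goes a step further and also shards the
parameters and gradients themselves; it wraps the model directly
(``FSDP(model)``) and is used with a regular optimizer instead of ``OSS``.
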
* Optimization at scale:
   * `AdaScale SGD <../../en/latest/api/optim/adascale.html>`_ (sketch below)

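``AdaScale`` wraps SGD and adapts the learning-rate gain from measured
gradient statistics as the effective batch size grows. A minimal
single-process sketch, using gradient accumulation over two micro-batches
to stand in for a second worker (the values are illustrative):

.. code-block:: python

    import torch
    from fairscale.optim import AdaScale

    model = torch.nn.Linear(16, 4)

    # AdaScale needs an effective scale > 1; two accumulated
    # micro-batches here play the role of a second data-parallel worker.
    optimizer = AdaScale(
        torch.optim.SGD(model.parameters(), lr=0.1),
        num_gradients_to_accumulate=2,
    )

    for _ in range(2):  # two micro-batches per optimizer step
        model(torch.randn(8, 16)).sum().backward()
    optimizer.step()      # SGD update scaled by AdaScale's gain
    optimizer.zero_grad()
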
* GPU memory optimization:
   * `Activation checkpointing wrapper <../../en/latest/api/nn/misc/checkpoint_activations.html>`_ (sketch below)

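``checkpoint_wrapper`` frees the wrapped module's activations after the
forward pass and recomputes them during backward, trading compute for
memory. A minimal sketch (the layer sizes are illustrative):

.. code-block:: python

    import torch
    import torch.nn as nn
    from fairscale.nn import checkpoint_wrapper

    # Activations inside the wrapped block are not stored; they are
    # recomputed on the fly when backward reaches the block.
    block = checkpoint_wrapper(nn.Sequential(nn.Linear(16, 16), nn.ReLU()))
    model = nn.Sequential(block, nn.Linear(16, 4))

    x = torch.randn(8, 16, requires_grad=True)
    model(x).sum().backward()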

* `Tutorials <../../en/latest/tutorials/index.html>`_


.. warning::
    This library is under active development.
    Please create an
    `issue <https://github.com/facebookresearch/fairscale/issues>`_
    if you run into any trouble or have a suggestion.

.. toctree::
   :maxdepth: 5
   :caption: Contents:
   :hidden:

   tutorials/index
   api/index


Reference
=========

:ref:`genindex` | :ref:`modindex` | :ref:`search`