Unverified commit 93d115c6 authored by Min Xu, committed by GitHub

update readme (#439)

parent 506d6209
@@ -10,20 +10,22 @@ FairScale is a PyTorch extension library for high performance and large scale training
 FairScale supports:
 * Parallelism:
-  * Pipeline parallelism (fairscale.nn.pipe)
-  * Asynchronous Pipeline parallelism (fairscale.nn.async_pipe)
-  * Mixture of experts (fairscale.nn.moe.moe_layer)
-  * Model Parallelism (fairscale.nn.model_parallel.layers)
-  * _experimental_ AmpNet (fairscale.experimental.nn.ampnet_pipe)
+  * Pipeline parallelism (`fairscale.nn.pipe`)
+  * Asynchronous Pipeline parallelism (`fairscale.nn.async_pipe`)
+  * Mixture of experts (`fairscale.nn.moe.moe_layer`)
+  * Model Parallelism (`fairscale.nn.model_parallel.layers`)
+  * _experimental_ AmpNet (`fairscale.experimental.nn.ampnet_pipe`)
 * Sharded training:
-  * Optimizer state sharding (fairscale.optim.OSS)
-  * Sharded grad scaler - automatic mixed precision (fairscale.optim.grad_scaler)
-  * Sharded distributed data parallel (fairscale.nn.ShardedDataParallel)
-  * Fully Sharded Data Parallel (FSDP) (fairscale.nn.FullyShardedDataParallel)
+  * Optimizer state sharding (`fairscale.optim.OSS`)
+  * Sharded Data Parallel (SDP) (`fairscale.nn.ShardedDataParallel`)
+  * Fully Sharded Data Parallel (FSDP) (`fairscale.nn.FullyShardedDataParallel`)
 * Optimization at scale:
-  * AdaScale SGD (fairscale.optim.AdaScale)
+  * AdaScale SGD (`fairscale.optim.AdaScale`)
 * GPU memory optimization:
-  * Activation checkpointing wrapper(fairscale.nn.misc.checkpoint_wrapper)
+  * Activation checkpointing wrapper (`fairscale.nn.misc.checkpoint_wrapper`)
+  * _experimental_ CPU offloaded model (`fairscale.experimental.nn.offload.OffloadModel`)
+* GPU speed optimization:
+  * Sharded grad scaler - automatic mixed precision (`fairscale.optim.grad_scaler`)
 
 ## Requirements
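To illustrate the optimizer state sharding entry in the list above, here is a minimal sketch (not part of this commit) of wrapping a standard PyTorch optimizer in `fairscale.optim.OSS`. The single-process `gloo` group exists only so the snippet runs standalone; a real job launches one process per GPU, and each rank then holds only its shard of the optimizer state.

```python
import os

import torch
import torch.distributed as dist
from fairscale.optim.oss import OSS


def main() -> None:
    # Single-process process group so the sketch runs on its own;
    # real training launches one process per GPU instead.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    model = torch.nn.Linear(10, 10)

    # OSS takes a regular optimizer class plus its kwargs and shards
    # the resulting state (e.g. momentum buffers) across the ranks.
    optimizer = OSS(params=model.parameters(), optim=torch.optim.SGD,
                    lr=0.01, momentum=0.9)

    loss = model(torch.randn(4, 10)).sum()
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```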
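The activation checkpointing wrapper named under GPU memory optimization can likewise be applied to any submodule. A minimal sketch, again illustrative rather than taken from this commit: the wrapped block discards its activations in the forward pass and recomputes them during backward, trading compute for GPU memory.

```python
import torch
from fairscale.nn.misc import checkpoint_wrapper

# Wrap the block whose activations are too expensive to keep around.
block = checkpoint_wrapper(
    torch.nn.Sequential(
        torch.nn.Linear(128, 128),
        torch.nn.ReLU(),
    )
)

# requires_grad on the input lets gradients flow through the
# recomputed segment during the backward pass.
x = torch.randn(2, 128, requires_grad=True)
loss = block(x).sum()
loss.backward()
```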
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the BSD license found in the
# LICENSE file in the root directory of this source tree.