Unverified Commit 47e57935 authored by Vittorio Caggiano, committed by GitHub

Logo (#227)



* add logo

* Update README.md
Co-authored-by: Vittorio Caggiano <caggiano@fb.com>
parent 6d223777
# fairscale
![FairScale Logo](./docs/source/_static/img/fairscale-logo.png)
![PyPI](https://img.shields.io/pypi/v/fairscale)
[![Documentation Status](https://readthedocs.org/projects/fairscale/badge/?version=latest)](https://fairscale.readthedocs.io/en/latest/?badge=latest)
[![CircleCI](https://circleci.com/gh/facebookresearch/fairscale.svg?style=shield)](https://app.circleci.com/pipelines/github/facebookresearch/fairscale/) ![PyPI - License](https://img.shields.io/pypi/l/fairscale) [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/facebookresearch/fairscale/blob/master/CONTRIBUTING.md)
--------------------------------------------------------------------------------
## Description
fairscale is a PyTorch extension library for high performance and large scale training, on a single machine or across multiple machines/nodes. This library extends basic PyTorch capabilities while adding new experimental ones.
@@ -9,9 +11,11 @@ fairscale is a PyTorch extension library for high performance and large scale training
fairscale supports:
* Parallelism:
   * pipeline parallelism (fairscale.nn.Pipe)
   * tensor parallelism (fairscale.nn.model_parallel)
-* Optimization:
-   * optimizer state sharding (fairscale.optim.oss)
+* Sharded training:
+   * Optimizer state sharding (fairscale.optim.oss)
+   * Sharded grad scaler - automatic mixed precision
+   * Sharded distributed data parallel
+* Optimization at scale:
+   * AdaScale SGD (from fairscale.optim import AdaScale)
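To make the parallelism entries above concrete, here is a minimal sketch of `fairscale.nn.Pipe` usage. It assumes a machine with at least two devices (Pipe can otherwise fall back to CPU partitions); the layer sizes, `balance`, and `chunks` values are illustrative, not taken from this commit.

```python
import torch
from fairscale.nn import Pipe

# An illustrative 4-layer sequential model; sizes are arbitrary.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 10),
    torch.nn.ReLU(),
    torch.nn.Linear(10, 10),
    torch.nn.ReLU(),
)

# Pipe partitions the sequence across devices: balance=[2, 2] places two
# layers on each of two partitions, and chunks=4 splits each input batch
# into four micro-batches that flow through the pipeline concurrently.
model = Pipe(model, balance=[2, 2], chunks=4)

x = torch.rand(8, 10)
y = model(x)  # the output tensor lives on the device of the last partition
```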
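Likewise, a hedged sketch of the optimizer-side features: `OSS` shards optimizer state across the ranks of an already-initialized `torch.distributed` process group, and `AdaScale` wraps a vanilla optimizer for large-batch data-parallel training. The toy model, learning rates, and the assumption that the process group was initialized elsewhere are illustrative.

```python
import torch
from fairscale.optim import AdaScale
from fairscale.optim.oss import OSS

# Assumes the default process group was initialized elsewhere, e.g. via
# torch.distributed.init_process_group(...) in each worker process.

model = torch.nn.Linear(10, 10)  # illustrative model

# Optimizer state sharding: OSS takes the *class* of the base optimizer plus
# its kwargs, then shards the optimizer state across the group's ranks.
optimizer = OSS(params=model.parameters(), optim=torch.optim.SGD, lr=1e-4)

# AdaScale wraps an already-constructed optimizer and adapts the learning
# rate to the effective batch size of data-parallel training.
adascale = AdaScale(torch.optim.SGD(model.parameters(), lr=0.1))
```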