Unverified Commit 4a3bd93a authored by tmarkstrum, committed by GitHub

[Chore] release 0.4.5 (#911)

* release 0.4.5

* added some content for the release

* fixed a format issue.
CHANGELOG.md
@@ -4,15 +4,17 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
-## [0.4.5] - TBD
+## [0.4.6] - TBD
+
+## [0.4.5] - 2022-01-14
 ### Added
 - Layer-wise Gradient Scaling [new feature][experimental] Layer-wise gradient
   scaling helps overcome gradient overflow issues. When used in conjunction with
   mixed precision, it enables training larger models and makes the training
   process more stable, especially in deep networks. [#879]
-- FSDP: Added state_dict_on_rank_0_only flag allow user choose to return full
-state dict on rank 0 and return empty dict non-rank 0 to prevent OOM [#844]
+- FSDP: Added a process_group_reduce_scatter parameter so users can pass in the process group used for the reduce-scatter operation. [#897]
+- FSDP: Added a state_dict_on_rank_0_only flag that lets users return the full state dict on rank 0 and an empty dict on all other ranks, to prevent OOM. [#844]
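For context, a minimal sketch of how the two new FSDP options above might be used together. The changelog names the flag and the parameter but not their exact placement, so this assumes both `state_dict_on_rank_0_only` and `process_group_reduce_scatter` are keyword arguments to the `FullyShardedDataParallel` constructor, and that `torch.distributed` has already been initialized (e.g. via `torchrun`):

```python
import torch
import torch.distributed as dist
from fairscale.nn import FullyShardedDataParallel as FSDP

# Assumption: the default process group is already initialized.
pg_default = dist.new_group()         # group used for all-gather ops
pg_reduce_scatter = dist.new_group()  # dedicated group for reduce-scatter

model = FSDP(
    torch.nn.Linear(1024, 1024),
    process_group=pg_default,
    # New in 0.4.5 (#897): route reduce-scatter through its own group.
    process_group_reduce_scatter=pg_reduce_scatter,
    # New in 0.4.5 (#844): only rank 0 materializes the full state dict;
    # other ranks get an empty dict, avoiding OOM during consolidation.
    state_dict_on_rank_0_only=True,
)

state = model.state_dict()  # full dict on rank 0, empty elsewhere
```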
README.md
@@ -22,6 +22,9 @@ FairScale was designed with the following values in mind:
 ## What's New:
+* January 2022 [fairscale 0.4.5 was released](https://github.com/facebookresearch/fairscale/releases/tag/v0.4.5).
+  * We have experimental support for layer-wise gradient scaling.
+  * We enabled overlapping of the reduce_scatter operation with backward computation in FSDP.
 * December 2021 [fairscale 0.4.4 was released](https://github.com/facebookresearch/fairscale/releases/tag/v0.4.4).
 * FairScale is tested with the following PyTorch versions (with CUDA 11.2): 1.8.1, 1.10.0 and 1.11.0.dev20211101+cu111.
 * November 2021 [fairscale 0.4.3 was released](https://github.com/facebookresearch/fairscale/releases/tag/v0.4.3).
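The layer-wise gradient scaling mentioned above is experimental, so rather than guess at fairscale's API, here is a plain-PyTorch sketch of the underlying idea: give each layer its own scale factor, apply it to that layer's gradients as they are produced, and undo it before the optimizer step. The model, scale values, and hook bookkeeping are all illustrative, not fairscale's implementation; in real mixed-precision training the scaling matters during the low-precision backward pass, which this fp32 toy does not show:

```python
import torch
import torch.nn as nn

# Toy model; the per-layer scale values below are made up for illustration.
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Hypothetical per-layer scale factors, keyed by Sequential index.
layer_scales = {"0": 2.0 ** 10, "2": 2.0 ** 4}

# Multiply each parameter's gradient by its layer's scale factor as soon as
# the gradient is produced during backward.
for name, param in model.named_parameters():
    scale = layer_scales[name.split(".")[0]]
    param.register_hook(lambda g, s=scale: g * s)

x, y = torch.randn(4, 8), torch.randn(4, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()

# Undo the scaling before the optimizer step so update magnitudes are correct.
for name, param in model.named_parameters():
    param.grad.div_(layer_scales[name.split(".")[0]])

opt.step()
```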