Unverified commit 3e36cd07 authored by tmarkstrum, committed by GitHub

[chore] 0.4.6 release (#953)

* [chore] 0.4.6 release

* added the third party libs removed by precommit
parent 8fa26ae4
@@ -5,8 +5,14 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.4.7] - TBD
### Added
### Fixed
## [0.4.6] - 2022-03-08
### Added
- CosFace's LMCL is added to MEVO. This is a loss function that is suitable
@@ -18,6 +24,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  if set, wraps the root module regardless of how many unwrapped params were
  left after children were wrapped. [#930]
- FSDP: Add support for saving optimizer state when using expert replicas with FSDP.
- OSS: Add a new arg "forced_broadcast_object" to OSS __init__ that applies "_broadcast_object"
  when rebuilding the sharded optimizer. [#937]
- FSDP: Add an arg disable_reshard_on_root to FSDP __init__. [#878]
### Fixed
- FSDP: Fixed handling of internal states with state_dict and load_state_dict
......
@@ -27,6 +27,8 @@ FairScale was designed with the following values in mind:
## What's New:
* March 2022 [fairscale 0.4.6 was released](https://github.com/facebookresearch/fairscale/releases/tag/v0.4.6).
    * We have support for CosFace's LMCL in MEVO. This is a loss function that is suitable for a large number of prediction target classes.
* January 2022 [fairscale 0.4.5 was released](https://github.com/facebookresearch/fairscale/releases/tag/v0.4.5).
* We have experimental support for layer wise gradient scaling.
* We enabled reduce_scatter operation overlapping in FSDP backward propagation.
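The LMCL mentioned above is CosFace's large-margin cosine loss: cosine similarities between normalized embeddings and normalized class weights, with a fixed margin subtracted from the target-class cosine before scaling. A minimal NumPy sketch of that idea (an illustration only, not fairscale's MEVO implementation; the function name and defaults here are hypothetical):

```python
import numpy as np

def lmcl_logits(embeddings, weights, labels, s=30.0, m=0.35):
    """CosFace/LMCL-style logits: s * (cos(theta) - m) for the target
    class, s * cos(theta) for every other class."""
    # L2-normalize features and class weight vectors so dot products are cosines
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = e @ w.T  # shape: (batch, num_classes)
    logits = cos.copy()
    # subtract the margin m only from each sample's target-class cosine
    logits[np.arange(len(labels)), labels] -= m
    return s * logits
```

The scaled logits would then be fed to a standard softmax cross-entropy; the margin forces the target cosine to beat the others by at least m, which is what makes the loss effective with very many target classes.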
......