- 30 Mar, 2022 1 commit
Paul Johnson authored
This is no longer needed since isort's version is 5.10. Also pin black to version 22.3.0 to work around an issue with the click dependency. Update files that now fail with the new version of black: {a = 2 ** 4} -> {a = 2**4}
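For context, black 22.x removes the spaces around the power operator when both operands are simple names or literals; a minimal before/after sketch of the kind of change applied to the affected files:

```python
# Before reformatting (older black kept spaces around **):
a = 2 ** 4
scaled = base ** exponent

# After reformatting with black 22.3.0 (simple operands are "hugged"):
a = 2**4
scaled = base**exponent
```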
- 28 Oct, 2020 1 commit
msbaines authored
- 03 Sep, 2020 1 commit
Jun Ru Anderson authored
Add GradScaler to Fairscale, subclassing PyTorch's GradScaler. Use GradScaler in the pipe benchmark; although it is not needed in this case, it is a good example of how to use gradient scaling for larger models that do require it in order to converge. Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
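The scaling loop itself follows the standard PyTorch GradScaler pattern that the new subclass builds on; a minimal sketch, assuming `model`, `optimizer`, `loss_fn`, and `loader` already exist:

```python
import torch

scaler = torch.cuda.amp.GradScaler()  # the commit's GradScaler subclasses this

for inputs, targets in loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()  # scale the loss so fp16 gradients do not underflow
    scaler.step(optimizer)         # unscales gradients; skips the step on inf/NaN
    scaler.update()                # grows or shrinks the scale for the next iteration
```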
- 22 Aug, 2020 1 commit
Jun Ru Anderson authored
Implement scaling of the optimizer state when using pure-fp16 training to avoid underflow. Update the benchmark to use pure fp16. Modify the state_dict methods to store and load the optimizer state scale. Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
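The underflow problem and the fix can be illustrated in a few lines; this is a hypothetical sketch of the idea (the constant and function names are illustrative, not fairscale's API):

```python
import torch

OPTIM_STATE_SCALE = 2.0**16  # illustrative constant, large enough to lift tiny fp16 values

def store_exp_avg(exp_avg_fp32: torch.Tensor) -> torch.Tensor:
    # Scale up before casting: moments below fp16's smallest normal (~6e-5)
    # would otherwise flush to zero when stored in half precision.
    return (exp_avg_fp32 * OPTIM_STATE_SCALE).half()

def use_exp_avg(exp_avg_fp16: torch.Tensor) -> torch.Tensor:
    # Divide the scale back out when the moment is used in the parameter update;
    # the scale itself must round-trip through state_dict so checkpoints stay consistent.
    return exp_avg_fp16.float() / OPTIM_STATE_SCALE
```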
- 21 Aug, 2020 1 commit
Jun Ru Anderson authored
Set the torch seed for tests. xfail the mixed-precision and memory-efficient mixed-precision state_dict tests because their states are cast to FP16 and back to FP32 during load_state_dict. Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
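A minimal sketch of that testing pattern (the test name is illustrative): seed torch for reproducibility and mark the lossy round-trip as an expected failure:

```python
import pytest
import torch

@pytest.mark.xfail(reason="state is cast to FP16 and back to FP32 in load_state_dict")
def test_mixed_precision_state_dict_roundtrip():
    torch.manual_seed(0)  # deterministic initialization across test runs
    ...  # build the optimizer, save its state_dict, reload it, compare the states
```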
- 19 Aug, 2020 1 commit
Jun Ru Anderson authored
Refactor tests to remove duplicated code. Fix the state_dict test to instantiate the second optimizer with the correct precision. Fix Adam.load_state_dict to cast the optimizer state to the right type. Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
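The load_state_dict fix amounts to casting the loaded floating-point state buffers to the dtype the optimizer actually runs in; a hypothetical sketch (not fairscale's actual code):

```python
import torch

def cast_loaded_state(optimizer: torch.optim.Optimizer, dtype: torch.dtype) -> None:
    # After load_state_dict, checkpointed fp32 buffers such as exp_avg / exp_avg_sq
    # may not match the dtype the optimizer uses; cast them in place.
    for state in optimizer.state.values():
        for key, value in state.items():
            if torch.is_tensor(value) and value.is_floating_point():
                state[key] = value.to(dtype)
```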
- 18 Aug, 2020 1 commit
Jun Ru Anderson authored
Allow training with the optimizer state in fp16. Use an enum to select among full precision, mixed precision, memory-efficient mixed precision, and pure fp16. Improve the clarity of the testing code. Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
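A sketch of what such a precision selector can look like, with the meaning of each mode taken from the surrounding commits (the enum name and members are illustrative, not necessarily fairscale's exact API):

```python
from enum import Enum, auto

class Precision(Enum):
    FULL_PRECISION = auto()                    # fp32 params, grads, and optimizer state
    MIXED_PRECISION = auto()                   # fp16 params, fp32 grads and optimizer state
    MEMORY_EFFICIENT_MIXED_PRECISION = auto()  # fp16 params and grads, fp32 optimizer state
    PURE_FP16 = auto()                         # fp16 params, grads, and optimizer state
```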
- 14 Aug, 2020 1 commit
Jun Ru Anderson authored
Add support for mixed-precision (half-precision params, full-precision gradients) and memory-efficient (half-precision params and half-precision gradients) training with Adam. Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
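Both modes rest on the usual master-weights idea: parameters live in fp16 while the optimizer steps on an fp32 copy. A minimal, illustrative sketch (not fairscale's implementation):

```python
import torch

def build_master_params(model: torch.nn.Module):
    # fp32 leaf copies of the fp16 model parameters; the optimizer is built over these
    return [p.detach().clone().float() for p in model.parameters()]

def step_with_master(model, master_params, optimizer):
    for master, p in zip(master_params, model.parameters()):
        # mixed precision keeps fp32 gradients; the memory-efficient mode keeps
        # fp16 gradients and casts them up only for the update
        master.grad = p.grad.float()
    optimizer.step()  # optimizer was constructed over master_params
    with torch.no_grad():
        for master, p in zip(master_params, model.parameters()):
            p.copy_(master)  # write the updated fp32 weights back into the fp16 params
```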
- 31 Jul, 2020 1 commit
Jun Ru Anderson authored
Add FusedAdam, update the benchmark, and add tests. Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
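In use, a fused Adam is meant as a drop-in replacement for torch.optim.Adam; a brief sketch assuming the optimizer is exposed as fairscale.optim.Adam (the import path and a CUDA build of the fused kernel are assumptions here):

```python
import torch
from fairscale.optim import Adam  # assumed import path; requires the CUDA extension

model = torch.nn.Linear(1024, 1024).cuda().half()
optimizer = Adam(model.parameters(), lr=1e-3)  # same constructor shape as torch.optim.Adam

x = torch.randn(8, 1024, device="cuda", dtype=torch.float16)
model(x).sum().backward()
optimizer.step()
```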