- 28 Aug, 2020 1 commit
-
-
Min Xu authored
- Added a train(mode) method so the module is aware of eval mode
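A minimal sketch of the pattern this commit describes, assuming the change lives in an nn.Module subclass; the `Wrapper` name is hypothetical:

```python
import torch.nn as nn

# Hypothetical sketch: override nn.Module.train(mode) so the wrapper
# notices switches between train and eval mode (module.eval() calls
# train(False) under the hood).
class Wrapper(nn.Module):
    def train(self, mode: bool = True) -> "Wrapper":
        super().train(mode)  # flips self.training on this module and children
        # react to the mode change here (e.g. adjust internal bookkeeping)
        return self
```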
-
- 27 Aug, 2020 4 commits
-
-
msbaines authored
Work around a PyTorch bug that casts optimizer state on load (pytorch/pytorch#43706). Copied from https://github.com/pytorch/fairseq/blob/v0.9.0/fairseq/optim/fp16_optimizer.py#L251-L268
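A hedged sketch of the workaround, modeled on the linked fairseq code (the function name is illustrative). The stock `torch.optim.Optimizer.load_state_dict` casts restored state tensors to the parameters' dtype, which destroys FP32 state kept for FP16 parameters, so the saved tensors are written back afterwards:

```python
import torch

def load_state_dict_no_cast(optimizer: torch.optim.Optimizer, state_dict: dict) -> None:
    # Let the stock implementation rebuild param_groups first
    # (this is where the unwanted dtype cast happens).
    optimizer.load_state_dict(state_dict)

    # Map the saved parameter ids back to the live parameters.
    id_map = {
        old_id: param
        for old_group, group in zip(state_dict["param_groups"], optimizer.param_groups)
        for old_id, param in zip(old_group["params"], group["params"])
    }
    # Overwrite the cast state with the saved tensors, keeping their
    # original dtypes (e.g. FP32 state alongside FP16 params).
    for old_id, saved_state in state_dict["state"].items():
        optimizer.state[id_map[old_id]] = saved_state
```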
-
msbaines authored
-
msbaines authored
-
msbaines authored
* [fix] optim/oss: support optimizers with additional step() kwargs. Some of the optimizers in apex accept additional keyword arguments to step(), such as `scale`.
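A minimal sketch of the fix, assuming a wrapper in the style of OSS (`OptimWrapper` and `self.optim` are illustrative names): extra keyword arguments such as apex's `scale` are forwarded untouched to the wrapped optimizer.

```python
import torch

class OptimWrapper:
    def __init__(self, optim: torch.optim.Optimizer):
        self.optim = optim

    def step(self, closure=None, **kwargs):
        # Forward any optimizer-specific kwargs (e.g. apex's `scale`).
        if closure is not None:
            return self.optim.step(closure=closure, **kwargs)
        return self.optim.step(**kwargs)
```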
-
- 22 Aug, 2020 1 commit
-
-
Jun Ru Anderson authored
Implement scaling of optimizer state when using pure-fp16 training to avoid underflow. Update benchmark to use pure-fp16. Modify state_dict methods to store and load the optimizer state scale.
Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
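A hedged sketch of persisting the state scale across checkpoints, as the commit describes; the wrapper class and the `state_scale` key are illustrative, not the actual implementation:

```python
import torch

class ScaledStateOptim:
    def __init__(self, optim: torch.optim.Optimizer, state_scale: float = 2.0 ** 15):
        self.optim = optim
        # State tensors are kept multiplied by this factor so that small
        # FP16 values do not underflow to zero.
        self.state_scale = state_scale

    def state_dict(self) -> dict:
        d = self.optim.state_dict()
        d["state_scale"] = self.state_scale  # store the scale
        return d

    def load_state_dict(self, state_dict: dict) -> None:
        self.state_scale = state_dict.pop("state_scale")  # restore the scale
        self.optim.load_state_dict(state_dict)
```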
-
- 21 Aug, 2020 2 commits
-
-
Benjamin Lefaudeux authored
* initial commit, dummy training loop, pure pytorch but not DDP
* probably slightly broken, but rough DDP benchmark run
* adding the torchvision requirement for testing
* brainfart
* reduce the loss, do something slightly distributed
* some cleanup, distributing the training on two GPUs
* some cleanup + adding a vanilla run, still not good to go
* less silly defaults, gtg for a start I think
* smaller batch to fit the smaller gpus used in the circleci rigs
* adding some options for the benchmark, and regression testing
* [test] set torch seed for Adam tests (#49): set the torch seed for tests; xfail the mixed-precision and memory-efficient mixed-precision state_dict tests, since their states are cast to FP16 and back to FP32 during load_state_dict
* linting, I really need to automate this isort insanity
Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
Co-authored-by: Jun Ru Anderson <33384298+andersonic@users.noreply.github.com>
-
Jun Ru Anderson authored
Set the torch seed for tests. Mark the mixed-precision and memory-efficient mixed-precision state_dict tests as xfail, since their states are cast to FP16 and back to FP32 during load_state_dict.
Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
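A small sketch of the test pattern, with an illustrative test name and a plain FP32 Adam standing in for the real optimizers; under mixed precision the round trip through FP16 loses bits, hence the xfail:

```python
import pytest
import torch

@pytest.mark.xfail(reason="state is cast to FP16 and back to FP32 during load_state_dict")
def test_state_dict_roundtrip_mixed_precision():
    torch.manual_seed(0)  # deterministic parameters across runs
    param = torch.nn.Parameter(torch.randn(4))
    optim = torch.optim.Adam([param], lr=0.1)
    param.grad = torch.ones_like(param)
    optim.step()

    # Reload the state into a fresh optimizer and compare the state;
    # with FP16-cast state this comparison is expected to fail.
    restored = torch.optim.Adam([torch.nn.Parameter(param.detach().clone())], lr=0.1)
    restored.load_state_dict(optim.state_dict())
    key = restored.param_groups[0]["params"][0]
    assert torch.equal(restored.state[key]["exp_avg"], optim.state[param]["exp_avg"])
```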
-
- 20 Aug, 2020 1 commit
-
-
Benjamin Lefaudeux authored
* move the restored param groups to the original device
* add a corresponding test
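A hedged sketch of the idea behind the fix (the helper name is illustrative): after load_state_dict, restored state tensors should live on the same device as the parameters they belong to.

```python
import torch

def state_to_param_device(optimizer: torch.optim.Optimizer) -> None:
    # Move every restored state tensor to its parameter's device.
    for param, param_state in optimizer.state.items():
        for name, value in param_state.items():
            if torch.is_tensor(value):
                param_state[name] = value.to(param.device)
```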
-
- 19 Aug, 2020 1 commit
-
-
Jun Ru Anderson authored
Refactor tests to remove duplicated code. Fix the state_dict test to instantiate the second optimizer with the correct precision. Fix Adam.load_state_dict so that the restored optimizer state has the correct dtype.
Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
-
- 18 Aug, 2020 1 commit
-
-
Jun Ru Anderson authored
Allow training with optimizer state in FP16. Use an enum to select among full precision, mixed precision, memory-efficient mixed precision, and pure FP16. Improve the clarity of the testing code.
Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
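A sketch of the selector the commit describes; member names are illustrative, and the FP16/FP32 split per mode follows the commit messages in this log:

```python
from enum import Enum, auto

class Precision(Enum):
    FULL_PRECISION = auto()                    # FP32 params, grads, and optimizer state
    MIXED_PRECISION = auto()                   # FP16 params, FP32 grads
    MEMORY_EFFICIENT_MIXED_PRECISION = auto()  # FP16 params and grads, FP32 state
    PURE_FP16 = auto()                         # FP16 params, grads, and state
```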
-
- 14 Aug, 2020 5 commits
-
-
Jun Ru Anderson authored
Add support for mixed-precision (half-precision params, full-precision gradients) and memory-efficient (half-precision params and gradients) training with Adam.
Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
-
Benjamin Lefaudeux authored
* hotfix a half-cooked optimizer state restoration; the global shared state also needs to be restored
* [cleanup] get 100% coverage on oss.py (#38)
* better unit testing: check that the .param_groups attribute is properly in sync with the loaded state
Authored-by: Mandeep Singh Baines <msb@fb.com>
Co-authored-by: msbaines <35972327+msbaines@users.noreply.github.com>
-
msbaines authored
Authored-by: Mandeep Singh Baines <msb@fb.com>
-
msbaines authored
* Set baseline coverage to 94%
-
msbaines authored
-
- 13 Aug, 2020 5 commits
-
-
msbaines authored
-
Jun Ru Anderson authored
-
msbaines authored
-
Jun Ru Anderson authored
Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
-
Benjamin Lefaudeux authored
Aligning OSS state dict with `https://pytorch.org/docs/stable/_modules/torch/optim/optimizer.html#Optimizer` (#31)
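For reference, the `torch.optim.Optimizer` state_dict layout that OSS is being aligned with (state keys shown are Adam's; tensor values elided):

```python
state_dict = {
    "state": {
        0: {"step": ..., "exp_avg": ..., "exp_avg_sq": ...},
    },
    "param_groups": [
        {"lr": 1e-3, "betas": (0.9, 0.999), "eps": 1e-8, "params": [0]},
    ],
}
```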
-
- 08 Aug, 2020 1 commit
-
-
Min Xu authored
Co-authored-by: Min Xu <m1n@fb.com>
-
- 06 Aug, 2020 2 commits
-
-
Min Xu authored
Co-authored-by: Min Xu <m1n@fb.com>
-
Min Xu authored
Co-authored-by: Min Xu <m1n@fb.com>
-
- 01 Aug, 2020 1 commit
-
-
msbaines authored
-
- 31 Jul, 2020 11 commits
-
-
msbaines authored
-
msbaines authored
-
Benjamin Lefaudeux authored
-
Jun Ru Anderson authored
Add FusedAdam, update benchmark and add tests.
Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
-
Min Xu authored
-
msbaines authored
-
Tom Birch authored
-
msbaines authored
-
Jun Ru Anderson authored
Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
-
Jun Ru Anderson authored
-
Jun Ru Anderson authored
-
- 08 Jul, 2020 1 commit
-
-
Mandeep Singh Baines authored
-