- 10 May, 2019 1 commit
  - Michael Carilli authored
- 09 Apr, 2019 1 commit
  - Michael Carilli authored
- 04 Apr, 2019 1 commit
  - mcarilli authored:
    - Refactor to allow more flexible treatment of multiple optimizers/models/losses
    - Adding _process_optimizers.py
    - Created L0 tests (now passing)
    - fix: minor print typo (#234)
    - Make L1 results easier to read
    - L0 multiple model/optimizer/loss test fleshed out
    - Adding test that master params remain synced across distributed processes
    - Docstring updates
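The refactor above allows each loss to be handled independently under mixed-precision training. A minimal sketch of the idea (this is not apex's actual implementation; the class name and parameters are illustrative): with several losses, each can carry its own dynamic loss scaler, so an overflow in one backward pass does not shrink the scale used by the others.

```python
class DynamicLossScaler:
    """Illustrative per-loss dynamic loss scaler (hypothetical, not apex's API)."""

    def __init__(self, init_scale=2.0 ** 15, growth_interval=2000):
        self.scale = init_scale              # current multiplier applied to the loss
        self.growth_interval = growth_interval
        self._steps_since_overflow = 0

    def update(self, found_inf):
        """Halve the scale on overflow; double it after a run of clean steps."""
        if found_inf:
            self.scale /= 2.0
            self._steps_since_overflow = 0
        else:
            self._steps_since_overflow += 1
            if self._steps_since_overflow >= self.growth_interval:
                self.scale *= 2.0
                self._steps_since_overflow = 0


# One scaler per loss, mirroring the "multiple losses" treatment:
scalers = [DynamicLossScaler() for _ in range(2)]
scalers[0].update(found_inf=True)   # overflow in loss 0: only its scale halves
scalers[1].update(found_inf=False)  # loss 1's scale is unaffected
```

Keeping the scalers independent is the point of the sketch: a spike in one loss's gradients does not force the other losses to train at a reduced scale.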
- 19 Mar, 2019 1 commit
  - Michael Carilli authored
- 11 Mar, 2019 1 commit
  - Simon Layton authored:
    - Fix dispatch where we have a parameter group with multiple combinations of types
    - Optionally apply weight decay after momentum
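The two weight-decay placements mentioned in the commit can be contrasted in a single-parameter sketch (names and signatures here are illustrative, not the repo's actual API):

```python
# Hypothetical single-parameter SGD steps contrasting weight-decay placement.

def sgd_step_wd_before_momentum(p, grad, buf, lr, momentum, wd):
    """Classic SGD: fold weight decay into the gradient, then update momentum."""
    grad = grad + wd * p
    buf = momentum * buf + grad        # decay term accumulates in the buffer
    return p - lr * buf, buf

def sgd_step_wd_after_momentum(p, grad, buf, lr, momentum, wd):
    """Optional variant: momentum sees the raw gradient; decay applied afterwards."""
    buf = momentum * buf + grad        # buffer stays free of the decay term
    return p - lr * (buf + wd * p), buf
```

On the very first step both variants move the parameter by the same amount, but only the first folds the decay term into the momentum buffer, so the two trajectories diverge on subsequent steps.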
- 10 Mar, 2019 1 commit
  - Michael Carilli authored
- 08 Mar, 2019 1 commit
  - Simon Layton authored:
    - Initial implementation, all fp32
    - Tested against torch.optim.SGD
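"Tested against torch.optim.SGD" describes a parity test: run the candidate optimizer step and a reference step on identical inputs and compare the resulting parameters elementwise. A framework-free sketch of that pattern (all names here are hypothetical, and the candidate is intentionally identical to the reference for illustration):

```python
def reference_sgd_step(params, grads, lr):
    """Plain SGD reference: p <- p - lr * g, elementwise."""
    return [p - lr * g for p, g in zip(params, grads)]

def candidate_sgd_step(params, grads, lr):
    """Stand-in for the implementation under test."""
    return [p - lr * g for p, g in zip(params, grads)]

def assert_parity(params, grads, lr, tol=1e-12):
    """Fail loudly if the candidate drifts from the reference beyond tol."""
    ref = reference_sgd_step(params, grads, lr)
    out = candidate_sgd_step(params, grads, lr)
    assert all(abs(a - b) <= tol for a, b in zip(ref, out)), \
        "optimizer diverged from reference"

assert_parity([1.0, -2.0, 0.5], [0.1, 0.2, -0.3], lr=0.05)
```

In practice such a harness would iterate the comparison over many steps and hyperparameter settings, since single-step agreement does not guarantee the trajectories match.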
- 19 Feb, 2019 1 commit
  - Michael Carilli authored