- 26 Apr, 2019 4 commits
  - Michael Carilli authored
  - ptrblck authored
    * change .type().ScalarType() to .scalar_type() + at::ScalarType::X to at::kX
    * revert scalar_type() to type() for AT_DISPATCH_FLOATING_TYPES_AND_HALF
    * revert scalar_type() to type() in AT_DISPATCH_FLOATING_TYPES
    * revert scalar_type() to type() for AT_DISPATCH_FLOATING_TYPES_AND_HALF in welford.cu
    * revert scalar_type() to type() in layer_norm_cuda_kernel.cu
    * revert at::kType to at::ScalarType::Type
    * use DISPATCH_FLOAT_AND_HALF to get rid of warnings
    * add dispatch mechanisms for double+float and double+float+half
  - Michael Carilli authored
  - Michael Carilli authored
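The ptrblck commit above extends ATen-style dispatch macros, which select a concrete kernel instantiation from a tensor's runtime scalar type. As a rough illustration only, here is a Python analogue of that mechanism (all names below are hypothetical; the real macros are C++ preprocessor constructs in ATen):

```python
# Rough Python analogue (hypothetical helper names) of dtype-based kernel
# dispatch: pick a concrete kernel from a runtime scalar type, the way
# AT_DISPATCH_FLOATING_TYPES_AND_HALF instantiates a templated lambda.
import struct

def _round_half(v):
    # Model fp16 storage by round-tripping through IEEE 754 binary16.
    return struct.unpack("e", struct.pack("e", v))[0]

def scale_float(xs, alpha):
    # fp32/fp64 kernel: plain arithmetic.
    return [alpha * x for x in xs]

def scale_half(xs, alpha):
    # fp16 kernel: compute in full precision, round the stored result.
    return [_round_half(alpha * x) for x in xs]

# One table per macro variant; the commit adds the last two combinations.
DISPATCH_FLOAT_AND_HALF = {"float": scale_float, "half": scale_half}
DISPATCH_DOUBLE_AND_FLOAT = {"double": scale_float, "float": scale_float}
DISPATCH_DOUBLE_FLOAT_AND_HALF = {"double": scale_float,
                                  "float": scale_float,
                                  "half": scale_half}

def dispatch(table, dtype, xs, alpha):
    # Mirrors the macro's switch statement: an unsupported dtype is a
    # hard error rather than a silent fallback.
    if dtype not in table:
        raise TypeError(f"unsupported dtype for this op: {dtype}")
    return table[dtype](xs, alpha)
```

Calling `dispatch(DISPATCH_DOUBLE_AND_FLOAT, "half", ...)` raises, matching how a macro variant without a half case cannot dispatch fp16 tensors.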
- 25 Apr, 2019 1 commit
  - Michael Carilli authored
- 22 Apr, 2019 1 commit
  - Michael Carilli authored
- 18 Apr, 2019 1 commit
  - Michael Carilli authored
- 12 Apr, 2019 1 commit
  - Michael Carilli authored
- 10 Apr, 2019 2 commits
  - Michael Carilli authored
  - Michael Carilli authored
- 09 Apr, 2019 1 commit
  - Michael Carilli authored
- 08 Apr, 2019 1 commit
  - Michael Carilli authored
- 04 Apr, 2019 1 commit
  - mcarilli authored
    * Refactor to allow more flexible treatment of multiple optimizers/models/losses
    * Adding _process_optimizers.py
    * Created L0 tests (now passing).
    * fix: minor print typo (#234)
    * make L1 results easier to read
    * L0 multiple model/optimizer/loss test fleshed out
    * Adding test that master params remain synced across distributed processes
    * Docstring updates
    * Docstring updates
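The 04 Apr refactor is about letting amp manage several models, optimizers, and losses at once. As a toy sketch only (hypothetical classes, heavily simplified; nothing here is the apex.amp API), the core idea of per-loss gradient scaling across multiple optimizers looks roughly like:

```python
# Toy sketch (hypothetical, simplified) of per-loss gradient scaling
# across multiple optimizers, the pattern the refactor enables.

class ToyOptimizer:
    def __init__(self, params, lr=0.1):
        self.params = params          # each param is a [value, grad] pair
        self.lr = lr

    def step(self):
        for p in self.params:
            p[0] -= self.lr * p[1]

class LossScaler:
    """One scaler per loss: scale before backward, unscale before step."""
    def __init__(self, scale):
        self.scale = scale

    def unscale_(self, params):
        for p in params:
            p[1] /= self.scale

# Two optimizers over disjoint parameter groups, each with its own scaler.
p1, p2 = [1.0, 0.0], [2.0, 0.0]
opt_a, opt_b = ToyOptimizer([p1]), ToyOptimizer([p2])
scalers = {opt_a: LossScaler(1024.0), opt_b: LossScaler(512.0)}

# Stand-in for backward(): write scaled gradients by hand.
p1[1] = 0.5 * 1024.0     # true gradient 0.5 under scale 1024
p2[1] = 0.25 * 512.0     # true gradient 0.25 under scale 512

for opt in (opt_a, opt_b):
    scalers[opt].unscale_(opt.params)
    opt.step()
```

After the loop, p1[0] is 1.0 - 0.1 * 0.5 = 0.95 and p2[0] is 2.0 - 0.1 * 0.25 = 1.975: each loss's scale cancels out exactly, which is what the "master params remain synced" test above checks in the distributed setting.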
- 21 Mar, 2019 2 commits
  - Syed Tousif Ahmed authored
  - Syed Tousif Ahmed authored
- 19 Mar, 2019 2 commits
  - Michael Carilli authored
  - Michael Carilli authored
- 15 Mar, 2019 1 commit
  - Michael Carilli authored
- 12 Mar, 2019 1 commit
  - Michael Carilli authored
- 11 Mar, 2019 2 commits
  - Simon Layton authored
  - Simon Layton authored
    * Fix dispatch where we have a parameter group with multiple combinations of types
    * Optionally apply weight decay after momentum
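The "weight decay after momentum" line refers to where L2 decay enters the update. A plain-Python sketch of the two orderings for a single scalar parameter (standard SGD-with-momentum equations; an illustration, not the fused CUDA kernel):

```python
# Sketch of SGD-with-momentum with weight decay applied either before the
# momentum buffer update (classic torch.optim.SGD coupling) or after it
# (the optional behavior this commit adds). Illustrative only.

def sgd_step(p, grad, buf, lr, momentum, wd, wd_after_momentum):
    """One update on scalar param p; returns (new_p, new_buf)."""
    if not wd_after_momentum:
        grad = grad + wd * p          # decay folded into the gradient
    buf = momentum * buf + grad       # momentum buffer update
    update = buf
    if wd_after_momentum:
        update = update + wd * p      # decay added after momentum
    return p - lr * update, buf

# Two identical steps under each ordering: results coincide on the first
# step but diverge once the momentum buffer is nonzero, because only the
# "before" variant lets momentum accumulate the decay term.
pa, ba = 1.0, 0.0
pb, bb = 1.0, 0.0
for _ in range(2):
    pa, ba = sgd_step(pa, 0.5, ba, lr=0.1, momentum=0.9, wd=0.01,
                      wd_after_momentum=False)
    pb, bb = sgd_step(pb, 0.5, bb, lr=0.1, momentum=0.9, wd=0.01,
                      wd_after_momentum=True)
```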
- 10 Mar, 2019 2 commits
  - Natalia Gimelshein authored
  - Michael Carilli authored
- 09 Mar, 2019 1 commit
  - Simon Layton authored
- 08 Mar, 2019 5 commits
  - Simon Layton authored
  - Simon Layton authored
    * Incorrect types used in a few places
  - Simon Layton authored
    * Only support the 4 specific cases we care about
    * Remove more general set of switch statements
  - Simon Layton authored
    * Fuse in fp16 gradient -> fp32 convert
    * Additional option fp16 weight copy written out
  - Simon Layton authored
    * Initial implementation, all fp32
    * Tested against torch.optim.sgd
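The 08 Mar commits above trace the fused optimizer's evolution: an all-fp32 implementation checked against torch.optim.sgd, then fusing the fp16-to-fp32 gradient conversion and an optional fp16 weight copy into the same pass. A pure-Python illustration of that final shape (hypothetical names; the real code is a CUDA kernel):

```python
# Illustration (hypothetical, not the apex kernel) of a fused SGD step:
# consume fp16 gradients, update fp32 master weights, and optionally
# write an fp16 copy of the new weights in the same pass.
import struct

def as_half(v):
    # Model fp16 storage via an IEEE 754 binary16 round-trip.
    return struct.unpack("e", struct.pack("e", v))[0]

def fused_sgd_step(master_w, half_grads, lr, write_half_copy=False):
    half_copy = []
    for i, g16 in enumerate(half_grads):
        g32 = float(g16)              # fused fp16 -> fp32 convert
        master_w[i] -= lr * g32       # fp32 math on master weights
        if write_half_copy:           # optional fp16 weight copy out
            half_copy.append(as_half(master_w[i]))
    return half_copy

master = [1.0, 2.0]
half_w = fused_sgd_step(master, [as_half(0.5), as_half(0.25)],
                        lr=0.1, write_half_copy=True)
```

Fusing the convert and the copy into the update loop is the point: the gradients and weights are touched once instead of in three separate kernels.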
- 03 Mar, 2019 1 commit
  - Marek Kolodziej authored
- 28 Feb, 2019 1 commit
  - Michael Carilli authored
- 24 Feb, 2019 1 commit
  - Michael Carilli authored
- 22 Feb, 2019 1 commit
  - Michael Carilli authored
    * Allow multi-tensor unscale to handle FP16 output, so it can also be used for copy-scatter.
    * Rename some options.
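The 22 Feb commit generalizes the multi-tensor unscale kernel so it can write FP16 output, letting one routine serve as both gradient unscaling and a copy-scatter. A simplified sketch of the idea (hypothetical names; apex does this with a multi_tensor_apply CUDA kernel over chunked tensor lists):

```python
# Sketch (hypothetical) of multi-tensor unscale: multiply every gradient
# in a list of "tensors" by 1/scale, flag infs/NaNs, and optionally emit
# FP16 output so the same pass doubles as a scaled copy.
import math
import struct

def as_half(v):
    # Model fp16 storage via an IEEE 754 binary16 round-trip.
    return struct.unpack("e", struct.pack("e", v))[0]

def multi_tensor_unscale(grad_lists, inv_scale, out_half=False):
    """Return (unscaled tensors, found_inf) for a list of tensors."""
    found_inf = False
    outputs = []
    for grads in grad_lists:
        out = []
        for g in grads:
            v = g * inv_scale
            if math.isinf(v) or math.isnan(v):
                found_inf = True      # caller should skip this step
            out.append(as_half(v) if out_half else v)
        outputs.append(out)
    return outputs, found_inf
```

When found_inf comes back true, dynamic loss scaling skips the optimizer step and lowers the scale, the standard mixed-precision recovery path.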
- 19 Feb, 2019 1 commit
  - Michael Carilli authored
- 13 Feb, 2019 1 commit
  - Michael Carilli authored
- 11 Feb, 2019 1 commit
  - Michael Carilli authored
- 08 Feb, 2019 1 commit
  - Michael Carilli authored
- 06 Feb, 2019 2 commits
  - Michael Carilli authored
  - Michael Carilli authored
- 05 Feb, 2019 1 commit
  - Michael Carilli authored