- 26 Apr, 2019 (3 commits)
  - Michael Carilli authored
  - Michael Carilli authored
  - Michael Carilli authored
- 25 Apr, 2019 (3 commits)
  - Michael Carilli authored
  - Michael Carilli authored
  - Michael Carilli authored
- 24 Apr, 2019 (4 commits)
  - Michael Carilli authored
  - Michael Carilli authored
  - Michael Carilli authored
  - Michael Carilli authored
- 23 Apr, 2019 (1 commit)
  - Michael Carilli authored
- 22 Apr, 2019 (1 commit)
  - Michael Carilli authored
- 18 Apr, 2019 (4 commits)
  - Michael Carilli authored
  - Michael Carilli authored
  - ptrblck authored
  - Glenn Jocher authored
- 17 Apr, 2019 (1 commit)
  - Michael Carilli authored
- 16 Apr, 2019 (5 commits)
  - Michael Carilli authored
  - Michael Carilli authored
  - Michael Carilli authored
  - Michael Carilli authored
  - Michael Carilli authored
- 15 Apr, 2019 (3 commits)
  - Michael Carilli authored
  - Michael Carilli authored
  - Michael Carilli authored
- 12 Apr, 2019 (1 commit)
  - Michael Carilli authored
- 11 Apr, 2019 (7 commits)
  - Michael Carilli authored
  - Michael Carilli authored
  - Michael Carilli authored
  - Michael Carilli authored
  - henrymai authored: These functions (e.g. `torch.{conv*, prelu}`) are mainly used via their `torch.nn` wrapper layers, which hold the weights and pass them into these lower-level functions as arguments in their `forward()` methods. The `torch.conv*` functions are already on the `FP16_CASTS` list, following amp's philosophy of casting the arguments rather than the model/layer weights. Conceptually, `torch.prelu` is the same case as `torch.conv*`: its weight parameter is passed in from its wrapper layer, `torch.nn.PReLU`.
  - Michael Carilli authored
  - Michael Carilli authored
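The casting approach the commit message above describes — patching the low-level functional entry points so the weights that `torch.nn` wrapper layers pass in `forward()` arrive already cast, instead of converting the layers' stored weights — can be sketched without torch. Everything below (the toy `Tensor`, `cast_args_to_half`, and `prelu`) is an illustrative stand-in, not amp's actual API:

```python
import functools

# Toy stand-in for a tensor carrying a dtype tag (illustration only;
# real amp wraps torch functions and casts torch tensors).
class Tensor:
    def __init__(self, data, dtype="fp32"):
        self.data = data
        self.dtype = dtype

    def half(self):
        return Tensor(self.data, "fp16")

def cast_args_to_half(fn):
    """Wrap a low-level function so every Tensor argument arrives as fp16.

    This mirrors the philosophy in the commit message: cast the arguments
    at the functional entry point (torch.conv2d, torch.prelu, ...) rather
    than converting the wrapper layer's stored weights.
    """
    @functools.wraps(fn)
    def wrapper(*args):
        cast = [a.half() if isinstance(a, Tensor) else a for a in args]
        return fn(*cast)
    return wrapper

# A toy "torch.prelu": x if x > 0, else weight * x, elementwise.
# Its weight is passed in by the caller, just as nn.PReLU.forward() does.
def prelu(x, weight):
    assert x.dtype == weight.dtype == "fp16"  # the wrapper applied the casts
    return Tensor([v if v > 0 else w * v
                   for v, w in zip(x.data, weight.data)], x.dtype)

prelu = cast_args_to_half(prelu)  # "register" prelu for fp16 casting

out = prelu(Tensor([1.0, -2.0]), Tensor([0.25, 0.25]))
print(out.dtype, out.data)  # fp16 [1.0, -0.5]
```

Because the cast happens inside the wrapper, callers (the `nn` layers) need no changes, which is why adding `torch.prelu` to the list is conceptually identical to the already-handled `torch.conv*` case.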
- 10 Apr, 2019 (5 commits)
  - ngimel authored: quick fix: make FusedLayerNorm compatible with cpu
  - Lam Dang authored
  - Lam Dang authored
  - Michael Carilli authored
  - Michael Carilli authored
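A fix like the `FusedLayerNorm` commit above typically works by dispatching on the input's device: use the fused CUDA kernel for GPU tensors and fall back to a plain implementation for CPU tensors. A minimal sketch of that dispatch pattern, with a pure-Python reference layer norm — all names here are illustrative, not apex's actual API:

```python
import math

def layer_norm_reference(xs, eps=1e-5):
    """Plain layer norm over a 1-D list: (x - mean) / sqrt(var + eps)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return [(x - mean) / math.sqrt(var + eps) for x in xs]

def fused_layer_norm(xs, on_gpu):
    """Dispatch on device: fused CUDA kernel for GPU inputs, reference
    implementation for CPU inputs (the compatibility idea in the commit)."""
    if on_gpu:
        # Stand-in for launching the fused CUDA kernel.
        raise NotImplementedError("fused CUDA kernel placeholder")
    return layer_norm_reference(xs)

out = fused_layer_norm([1.0, 2.0, 3.0], on_gpu=False)
print(out)  # roughly [-1.2247, 0.0, 1.2247]
```

The fallback gives identical semantics on CPU at the cost of the fusion speedup, which is all a "quick fix" for compatibility needs.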
- 09 Apr, 2019 (1 commit)
  - Michael Carilli authored
- 08 Apr, 2019 (1 commit)
  - Michael Carilli authored