[PyTorch] Use FP16 tols for distributed tests with TF32 compute (#1831)
* Use FP16 tols for tests with TF32

Signed-off-by: Tim Moon <tmoon@nvidia.com>

* Use uniform init instead of constant init

Signed-off-by: Tim Moon <tmoon@nvidia.com>

* Revert constant init test, but reduce value

Signed-off-by: Tim Moon <tmoon@nvidia.com>

---------

Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: Tim Moon <4406448+timmoon10@users.noreply.github.com>