Unverified commit 1a86583b authored by guoshzhao, committed by GitHub

Benchmarks: Fix bug - Fix bug when setting the force_fp32 option. (#214)

**Description**
Fix a typo when setting the force_fp32 option: `allow_tf32` must be assigned the negation of `force_fp32`, since `force_fp32=True` is meant to disable TF32 execution.
parent f9442456
@@ -38,8 +38,8 @@ def _set_force_fp32(self):
         On Ampere or newer GPUs, pytorch and tensorflow will use TF32 instead of FP32 by default.
         We can disable TF32 execution by setting force_fp32 as True.
         """
-        torch.backends.cuda.matmul.allow_tf32 = self._args.force_fp32
-        torch.backends.cudnn.allow_tf32 = self._args.force_fp32
+        torch.backends.cuda.matmul.allow_tf32 = not self._args.force_fp32
+        torch.backends.cudnn.allow_tf32 = not self._args.force_fp32
 
     def _init_distributed_setting(self):
         """Initialize the distributed library and bind the worker to GPU.
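For context, a minimal standalone sketch of the corrected logic; `set_force_fp32` here is a hypothetical wrapper for illustration, not the benchmark's actual method, which reads the flag from parsed arguments instead:

    import torch

    def set_force_fp32(force_fp32):
        """Disable TF32 execution when force_fp32 is True.

        On Ampere or newer GPUs, PyTorch uses TF32 for CUDA matmul and
        cuDNN by default; forcing full FP32 therefore means turning TF32
        off, which is why the flag is negated.
        """
        torch.backends.cuda.matmul.allow_tf32 = not force_fp32
        torch.backends.cudnn.allow_tf32 = not force_fp32

    # With force_fp32=True, both TF32 switches are off, so float32
    # matmuls run at full FP32 precision rather than TF32.
    set_force_fp32(True)
    assert torch.backends.cuda.matmul.allow_tf32 is False
    assert torch.backends.cudnn.allow_tf32 is False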