tsoc / superbenchmark · Commits

Commit 1a86583b (unverified), authored Sep 28, 2021 by guoshzhao, committed by GitHub on Sep 28, 2021

Benchmarks: Fix bug - Fix bug when set force_fp32 option. (#214)

**Description** Fix typo when setting the force_fp32 option.

Parent: f9442456
Showing 1 changed file with 2 additions and 2 deletions:

superbench/benchmarks/model_benchmarks/pytorch_base.py (+2, -2)
@@ -38,8 +38,8 @@ def _set_force_fp32(self):
         On Ampere or newer GPUs, pytorch and tensorflow will use TF32 instead of FP32 by default.
         We can disable TF32 execution by setting force_fp32 as True.
         """
-        torch.backends.cuda.matmul.allow_tf32 = self._args.force_fp32
-        torch.backends.cudnn.allow_tf32 = self._args.force_fp32
+        torch.backends.cuda.matmul.allow_tf32 = not self._args.force_fp32
+        torch.backends.cudnn.allow_tf32 = not self._args.force_fp32

     def _init_distributed_setting(self):
         """Initialize the distributed library and bind the worker to GPU.
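The logic of the fix is easy to check in isolation. The sketch below mirrors the patched assignment using a `SimpleNamespace` stand-in for `torch.backends` (the stub and the standalone `set_force_fp32` function are illustrative, not part of the repository), so the inverted flag can be exercised without a GPU or a PyTorch install:

```python
# Pure-Python sketch of the commit's fix: forcing FP32 must turn TF32 OFF.
# `backends` is a stand-in for torch.backends; only the boolean logic is real.
from types import SimpleNamespace

backends = SimpleNamespace(
    cuda=SimpleNamespace(matmul=SimpleNamespace(allow_tf32=True)),
    cudnn=SimpleNamespace(allow_tf32=True),
)

def set_force_fp32(force_fp32: bool) -> None:
    # Before the fix: allow_tf32 = force_fp32   (bug: forcing FP32 *enabled* TF32)
    # After the fix:  allow_tf32 = not force_fp32
    backends.cuda.matmul.allow_tf32 = not force_fp32
    backends.cudnn.allow_tf32 = not force_fp32

set_force_fp32(True)
print(backends.cuda.matmul.allow_tf32, backends.cudnn.allow_tf32)  # False False
```

With `force_fp32=True` both TF32 switches end up `False`, which is exactly what the original (pre-fix) code got backwards.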