OpenDAS / apex · Commits

Commit 68364b49, authored Dec 14, 2021 by Hubert Lu
Parent: 67ded2e2

Conditionally define autocast_dtypes for different torch versions
Showing 1 changed file, tests/L0/run_fused_layer_norm/test_fused_layer_norm.py, with 5 additions and 2 deletions (+5 / -2):
```diff
@@ -75,8 +75,11 @@ def _prep_inputs(batch_size, normalized_shape, dtype):
     native = fused.clone().to(dtype).requires_grad_(True)
     return native, fused
 
-autocast_dtypes = (torch.half, torch.bfloat16) if torch.cuda.is_bf16_supported() else (torch.half,)
+TORCH_MAJOR, TORCH_MINOR = int(torch.__version__.split('.')[0]), int(torch.__version__.split('.')[1])
+if (TORCH_MAJOR <= 1 and TORCH_MINOR < 10):
+    autocast_dtypes = (torch.half,)
+else:
+    autocast_dtypes = (torch.half, torch.bfloat16) if torch.cuda.is_bf16_supported() else (torch.half,)
 
 class TestAutocastFusedLayerNorm(unittest.TestCase):
```
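The version gate is likely needed because `torch.cuda.is_bf16_supported()` and the `dtype=` argument of `torch.cuda.amp.autocast` only arrived around PyTorch 1.10, so evaluating the old one-liner on an earlier release would fail before any test runs. Below is a minimal sketch of how a test such as `TestAutocastFusedLayerNorm` might consume `autocast_dtypes`; the test body, shapes, and tolerances are illustrative assumptions, not the file's actual code.

```python
import unittest

import torch
import torch.nn.functional as F
from apex.normalization import FusedLayerNorm

# Version-gated dtype tuple, exactly as introduced by this commit.
TORCH_MAJOR, TORCH_MINOR = int(torch.__version__.split('.')[0]), int(torch.__version__.split('.')[1])
if (TORCH_MAJOR <= 1 and TORCH_MINOR < 10):
    autocast_dtypes = (torch.half,)
else:
    autocast_dtypes = (torch.half, torch.bfloat16) if torch.cuda.is_bf16_supported() else (torch.half,)


class TestAutocastFusedLayerNormSketch(unittest.TestCase):
    """Hypothetical consumer of autocast_dtypes, for illustration only."""

    def test_fused_matches_native_under_autocast(self):
        normalized_shape = (32,)
        fused_layer = FusedLayerNorm(normalized_shape, elementwise_affine=False).cuda()
        x = torch.randn(16, 32, device="cuda")
        # fp32 reference, computed outside of autocast.
        expected = F.layer_norm(x, normalized_shape)
        for dtype in autocast_dtypes:
            # torch.cuda.amp.autocast accepts dtype= only on torch >= 1.10,
            # which is exactly what the version guard above protects against.
            with torch.cuda.amp.autocast(dtype=dtype):
                actual = fused_layer(x)
            torch.testing.assert_close(actual.float(), expected, rtol=2e-2, atol=2e-2)
```

Splitting `torch.__version__` on dots works for the usual release strings (`1.9.1`, `1.10.0+cu113`, `1.11.0.dev20211214`); `packaging.version.parse` would be the more robust choice, but the plain split keeps the test free of extra dependencies.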