OpenDAS / apex: Commits

Commit cec08a41
Authored May 11, 2020 by rohithkrn

revert to original

Parent: 3ff2178c
Showing 1 changed file with 1 addition and 1 deletion.
apex/amp/amp.py (+1, -1)

...
@@ -124,7 +124,7 @@ def init(enabled=True, loss_scale="dynamic", patch_type=torch.float16, enable_ca
     # 1.5) Pre-0.4, put the blacklist methods on HalfTensor and whitelist
     # methods on FloatTensor, since they're distinct types.
     if compat.tensor_is_float_tensor():
-        for fn in getattr(tensor_overrides, 'FP16_FUNCS'):
+        for fn in tensor_overrides.FP16_FUNCS:
             wrap.cached_cast(torch.cuda.FloatTensor, fn, utils.maybe_half,
                              handle, try_caching=True, verbose=verbose)
         for fn in tensor_overrides.FP32_FUNCS:
...
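The one-line change swaps a dynamic getattr lookup for a direct attribute access; when the attribute exists, the two spellings resolve to the same object, so the revert restores the original, simpler form without changing behavior. A minimal sketch of that equivalence, using a hypothetical stand-in object rather than apex's real tensor_overrides module:

from types import SimpleNamespace

# Hypothetical stand-in for apex's tensor_overrides module; the real module
# exposes an FP16_FUNCS list of method names to wrap for half precision.
tensor_overrides = SimpleNamespace(FP16_FUNCS=["mm", "addmm", "matmul"])

# getattr with a constant name and plain attribute access return the same
# object, so iterating either way visits the same function names.
assert getattr(tensor_overrides, 'FP16_FUNCS') is tensor_overrides.FP16_FUNCS

for fn in tensor_overrides.FP16_FUNCS:  # the form restored by this commit
    print(fn)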