OpenDAS / apex

Commit 8e5699e4, authored Apr 02, 2020 by Kexin Yu
Parent: 9b96c824

more debugging
Showing 1 changed file with 8 additions and 6 deletions
apex/contrib/optimizers/fused_lamb.py

@@ -118,13 +118,15 @@ class FusedLAMB(torch.optim.Optimizer):
                 raise RuntimeError('FusedLAMB only support fp16 and fp32.')

+        print("====after collect")
+        print("====g_all_32:", g_all_32)
+        print("====g_all_16:", g_all_16)
         # compute grad norm for two lists
-        g_norm_32, _ = multi_tensor_applier(self.multi_tensor_l2norm,
-                                            self._dummy_overflow_buf,
-                                            [g_all_32], False)
-        g_norm_16, _ = multi_tensor_applier(self.multi_tensor_l2norm,
-                                            self._dummy_overflow_buf,
-                                            [g_all_16], False)
+        g_norm_32, norm_per_tensor = multi_tensor_applier(
+            self.multi_tensor_l2norm, self._dummy_overflow_buf, [g_all_32], True)
+        g_norm_16, norm_per_tensor = multi_tensor_applier(
+            self.multi_tensor_l2norm, self._dummy_overflow_buf, [g_all_16], True)
+        print("====after multi_tensor_l2norm")
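For context, the substantive change here flips the trailing per_tensor argument of the multi_tensor_l2norm calls from False to True, so the applier returns per-tensor norms alongside the global L2 norm, and adds print statements around the call for debugging. Below is a minimal standalone sketch of the same call pattern, assuming apex is installed with its amp_C CUDA extension and a CUDA device is available; the grads list is made-up illustration data, not from the commit.

import torch
import amp_C
from apex.multi_tensor_apply import multi_tensor_applier

# The kernel sets this flag instead of raising if it encounters an overflow.
overflow_buf = torch.zeros(1, dtype=torch.int, device='cuda')

# Stand-in gradient tensors (illustration only).
grads = [torch.randn(1024, device='cuda') for _ in range(4)]

# per_tensor=False (the deleted lines): only the global norm is meaningful,
# so the second return value is discarded.
g_norm, _ = multi_tensor_applier(
    amp_C.multi_tensor_l2norm, overflow_buf, [grads], False)

# per_tensor=True (the added lines): also returns one norm per input tensor.
g_norm, norm_per_tensor = multi_tensor_applier(
    amp_C.multi_tensor_l2norm, overflow_buf, [grads], True)

print(g_norm.item())          # global L2 norm over all four tensors
print(norm_per_tensor.shape)  # one entry per input tensor

Fused multi-tensor kernels like this one traverse the whole tensor list in a single launch, so requesting the per-tensor norms in the same call avoids a second pass over the gradients.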