OpenDAS / fairscale · Commit 8dc2030b (unverified)
Authored Mar 15, 2021 by Benjamin Lefaudeux, committed by GitHub on Mar 15, 2021

[fix] compute the grad norm in fp32 (#520)

Parent: 82986ca0
Showing 1 changed file with 2 additions and 1 deletion.
fairscale/optim/utils.py (+2, -1)

@@ -120,5 +120,6 @@ def calc_grad_norm(parameters: List[torch.nn.Parameter], p: float) -> torch.Tensor:
     if p == inf:
         local_norm = max(par.grad.detach().abs().max() for par in parameters)  # type: ignore
     else:
-        local_norm = torch.norm(torch.stack([torch.norm(par.grad.detach(), p) for par in parameters]), p)  # type: ignore
+        # Compute the norm in full precision no matter what
+        local_norm = torch.norm(torch.stack([torch.norm(par.grad.detach(), p, dtype=torch.float32) for par in parameters]), p).to(dtype=parameters[0].dtype)  # type: ignore
     return local_norm
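Why the fix matters: in mixed-precision training the gradients are fp16, and an L2 norm can exceed the largest finite fp16 value (about 65504) even when every individual gradient entry is representable. Passing dtype=torch.float32 to the per-parameter torch.norm keeps the reduction in full precision, and the final .to(dtype=parameters[0].dtype) casts the result back so callers see the gradients' dtype. Below is a minimal sketch of the failure mode the commit addresses; it is not part of the commit, and the tensor shape and fill value are invented for illustration:

import torch

# Hypothetical fp16 gradient whose true L2 norm, 2048 * sqrt(4096) = 131072,
# exceeds the largest finite fp16 value (~65504).
grads = [torch.full((4096,), 2048.0, dtype=torch.float16)]

# Before the fix: per-parameter norms are produced in fp16 and overflow to inf.
old = torch.norm(torch.stack([torch.norm(g.detach(), 2) for g in grads]), 2)
print(old)  # tensor(inf, dtype=torch.float16)

# After the fix: each per-parameter norm is computed in fp32, so it stays finite.
new = torch.norm(torch.stack([torch.norm(g.detach(), 2, dtype=torch.float32) for g in grads]), 2)
print(new)  # tensor(131072.)

Note that a norm this large would still overflow if cast straight back to fp16 at the end; in practice a gradient norm of this magnitude typically triggers clipping or a loss-scale backoff before the final cast matters.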