OpenDAS / ColossalAI · Commits

Commit 54b3ad89 (Unverified)
Authored Sep 27, 2023 by littsk; committed by GitHub on Sep 27, 2023

[hotfix] fix norm type error in zero optimizer (#4795)

Parent: da15fdb9
Showing 1 changed file with 2 additions and 2 deletions.

colossalai/zero/low_level/_utils.py (+2, -2)
...
@@ -221,8 +221,8 @@ def compute_norm(gradients: Tensor, dp_group: ProcessGroup, tp_group: ProcessGro
     else:
         total_norm = 0.0
         for g in gradients:
-            param_norm = g.data.double().norm(2)
-            total_norm += param_norm.item() ** 2
+            param_norm = g.data.double().norm(norm_type)
+            total_norm += param_norm.item() ** norm_type
         # Sum across all model parallel GPUs.
         total_norm_cuda = torch.cuda.FloatTensor([float(total_norm)])
...
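The fix above makes the per-gradient norm and the accumulation exponent both use `norm_type`; before, `norm(2)` and `** 2` were hard-coded, so any caller passing `norm_type != 2` silently got a 2-norm. Below is a minimal standalone sketch of the corrected accumulation logic, assuming plain CPU tensors (the cross-GPU `torch.cuda.FloatTensor` reduction over the process groups is omitted; `total_grad_norm` is a hypothetical helper name, not part of ColossalAI):

```python
import torch

def total_grad_norm(gradients, norm_type=2.0):
    # Corrected logic from the diff: raise each gradient's p-norm to
    # the p-th power, sum, then take the p-th root of the total.
    total = 0.0
    for g in gradients:
        param_norm = g.double().norm(norm_type)
        total += param_norm.item() ** norm_type
    return total ** (1.0 / norm_type)

grads = [torch.tensor([3.0, 4.0]), torch.tensor([12.0])]
# norm_type=2: sqrt(3^2 + 4^2 + 12^2) = sqrt(169)
print(total_grad_norm(grads, norm_type=2.0))  # 13.0
# norm_type=1: 3 + 4 + 12 -- the old hard-coded norm(2) would not
# have respected this and would have mixed 2-norms into the sum.
print(total_grad_norm(grads, norm_type=1.0))  # 19.0
```

With the pre-fix code, `total_grad_norm(grads, 1.0)` would instead compute `(5.0 ** 1) + (12.0 ** 1) = 17.0` from per-tensor 2-norms, which is neither the 1-norm nor the 2-norm of the concatenated gradients.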