OpenDAS / fairscale · Commits

Commit 11beea69 (unverified)
Authored Jan 08, 2021 by Benjamin Lefaudeux; committed by GitHub on Jan 08, 2021
Parent: 3d02f052

[perf][minor] ShardedDDP micro-optim (#296)

* minor, not life changing but removing a dependency on runtime optim
Showing 1 changed file with 2 additions and 2 deletions:

fairscale/nn/data_parallel/sharded_ddp.py  +2 -2
fairscale/nn/data_parallel/sharded_ddp.py (view file @ 11beea69)

@@ -68,7 +68,7 @@ class ShardedDataParallel(nn.Module):
         # Communication related attributes
         self.process_group = process_group if process_group is not None else dist.group.WORLD
-        self.world_size = dist.get_world_size(self.process_group)
+        self.world_size_scaling = 1.0 / dist.get_world_size(self.process_group)  # > 0
         self.reference_global_rank = OSS.get_global_rank(self.process_group, 0)  # picking rank 0 as the reference
         self.rank = dist.get_rank(self.process_group)
         self.global_rank = OSS.get_global_rank(self.process_group, self.rank)

@@ -185,7 +185,7 @@ class ShardedDataParallel(nn.Module):
                 # Make sure that this is not fired twice
                 self._grad_to_be_reduced[index] = False
-                param.grad /= self.world_size
+                param.grad.mul_(self.world_size_scaling)

                 # Future work includes clearing up the buffer if possible
                 def cleanup() -> None:
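The change itself is a two-line hot-path micro-optimization: instead of dividing every parameter's gradient by the world size on each backward pass (`param.grad /= self.world_size`), the reciprocal is computed once at construction and the gradient hook does an in-place multiply. Below is a minimal sketch of the same pattern, assuming a hypothetical GradAverager helper (not fairscale's actual API):

# A minimal sketch of the commit's pattern, with GradAverager as a hypothetical
# stand-in for ShardedDataParallel's gradient hook: divide once at construction,
# multiply in the hot path.
import torch

class GradAverager:
    def __init__(self, world_size: int) -> None:
        assert world_size > 0
        # One float division, paid once at construction time.
        self.world_size_scaling = 1.0 / world_size

    def scale_(self, grad: torch.Tensor) -> None:
        # The hot path fires once per parameter per backward pass. Tensor.mul_
        # mutates `grad` in place and is numerically equivalent (up to float
        # rounding) to `grad /= world_size`, without a division at runtime.
        grad.mul_(self.world_size_scaling)

# Usage: average a gradient as if it had been summed across 4 ranks.
averager = GradAverager(world_size=4)
g = torch.ones(3)
averager.scale_(g)  # g is now tensor([0.2500, 0.2500, 0.2500])

Since a floating-point divide typically costs more than a multiply and the hook sits on the critical path of every backward pass, hoisting the division out of the hot loop is essentially free, which appears to be what the commit message means by removing a runtime dependency.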