OpenDAS / fairscale · Commits

Commit b666d6a4 (unverified)
Authored Feb 12, 2021 by Benjamin Lefaudeux; committed by GitHub on Feb 12, 2021
Revert "[fix] oss dict load (#383)" (#384)
This reverts commit
8be9d930
.
parent
8be9d930
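For context, the code path this commit reverts lives in OSS.load_state_dict. Below is a minimal sketch of how that path is typically exercised; it is a hedged illustration, not part of the commit, assuming fairscale's OSS API of this period (consolidate_state_dict, state_dict, load_state_dict), and the single-process gloo group exists only so the snippet can run standalone:

    import os
    import torch
    import torch.distributed as dist
    from fairscale.optim import OSS

    # Single-process "distributed" group so the sketch runs standalone.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    model = torch.nn.Linear(4, 4)
    optimizer = OSS(model.parameters(), optim=torch.optim.SGD, lr=0.1, momentum=0.9)

    # Take one step so the wrapped optimizer actually has sharded state to save.
    model(torch.randn(2, 4)).sum().backward()
    optimizer.step()

    # Every rank consolidates; the recipient rank then holds the full dict.
    optimizer.consolidate_state_dict(recipient_rank=0)
    state = optimizer.state_dict()

    # load_state_dict() is the code path that #383 patched and this commit restores.
    optimizer.load_state_dict(state)

In multi-rank use, load_state_dict repopulates only the locally owned shard of the state, which is the role of the param_to_rank check visible in the diff below.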
Showing 1 changed file with 4 additions and 4 deletions:

fairscale/optim/oss.py (+4 -4)
fairscale/optim/oss.py

@@ -391,16 +391,16 @@ class OSS(Optimizer):
         # NOTE: PyTorch 1.5 does not index linearly but with the id(params) at saving time
         # we work around that here by using the fact that the params are ordered as in the param_groups
         pytorch15_index_redirect = {k: i for i, k in enumerate(state_dict["state"].keys())}

-        for key, value in state_dict["state"].items():
-            param = self.index_to_param[pytorch15_index_redirect[key]]
+        for i_param, (key, value) in enumerate(state_dict["state"].items()):
+            param = self.index_to_param[i_param]

             # Populate the sharded optimizer state on the fly
             if self.param_to_rank[param] != self.rank:
                 state_dict["state"][key] = None

             else:
-                if key in self.index_to_param:
-                    param = self.index_to_param[i_param]
+                # Only add this state to the sharded optimizer if it owns this param
+                for pg in self.optim.param_groups:
...
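To make the two indexing strategies concrete, here is a small standalone sketch; it is hypothetical and not from the commit, with index_to_param and the toy state layout standing in for the real OSS attributes. It contrasts the positional lookup this revert restores with the key-redirect lookup from the reverted fix:

    # A toy optimizer state dict. Under PyTorch 1.5, "state" could be keyed by
    # id(param) rather than 0..N-1, which is what pytorch15_index_redirect works around.
    state = {140234: {"step": 1}, 140567: {"step": 2}}
    index_to_param = {0: "param_a", 1: "param_b"}  # index -> parameter (tensors in reality)

    # Lookup restored by this revert: trust that items() follow param_groups order.
    for i_param, (key, value) in enumerate(state.items()):
        param = index_to_param[i_param]
        print(f"enumerate lookup: key={key} -> {param}")

    # Lookup from the reverted fix (#383): remap arbitrary keys to linear positions.
    pytorch15_index_redirect = {k: i for i, k in enumerate(state.keys())}
    for key, value in state.items():
        param = index_to_param[pytorch15_index_redirect[key]]
        print(f"redirect lookup: key={key} -> {param}")

Built immediately before a loop over the same dict, the redirect maps each key to its position, so both lookups select the same parameter in this simple case; the sketch only shows the mechanics of each lookup, not the failure mode #383 targeted.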