chenpangpang / transformers

Commit 8bf6d28c (unverified), authored Apr 05, 2022 by Francesco Saverio Zuppichini, committed by GitHub on Apr 05, 2022

made _load_pretrained_model_low_mem static + bug fix (#16548)

Parent: 02214cb3

Showing 1 changed file with 3 additions and 3 deletions (+3 -3)
src/transformers/modeling_utils.py
@@ -2103,8 +2103,8 @@ class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin, PushToHubMix
         return retrieved_modules

-    @classmethod
-    def _load_pretrained_model_low_mem(cls, model, loaded_state_dict_keys, resolved_archive_file):
+    @staticmethod
+    def _load_pretrained_model_low_mem(model, loaded_state_dict_keys, resolved_archive_file):
         """
         This is an experimental function that loads the model using ~1.x model size CPU memory
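The first hunk swaps `@classmethod` for `@staticmethod`, which fits because the method never used `cls`. A minimal sketch of the difference between the two decorators (the `Loader` class and method names here are hypothetical, for illustration only):

```python
class Loader:
    @classmethod
    def from_cls(cls, x):
        # a classmethod implicitly receives the class as its first argument
        return (cls.__name__, x)

    @staticmethod
    def from_static(x):
        # a staticmethod receives no implicit first argument at all
        return x

print(Loader.from_cls(1))     # ('Loader', 1)
print(Loader.from_static(2))  # 2
```

Since `_load_pretrained_model_low_mem` never touched `cls`, making it static removes an unused parameter without changing behavior.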
@@ -2159,7 +2159,7 @@ class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin, PushToHubMix
             resolved_archive_file = [resolved_archive_file]
         for archive_file in resolved_archive_file:
-            state_dict = torch.load(resolved_archive_file, map_location="cpu")
+            state_dict = torch.load(archive_file, map_location="cpu")

             # materialize state_dict entries one by one on CPU
             for k in loaded_state_dict_keys:
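The bug fixed in the second hunk is easy to reproduce in miniature: the loop iterates over a list of shard files, but the old code passed the list variable (`resolved_archive_file`) to `torch.load` instead of the loop element (`archive_file`). A minimal sketch of the corrected sharded-loading pattern, using `pickle` files as a stand-in for torch checkpoints (the helper name and shard file names are hypothetical):

```python
import os
import pickle
import tempfile

def load_sharded_state_dict(resolved_archive_file):
    """Merge state dicts from one or more shard files.

    Mirrors the corrected loop: a single path is first wrapped in a
    list, then each shard is loaded via the loop variable
    (archive_file), not the list itself (resolved_archive_file),
    which is the bug this commit fixes.
    """
    if not isinstance(resolved_archive_file, list):
        resolved_archive_file = [resolved_archive_file]

    state_dict = {}
    for archive_file in resolved_archive_file:
        with open(archive_file, "rb") as f:
            # the real code uses torch.load(archive_file, map_location="cpu")
            state_dict.update(pickle.load(f))
    return state_dict

# demo: write two shards, then merge them into one state dict
tmp = tempfile.mkdtemp()
shards = []
for i, part in enumerate([{"w1": 1}, {"w2": 2}]):
    path = os.path.join(tmp, f"shard_{i}.bin")
    with open(path, "wb") as f:
        pickle.dump(part, f)
    shards.append(path)

merged = load_sharded_state_dict(shards)
print(merged)  # {'w1': 1, 'w2': 2}
```

With the pre-fix version, every iteration would attempt to open the list object itself, which fails as soon as a checkpoint is sharded across multiple files.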