chenpangpang / transformers · Commits

Commit 1392a686 (unverified), authored Jul 24, 2024 by Raushan Turganbay, committed via GitHub on Jul 24, 2024

Fix resize embedding with Deepspeed (#32192)

fix resize when deepspeed

Parent: 8d2534c4
Changes: 1 changed file with 16 additions and 6 deletions

src/transformers/modeling_utils.py (+16, -6)
...
@@ -2131,13 +2131,23 @@ class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin, PushToHubMix
         # Replace weights in old_embeddings and return to maintain the same embedding type.
         # This ensures correct functionality when a Custom Embedding class is passed as input.
         # The input and output embedding types remain consistent. (c.f. https://github.com/huggingface/transformers/pull/31979)
-        old_embeddings.weight.data = new_embeddings.weight.data
-        old_embeddings.num_embeddings = new_embeddings.weight.data.shape[0]
+        if is_deepspeed_zero3_enabled() and not is_quantized:
+            import deepspeed
+
+            params = [old_embeddings.weight, new_embeddings.weight]
+            with deepspeed.zero.GatheredParameters(params, modifier_rank=0):
+                old_embeddings.weight.data = new_embeddings.weight.data
+                old_embeddings.num_embeddings = new_embeddings.weight.data.shape[0]
 
-        # If the new number of tokens is smaller than the original `padding_idx`, the `padding_idx`
-        # will be set to `None` in the resized embeddings.
-        if old_embeddings.padding_idx is not None and (new_num_tokens - 1) < old_embeddings.padding_idx:
-            old_embeddings.padding_idx = None
+                # If the new number of tokens is smaller than the original `padding_idx`, the `padding_idx`
+                # will be set to `None` in the resized embeddings.
+                if old_embeddings.padding_idx is not None and (new_num_tokens - 1) < old_embeddings.padding_idx:
+                    old_embeddings.padding_idx = None
+        else:
+            old_embeddings.weight.data = new_embeddings.weight.data
+            old_embeddings.num_embeddings = new_embeddings.weight.data.shape[0]
+
+            if old_embeddings.padding_idx is not None and (new_num_tokens - 1) < old_embeddings.padding_idx:
+                old_embeddings.padding_idx = None
 
         return old_embeddings
...
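The `padding_idx` rule in the diff above can be sketched in plain Python. This is a minimal, framework-free illustration; `resize_padding_idx` is a hypothetical helper written for this sketch, not a function in transformers. After a resize, the largest valid token index is `new_num_tokens - 1`, so a padding index beyond that no longer exists and is dropped:

```python
def resize_padding_idx(padding_idx, new_num_tokens):
    """Mirror the padding_idx handling above: if the resized vocabulary
    no longer contains the padding index (the largest valid index is
    new_num_tokens - 1), drop it by returning None."""
    if padding_idx is not None and (new_num_tokens - 1) < padding_idx:
        return None
    return padding_idx

# Shrinking a 100-token vocabulary to 50 tokens invalidates a padding
# index of 99, but index 0 survives any shrink that keeps at least 1 token.
print(resize_padding_idx(99, 50))   # None
print(resize_padding_idx(0, 50))    # 0
print(resize_padding_idx(49, 50))   # 49
```

Note that the same check appears in both branches of the patched code; the DeepSpeed ZeRO-3 branch only differs in that the weights must first be gathered from their partitioned shards (via `deepspeed.zero.GatheredParameters`) before they can be read and reassigned.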