chenpangpang / transformers · Commits · f67dac97

Unverified commit f67dac97, authored May 25, 2023 by Younes Belkada, committed by GitHub May 25, 2023.
[`Nllb-Moe`] Fix nllb moe accelerate issue (#23758)
fix nllb moe accelerate issue
parent d685e330

Showing 1 changed file with 1 addition and 1 deletion.
src/transformers/models/nllb_moe/modeling_nllb_moe.py (+1, -1)
@@ -856,7 +856,7 @@ class NllbMoePreTrainedModel(PreTrainedModel):
     config_class = NllbMoeConfig
     base_model_prefix = "model"
     supports_gradient_checkpointing = True
-    _no_split_modules = ["NllbMoeAttention"]
+    _no_split_modules = ["NllbMoeEncoderLayer", "NllbMoeDecoderLayer"]

     def _init_weights(self, module):
         """Initialize the weights"""
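For context on why this one-line change matters: `_no_split_modules` tells accelerate which module classes must be kept whole on a single device when it computes a device map (e.g. with `device_map="auto"`). If the list names only the attention submodule, a device boundary can fall inside an encoder or decoder layer, where residual connections then combine tensors living on different devices. Below is a toy first-fit sketch of that idea; it is NOT accelerate's actual algorithm, and all class/function names in it are illustrative:

```python
# Toy sketch (not accelerate's real algorithm): first-fit placement of a
# module tree onto devices. A subtree whose class is in `no_split_classes`
# is treated as one atomic unit, so it can never straddle two devices.

def subtree_size(module):
    """module = (name, class_name, own_size, children); total size of subtree."""
    _, _, size, children = module
    return size + sum(subtree_size(c) for c in children)

def atomic_units(module, no_split_classes):
    """Yield (name, size) placement units in forward order.

    A no-split subtree (or a leaf) is one unit; otherwise recurse so the
    children may land on different devices.
    """
    name, cls, _, children = module
    if cls in no_split_classes or not children:
        yield (name, subtree_size(module))
    else:
        for child in children:
            yield from atomic_units(child, no_split_classes)

def plan_device_map(root, no_split_classes, capacity):
    """Greedy first-fit: open a new device when the current one is full."""
    placement, device, used = {}, 0, 0
    for name, size in atomic_units(root, no_split_classes):
        if used + size > capacity and used > 0:
            device, used = device + 1, 0
        placement[name] = device
        used += size
    return placement

# Hypothetical 2-layer encoder: each layer = attention (3) + feed-forward (3).
model = ("model", "Model", 0, [
    ("layer0", "EncoderLayer", 0, [
        ("layer0.attn", "Attention", 3, []),
        ("layer0.ffn", "FeedForward", 3, []),
    ]),
    ("layer1", "EncoderLayer", 0, [
        ("layer1.attn", "Attention", 3, []),
        ("layer1.ffn", "FeedForward", 3, []),
    ]),
])

torn = plan_device_map(model, set(), capacity=4)
print(torn["layer0.attn"], torn["layer0.ffn"])  # 0 1 -> layer torn across devices

whole = plan_device_map(model, {"EncoderLayer"}, capacity=8)
print(whole["layer0"], whole["layer1"])         # 0 1 -> each layer stays intact
```

With `"EncoderLayer"` in the no-split set, each layer lands whole on one device; with an empty set, the attention and feed-forward halves of the same layer can end up on different devices, which is the failure mode switching `_no_split_modules` to the full encoder/decoder layer classes avoids.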