chenpangpang / transformers

Commit 3802e786 (unverified)
Authored May 17, 2024 by Darshana S; committed by GitHub on May 17, 2024.

Enable device map (#30870)

* added `_no_split_modules`
* added `LlavaNextVisionAttention` to `_no_split_modules`
parent 57c965a8

Changes: 1 changed file, with 1 addition and 0 deletions.

src/transformers/models/video_llava/modeling_video_llava.py (+1, -0)
@@ -124,6 +124,7 @@ class VideoLlavaPreTrainedModel(PreTrainedModel):
     supports_gradient_checkpointing = True
     _skip_keys_device_placement = "past_key_values"
     _supports_flash_attn_2 = True
+    _no_split_modules = ["VideoLlavaVisionAttention"]

     def _init_weights(self, module):
         # important: this ported version of VideoLlava isn't meant for training from scratch - only
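For context on what this one-line change does: `_no_split_modules` tells the Accelerate device-map planner which module classes must never be sharded across devices, so listing `VideoLlavaVisionAttention` lets the model be loaded with `device_map="auto"` without splitting an attention block mid-computation. The following is a simplified, self-contained sketch of that idea, not the actual Accelerate implementation; the planner function, module representation, and sizes are all illustrative.

```python
# Illustrative sketch (NOT the accelerate implementation) of how a
# no-split set affects device-map planning: a module whose class name is
# in `no_split` is placed on a single device as a whole unit, instead of
# having its children scattered across devices.

def total_size(mod):
    """Parameter size of a module including all of its children."""
    return mod["size"] + sum(total_size(c) for c in mod["children"])

def plan(mod, no_split, capacity, state, device_map):
    """Greedily walk the module tree, filling one device before moving on."""
    indivisible = mod["cls"] in no_split or not mod["children"]
    if indivisible:
        need = total_size(mod)
        # advance to the next device if the whole block no longer fits
        if state["used"] + need > capacity and state["used"] > 0:
            state["dev"] += 1
            state["used"] = 0
        device_map[mod["name"]] = state["dev"]
        state["used"] += need
    else:
        # divisible container: recurse so children may land on
        # different devices
        for child in mod["children"]:
            plan(child, no_split, capacity, state, device_map)
    return device_map

def make(name, cls, size=0, children=()):
    return {"name": name, "cls": cls, "size": size, "children": list(children)}

# Toy model: one vision attention block plus two MLPs (sizes in arbitrary
# units); capacity is the per-device budget.
model = make("model", "VideoLlavaModel", 0, [
    make("attn", "VideoLlavaVisionAttention", 0, [
        make("attn.q", "Linear", 1), make("attn.k", "Linear", 1),
        make("attn.v", "Linear", 1), make("attn.o", "Linear", 1),
    ]),
    make("mlp1", "Linear", 3),
    make("mlp2", "Linear", 3),
])

dmap = plan(model, {"VideoLlavaVisionAttention"}, capacity=4,
            state={"dev": 0, "used": 0}, device_map={})
print(dmap)  # {'attn': 0, 'mlp1': 1, 'mlp2': 2}
```

Because `VideoLlavaVisionAttention` is in the no-split set, the planner never recurses into `attn`, so its q/k/v/o projections all land on device 0 together; without the entry, they could be spread across devices, which is exactly the failure mode this commit prevents for `device_map="auto"`.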