renzhc / diffusers_dcu · Commits

Commit ab428207 (unverified)
Authored Feb 14, 2025 by Aryan; committed by GitHub on Feb 13, 2025
Refactor CogVideoX transformer forward (#10789)
update
Parent: 8d081de8
1 changed file with 1 addition and 8 deletions:

src/diffusers/models/transformers/cogvideox_transformer_3d.py (+1, -8)
src/diffusers/models/transformers/cogvideox_transformer_3d.py @ ab428207

@@ -503,14 +503,7 @@ class CogVideoXTransformer3DModel(ModelMixin, ConfigMixin, PeftAdapterMixin, Cac
                     attention_kwargs=attention_kwargs,
                 )
 
-        if not self.config.use_rotary_positional_embeddings:
-            # CogVideoX-2B
-            hidden_states = self.norm_final(hidden_states)
-        else:
-            # CogVideoX-5B
-            hidden_states = torch.cat([encoder_hidden_states, hidden_states], dim=1)
-            hidden_states = self.norm_final(hidden_states)
-            hidden_states = hidden_states[:, text_seq_length:]
+        hidden_states = self.norm_final(hidden_states)
 
         # 4. Final block
         hidden_states = self.norm_out(hidden_states, temb=emb)
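The two paths are equivalent as long as norm_final is the per-token nn.LayerNorm defined in this file: concatenating the text tokens along the sequence dimension before normalizing and slicing them off afterwards gives the same result as normalizing the video tokens alone. A minimal sketch of that equivalence, not taken from the commit, using illustrative shapes rather than the model's real ones:

# Sketch: compare the removed CogVideoX-5B branch against the new single
# norm_final call, assuming norm_final is a per-token nn.LayerNorm.
import torch
import torch.nn as nn

torch.manual_seed(0)
batch, text_seq_length, video_seq_length, inner_dim = 2, 16, 128, 64  # illustrative shapes

norm_final = nn.LayerNorm(inner_dim)
encoder_hidden_states = torch.randn(batch, text_seq_length, inner_dim)
hidden_states = torch.randn(batch, video_seq_length, inner_dim)

# Old 5B path: concat text + video tokens, normalize, then drop the text tokens.
old = norm_final(torch.cat([encoder_hidden_states, hidden_states], dim=1))[:, text_seq_length:]

# New unified path: normalize the video tokens directly.
new = norm_final(hidden_states)

print(torch.allclose(old, new))  # True: LayerNorm normalizes each token independently

Because LayerNorm statistics are computed over the channel dimension of each token independently, the sequence-dimension concatenation never affected the video-token outputs, which is presumably why the 2B/5B branch could be collapsed into a single call.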