renzhc / diffusers_dcu · Commits

Commit 4a782f46 (unverified)
Authored by Sanchit Gandhi on Jul 25, 2024; committed by GitHub on Jul 25, 2024
[AudioLDM2] Fix cache pos for GPT-2 generation (#8964)
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Parent: cdd12bde
1 changed file with 1 addition and 0 deletions:

src/diffusers/pipelines/audioldm2/pipeline_audioldm2.py (+1, -0)
@@ -286,6 +286,7 @@ class AudioLDM2Pipeline(DiffusionPipeline):
                 The sequence of generated hidden-states.
             """
             max_new_tokens = max_new_tokens if max_new_tokens is not None else self.language_model.config.max_new_tokens
+            model_kwargs = self.language_model._get_initial_cache_position(inputs_embeds, model_kwargs)
             for _ in range(max_new_tokens):
                 # prepare model inputs
                 model_inputs = prepare_inputs_for_generation(inputs_embeds, **model_kwargs)
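The added line seeds the cache position before the manual generation loop: newer transformers versions track which sequence positions the KV cache already covers, and without initialization every decoding step would index the cache from position zero. The sketch below is a toy illustration of that bookkeeping with hypothetical helper names (`get_initial_cache_position`, `prepare_inputs`); it is not the transformers or diffusers API, only the pattern the fix follows.

```python
# Toy sketch of cache-position bookkeeping in a manual generation loop.
# Helper names are illustrative, not the transformers/diffusers API.

def get_initial_cache_position(prompt_len, model_kwargs):
    """Seed cache_position with indices covering the prompt tokens."""
    model_kwargs["cache_position"] = list(range(prompt_len))
    return model_kwargs

def prepare_inputs(step_tokens, cache_position, past_len):
    # After the first step only the newest token is fed; its cache_position
    # must point one past the cached length so attention indexes correctly.
    if past_len > 0:
        step_tokens = step_tokens[-1:]
        cache_position = [past_len]
    return step_tokens, cache_position

prompt = [101, 7, 42]                  # toy token ids standing in for inputs_embeds
model_kwargs = get_initial_cache_position(len(prompt), {})
past_len = 0
tokens = list(prompt)
positions_seen = []
for _ in range(3):                     # max_new_tokens = 3
    step_tokens, cache_position = prepare_inputs(
        tokens, model_kwargs["cache_position"], past_len
    )
    positions_seen.append(cache_position)
    past_len += len(step_tokens)       # cache grows by the tokens just processed
    tokens.append(0)                   # pretend the model emitted a token
    model_kwargs["cache_position"] = cache_position
```

With the initial positions seeded, the loop sees `[0, 1, 2]` for the prompt pass and then `[3]`, `[4]` for the incremental steps, which is the alignment the one-line fix restores for AudioLDM2's GPT-2 loop.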