chenpangpang / transformers

Unverified commit 1c68f2ca, authored Jun 27, 2024 by Sanchit Gandhi, committed by GitHub on Jun 27, 2024

[HybridCache] Fix `get_seq_length` method (#31661)

* fix gemma2
* handle in generate

parent 464aa746
Showing 2 changed files with 2 additions and 2 deletions:

src/transformers/cache_utils.py        +1 -1
src/transformers/generation/utils.py   +1 -1
src/transformers/cache_utils.py

@@ -1083,7 +1083,7 @@ class HybridCache(Cache):
         # no matter how long the sentence is
         return self.max_cache_len
 
-    def get_seq_length(self, layer_idx: Optional[int] = 0) -> int:
+    def get_seq_length(self, layer_idx: Optional[int] = 0):
         return None
 
     def reset(self):
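The fix drops the `-> int` annotation because `HybridCache.get_seq_length` returns `None`: this cache (used for models like Gemma 2 that mix sliding-window and global-attention layers) is a fixed-size buffer whose window layers overwrite old tokens, so the number of tokens seen so far cannot be recovered from its contents. A minimal sketch of that contract, using a hypothetical stand-in class rather than the library's own code:

from typing import Optional


class SlidingWindowCacheSketch:
    """Hypothetical stand-in for HybridCache; illustration only."""

    def __init__(self, max_cache_len: int):
        self.max_cache_len = max_cache_len

    def get_max_length(self) -> int:
        # The buffer has a fixed capacity, no matter how long the sentence is.
        return self.max_cache_len

    def get_seq_length(self, layer_idx: Optional[int] = 0):
        # Sliding-window layers overwrite old tokens in place, so there is no
        # single well-defined "tokens seen so far" count; return None rather
        # than a misleading number (mirroring the fix above).
        return None


cache = SlidingWindowCacheSketch(max_cache_len=1024)
length = cache.get_seq_length()
# Callers must not assume an int comes back:
past_length = length if length is not None else 0
print(past_length)  # 0 -- position tracking falls to the caller

Returning `None` pushes position tracking to the caller, which is exactly what the generation-side change below accounts for.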
src/transformers/generation/utils.py

@@ -1399,7 +1399,7 @@ class GenerationMixin:
             cache = model_kwargs["past_key_values"]
             if not isinstance(cache, Cache):
                 past_length = cache[0][0].shape[2]
-            elif hasattr(cache, "get_seq_length"):
+            elif hasattr(cache, "get_seq_length") and cache.get_seq_length() is not None:
                 past_length = cache.get_seq_length()
 
         if "inputs_embeds" in model_kwargs:
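Without the added `is not None` guard, a `HybridCache` would take the `elif` branch and set `past_length = None`, and the position arithmetic that follows in generation would fail. A self-contained sketch of the fixed resolution logic, with `CacheStub` standing in for `transformers.Cache` (the names here are illustrative, not the library's):

import torch


class CacheStub:
    """Stand-in for transformers.Cache with HybridCache-style behavior."""

    def get_seq_length(self, layer_idx=0):
        return None  # no well-defined length (see the cache_utils.py hunk)


def resolve_past_length(cache) -> int:
    # Mirrors the fixed branch above; `cache` is either a legacy tuple of
    # (key, value) tensor pairs or a Cache-like object.
    past_length = 0
    if not isinstance(cache, CacheStub):
        # Legacy format: key tensors are (batch, heads, seq_len, head_dim),
        # so the past length is dimension 2 of the first layer's key.
        past_length = cache[0][0].shape[2]
    elif hasattr(cache, "get_seq_length") and cache.get_seq_length() is not None:
        past_length = cache.get_seq_length()
    return past_length


legacy = ((torch.zeros(1, 8, 5, 64), torch.zeros(1, 8, 5, 64)),)
assert resolve_past_length(legacy) == 5        # read from the key tensor
assert resolve_past_length(CacheStub()) == 0   # None -> keep the 0 default

When neither branch applies, `past_length` keeps its 0 default and the caller derives positions by other means instead of trusting the cache.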