chenpangpang / transformers

Unverified commit d23d2c27, authored Jul 28, 2023 by jiqing-feng, committed by GitHub on Jul 28, 2023.
Represent query_length in a different way to solve jit issue (#25164)
Fix jit trace
Parent commit: 2a787201
Showing 1 changed file, with 1 addition and 3 deletions.

src/transformers/models/mpt/modeling_mpt.py (+1, -3):
@@ -154,9 +154,7 @@ class MptAttention(nn.Module):

         attention_scores = torch.matmul(query_states, key_states.transpose(-1, -2)) * self.softmax_scale

-        query_length = seq_length
-        if past_key_value is not None:
-            query_length += past_key_value[0].shape[2]
+        query_length = seq_length if past_key_value is None else seq_length + past_key_value[0].shape[2]

         if position_bias is not None:
             if len(position_bias.shape) != 3:
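The patch replaces an in-place update of query_length under an if-statement with a single conditional expression; per the commit message, this one-line form is what fixes the jit trace. Below is a minimal standalone sketch (not from the commit; the helper names are hypothetical, and the tensors only mimic the MPT cache layout of (batch, num_heads, kv_length, head_dim)) showing that the two patterns compute the same value in eager mode:

# Minimal sketch (not from the commit): the two ways of computing query_length
# that this diff swaps. Names are hypothetical; past_key_value stands in for the
# MPT cache, a (key, value) pair shaped (batch, num_heads, kv_length, head_dim).
import torch

def query_length_before(seq_length, past_key_value):
    # Pattern removed by the commit: in-place update under an if-statement.
    query_length = seq_length
    if past_key_value is not None:
        query_length += past_key_value[0].shape[2]
    return query_length

def query_length_after(seq_length, past_key_value):
    # Pattern added by the commit: one conditional expression, single assignment.
    return seq_length if past_key_value is None else seq_length + past_key_value[0].shape[2]

# Both agree with and without a cache: 3 new tokens on top of 5 cached ones.
cache = (torch.zeros(1, 8, 5, 64), torch.zeros(1, 8, 5, 64))
assert query_length_before(3, cache) == query_length_after(3, cache) == 8
assert query_length_before(3, None) == query_length_after(3, None) == 3

The two forms are equivalent in eager execution; the rewrite only changes how the value is expressed, binding query_length once instead of mutating it after the fact, which is what the commit title credits with solving the jit issue.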