OpenDAS / AutoAWQ · Commits · 1b54b9f9

Unverified commit 1b54b9f9, authored Oct 07, 2023 by Casper, committed by GitHub on Oct 07, 2023

Merge pull request #96 from casper-hansen/fix_attention_mask

Only apply attention mask if seqlen is greater than 1

Parents: 0baf5e18, e94b7f40
Changes: 1 changed file with 2 additions and 1 deletion

awq/modules/fused/attn.py (+2, -1)
awq/modules/fused/attn.py — view file @ 1b54b9f9

@@ -176,7 +176,8 @@ class QuantAttentionFused(nn.Module):
     ...
         if self.use_alibi:
             scores = self.alibi.forward(scores, seqlen)

-        if attention_mask is not None:
+        # When seqlen is 1, there is nothing else to attend to
+        if attention_mask is not None and seqlen > 1:
             scores = scores + attention_mask  # (bs, n_local_heads, slen, cache_len + slen)
         scores = F.softmax(scores.float(), dim=-1).type_as(xq)
     ...
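The rationale behind the change: during cached autoregressive decoding, each step feeds a single new token (seqlen == 1), and that one query is allowed to attend to every position already in the KV cache, so a causal mask contributes nothing. The sketch below is not part of the repository; it builds its own hypothetical masks with torch.triu (the score shape follows the `(bs, n_local_heads, slen, cache_len + slen)` comment in the diff) to illustrate why the mask is a no-op at seqlen == 1 but still required during a multi-token prefill.

```python
import torch

bs, n_heads, cache_len, seqlen = 1, 8, 31, 1

# Attention scores for one decoding step: the single query sees the whole
# cache plus itself, shape (bs, n_heads, seqlen, cache_len + seqlen).
scores = torch.randn(bs, n_heads, seqlen, cache_len + seqlen)

# A causal mask for that step allows attention to every earlier position,
# so it is all zeros and adding it to the scores is a no-op.
causal_mask = torch.triu(
    torch.full((seqlen, cache_len + seqlen), float("-inf")),
    diagonal=cache_len + 1,
)
assert (causal_mask == 0).all()                 # nothing is masked at seqlen == 1
assert torch.equal(scores + causal_mask, scores)

# During a multi-token prefill (seqlen > 1) the mask genuinely blocks
# future positions, so it must still be applied there.
prefill_len = 4
prefill_mask = torch.triu(
    torch.full((prefill_len, prefill_len), float("-inf")), diagonal=1
)
assert torch.isinf(prefill_mask).any()          # future tokens are masked out
```

Skipping the zero-valued addition avoids a pointless broadcast at every decoding step, and it also sidesteps shape mismatches when the cached mask was built for a longer prefill sequence.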