gaoqiong / flash-attention, commit 46879364 (unverified)

Fix Windows build (#816)

Authored Feb 08, 2024 by Grigory Sizov; committed via GitHub on Feb 07, 2024.
Parent: 61a77724

Showing 1 changed file with 2 additions and 2 deletions:
csrc/flash_attn/flash_api.cpp (+2, -2)
csrc/flash_attn/flash_api.cpp @ 46879364

...
@@ -696,8 +696,8 @@ mha_varlen_fwd(at::Tensor &q, // total_q x num_heads x head_size, total_q := \s
     }
     if (seqlenq_ngroups_swapped) {
-        long size_before[] = {batch_size, max_seqlen_q, num_heads_k, head_size_og};
-        long size_after[] = {batch_size, num_heads_k * max_seqlen_q, head_size_og};
+        int64_t size_before[] = {batch_size, max_seqlen_q, num_heads_k, head_size_og};
+        int64_t size_after[] = {batch_size, num_heads_k * max_seqlen_q, head_size_og};
         out = out.reshape(size_before).transpose(1, 2).reshape(size_after);
         out_padded = out_padded.reshape(size_before).transpose(1, 2).reshape(size_after);
         q_padded = q_padded.reshape(size_before).transpose(1, 2).reshape(size_after);
...
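
Background note (not part of the commit): the two-line change fixes the Windows build because MSVC uses the LLP64 data model, where long is 32 bits, while int64_t is 64 bits. at::Tensor::reshape takes an at::IntArrayRef, i.e. c10::ArrayRef<int64_t>, which converts implicitly from a C array of int64_t but not from a long array when the element types differ. So the old long[] declarations compiled on LP64 Linux (where int64_t is long) but failed under MSVC. Below is a minimal standalone sketch of the same pattern; the shape values are made up for illustration and are not the ones computed inside mha_varlen_fwd:

// Minimal sketch (illustrative values, not the commit's code) of why long
// breaks the Windows build while int64_t is portable.
#include <cstdint>
#include <iostream>
#include <torch/torch.h>

int main() {
    // Illustrative shape parameters; in flash_api.cpp these come from the
    // arguments of mha_varlen_fwd.
    const int64_t batch_size = 2, max_seqlen_q = 4, num_heads_k = 3, head_size_og = 8;
    torch::Tensor out = torch::zeros({batch_size * max_seqlen_q, num_heads_k, head_size_og});

    // Portable: int64_t always matches the element type of at::IntArrayRef,
    // so the C arrays convert implicitly on every platform.
    int64_t size_before[] = {batch_size, max_seqlen_q, num_heads_k, head_size_og};
    int64_t size_after[] = {batch_size, num_heads_k * max_seqlen_q, head_size_og};

    // Pre-fix version: compiles on LP64 Linux where long == int64_t, but on
    // MSVC (LLP64, 32-bit long) there is no conversion to ArrayRef<int64_t>:
    // long size_before[] = {batch_size, max_seqlen_q, num_heads_k, head_size_og};

    out = out.reshape(size_before).transpose(1, 2).reshape(size_after);
    std::cout << out.sizes() << std::endl;  // prints [2, 12, 8]
    return 0;
}

The same reasoning applies anywhere a shape array is passed to an ATen call: spelling the element type as int64_t keeps the code identical across LP64 and LLP64 platforms.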