sglang · Commits · 6fc17596

Unverified commit 6fc17596, authored May 01, 2025 by Stefan He, committed by GitHub on May 01, 2025

Optimize a pad operation to accelerate 25us (#5945)

Parent: ad506a4e
Showing 1 changed file with 3 additions and 2 deletions:

python/sglang/srt/layers/attention/flashattention_backend.py (+3, −2)
@@ -1587,8 +1587,9 @@ class FlashAttentionBackend(AttentionBackend):
             metadata.max_seq_len_k = max_len
             metadata.cache_seqlens_int32 = seq_lens.to(torch.int32)
-            metadata.cu_seqlens_k = torch.nn.functional.pad(
-                torch.cumsum(seq_lens, dim=0, dtype=torch.int32), (1, 0)
+            # Optimize cumulative sequence length calculation
+            metadata.cu_seqlens_k[1:].copy_(
+                torch.cumsum(seq_lens, dim=0, dtype=torch.int32)
             )
             max_seq_pages = (
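The saving comes from dropping the torch.nn.functional.pad call, which allocated a fresh (batch_size + 1)-element tensor and launched an extra kernel on every call just to prepend a zero; the new code instead writes the cumulative sums in place into the tail of the existing cu_seqlens_k buffer, whose first element is already zero. Below is a minimal standalone sketch of why the two forms are equivalent. It assumes cu_seqlens_k is a preallocated int32 buffer of length batch_size + 1 initialized to zeros (plausible for this code path, where metadata buffers appear to persist across steps); the setup here is illustrative, not sglang's actual initialization.

    import torch

    seq_lens = torch.tensor([5, 3, 7], dtype=torch.int32)
    batch_size = seq_lens.numel()

    # Before: pad allocates a new (batch_size + 1)-element tensor each call
    # just to prepend a leading zero to the cumulative sums.
    cu_old = torch.nn.functional.pad(
        torch.cumsum(seq_lens, dim=0, dtype=torch.int32), (1, 0)
    )

    # After: reuse a preallocated buffer (a hypothetical stand-in for
    # metadata.cu_seqlens_k); element 0 is already zero, so copying the
    # cumsum into positions 1..batch_size yields the same result with no
    # new output allocation.
    cu_seqlens_k = torch.zeros(batch_size + 1, dtype=torch.int32)
    cu_seqlens_k[1:].copy_(torch.cumsum(seq_lens, dim=0, dtype=torch.int32))

    assert torch.equal(cu_old, cu_seqlens_k)  # both are [0, 5, 8, 15]

Writing through copy_ also keeps the tensor's storage address stable, which matters if the buffer was captured in a CUDA graph; the removed allocation and pad kernel are presumably where the ~25us in the commit title comes from.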