Commit 7ddf8e83 (unverified)
Authored Jun 16, 2025 by Lianmin Zheng; committed by GitHub on Jun 16, 2025
Parent: 8321f8e4

[EAGLE] Fix draft kv cache layout for fa3 and topk > 1 (#7239)
Showing 1 changed file with 5 additions and 5 deletions.

python/sglang/srt/layers/attention/flashattention_backend.py  (+5, -5)
@@ -406,9 +406,10 @@ class FlashAttentionBackend(AttentionBackend):
                     dtype=torch.int32,
                     device=device,
                 )
+                # shape: [bs, num_steps, topk] -> [bs x topk, num_steps]
                 cache_loc = forward_batch.out_cache_loc.view(
-                    self.speculative_num_steps, -1
-                ).T.contiguous()
+                    -1, self.speculative_num_steps
+                )
                 metadata_expand.page_table = (
                     cache_loc[:, :decode_length].contiguous().to(torch.int32)
                 )
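
This hunk (and the second one below) replaces out_cache_loc.view(self.speculative_num_steps, -1).T.contiguous() with out_cache_loc.view(-1, self.speculative_num_steps). The toy snippet below is not part of the commit (the sizes and slot ids are invented); it only illustrates that the two expressions group a flat cache-slot buffer into rows differently, which is the draft KV cache layout difference the commit title refers to.

# Illustrative sketch only, not code from the commit: toy sizes and made-up slot ids.
import torch

speculative_num_steps = 2
bs, topk = 2, 2

# Flat buffer of cache-slot indices standing in for out_cache_loc.
out_cache_loc = torch.arange(bs * speculative_num_steps * topk)

# Removed expression: reads the buffer as [num_steps, bs * topk], then transposes.
old_rows = out_cache_loc.view(speculative_num_steps, -1).T.contiguous()

# Added expression: reads the buffer as [bs * topk, num_steps] directly.
new_rows = out_cache_loc.view(-1, speculative_num_steps)

print(old_rows)  # tensor([[0, 4], [1, 5], [2, 6], [3, 7]])
print(new_rows)  # tensor([[0, 1], [2, 3], [4, 5], [6, 7]])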
@@ -1636,9 +1637,8 @@ class FlashAttentionBackend(AttentionBackend):
                 # 2. The second half of metadata for draft tokens (per_batch_num_tokens = topk)
                 metadata_expand = self.draft_decode_metadata_topk_expand[bs]
                 decode_length = self.speculative_step_id + 1
-                cache_loc = out_cache_loc.view(
-                    self.speculative_num_steps, -1
-                ).T.contiguous()
+                # shape: [bs, num_steps, topk] -> [bs x topk, num_steps]
+                cache_loc = out_cache_loc.view(-1, self.speculative_num_steps)
                 metadata_expand.page_table[: cache_loc.shape[0]].copy_(
                     cache_loc[:, :decode_length]
                 )
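
Continuing the same toy setup (again not commit code: the expanded page-table buffer, its width, and all sizes here are assumptions made only so the shapes line up), this sketch shows what the slice and copy in this hunk do under the new row layout: each (batch, topk) row keeps only the cache slots of the first decode_length = speculative_step_id + 1 draft steps.

# Hedged sketch, not commit code: buffer shapes below are invented for illustration.
import torch

speculative_num_steps = 4
speculative_step_id = 1
decode_length = speculative_step_id + 1        # first two draft steps are valid

rows = 3                                       # stand-in for bs * topk
out_cache_loc = torch.arange(rows * speculative_num_steps, dtype=torch.int32)

# New layout: one row of cache slots per (batch, topk) entry, one column per step.
cache_loc = out_cache_loc.view(-1, speculative_num_steps)

# Toy stand-in for metadata_expand.page_table, preallocated larger than needed
# (width chosen as decode_length here purely so the copy_ shapes line up).
page_table = torch.zeros(8, decode_length, dtype=torch.int32)
page_table[: cache_loc.shape[0]].copy_(cache_loc[:, :decode_length])

print(page_table[:4])
# tensor([[0, 1],
#         [4, 5],
#         [8, 9],
#         [0, 0]], dtype=torch.int32)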