Unverified Commit 0f587e80 authored by Wenxuan Tan, committed by GitHub

Use Tensor Core Decode when gqa group size >= 4 (#8624)


Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
parent 6078d5fc
@@ -1263,11 +1263,12 @@ def should_use_tensor_core(
     # Calculate GQA group size
     gqa_group_size = num_attention_heads // num_kv_heads
-    # Determine based on dtype and GQA group size
+    # For Flashinfer, a GQA group size of at least 4 is needed to efficiently
+    # use Tensor Cores, as it fuses the head group with the token dimension in MMA.
     if kv_cache_dtype in (torch.float8_e4m3fn, torch.float8_e5m2):
         return True
     elif kv_cache_dtype in (torch.float16, torch.half, torch.bfloat16):
-        return gqa_group_size > 4
+        return gqa_group_size >= 4
     else:
         return False
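The changed predicate can be sketched as a standalone function. This is a minimal, self-contained illustration of the selection logic after this commit, not the full implementation: dtype names are plain strings here in place of the `torch.dtype` objects used in the real code, and the actual `should_use_tensor_core` in the repository takes additional parameters not shown in this hunk.

```python
def should_use_tensor_core(
    kv_cache_dtype: str,
    num_attention_heads: int,
    num_kv_heads: int,
) -> bool:
    """Sketch of the decode-kernel selection predicate after this change."""
    # GQA group size: how many query heads share each KV head
    gqa_group_size = num_attention_heads // num_kv_heads
    if kv_cache_dtype in ("float8_e4m3fn", "float8_e5m2"):
        # FP8 KV caches always take the Tensor Core decode path
        return True
    elif kv_cache_dtype in ("float16", "bfloat16"):
        # Changed by this commit from `> 4` to `>= 4`, so a group size
        # of exactly 4 now also uses Tensor Core decode
        return gqa_group_size >= 4
    else:
        return False


# Example: a 32-query-head / 8-KV-head model has group size 4,
# which now qualifies for Tensor Core decode under bf16
print(should_use_tensor_core("bfloat16", 32, 8))   # True (was False before)
print(should_use_tensor_core("bfloat16", 32, 16))  # group size 2 -> False
```

With the previous `> 4` threshold, group size 4 (e.g. 32 query heads over 8 KV heads) fell back to the non-Tensor-Core path; the `>=` change moves that common configuration onto the faster kernel.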