Unverified Commit e37e33e1 authored by Kirthi Shankar Sivamani's avatar Kirthi Shankar Sivamani Committed by GitHub

Disallow pure E5M2 recipe for `Float8BlockScaling` (#2251)



Catch unsupported GEMM during recipe init
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
parent af2a0c16
@@ -363,6 +363,7 @@ class Float8BlockScaling(Recipe):
         assert (
             not self.fp8_dpa and not self.fp8_mha
         ), "FP8 attention is not supported for Float8BlockScaling."
+        assert self.fp8_format != Format.E5M2, "Pure E5M2 training is not supported."
 
     def __repr__(self) -> str:
         return (
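The patch above adds a fail-fast check at recipe construction time rather than letting an unsupported GEMM surface later. A minimal, self-contained sketch of that pattern (the `Format` enum values and dataclass fields here mirror the diff, but this is an illustrative stand-in, not the actual TransformerEngine classes):

```python
# Sketch (assumed names) of recipe-init validation: Float8BlockScaling
# rejects a pure-E5M2 format at construction, instead of failing later
# inside an unsupported GEMM.
from dataclasses import dataclass
from enum import Enum


class Format(Enum):
    E4M3 = "e4m3"      # narrower range, more mantissa bits
    E5M2 = "e5m2"      # wider range, fewer mantissa bits
    HYBRID = "hybrid"  # E4M3 forward, E5M2 backward


@dataclass
class Float8BlockScaling:
    fp8_format: Format = Format.HYBRID
    fp8_dpa: bool = False
    fp8_mha: bool = False

    def __post_init__(self) -> None:
        # Checks run as soon as the recipe object is created.
        assert (
            not self.fp8_dpa and not self.fp8_mha
        ), "FP8 attention is not supported for Float8BlockScaling."
        assert self.fp8_format != Format.E5M2, "Pure E5M2 training is not supported."
```

With this in place, `Float8BlockScaling(fp8_format=Format.E5M2)` raises an `AssertionError` immediately, while the default hybrid recipe constructs normally.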