[PyTorch] Bump minimum cuDNN version for fused attention with FP8 current scaling (#2236)
* Require cuDNN 9.14.0+ for fused attention with FP8 current scaling

Signed-off-by: Tim Moon <tmoon@nvidia.com>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Tim Moon <tmoon@nvidia.com>
Co-authored-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
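A minimal sketch of the kind of version gate this change implies. The helper names here are hypothetical (not Transformer Engine's actual API), and it assumes the cuDNN 9.x integer version encoding `major*10000 + minor*100 + patch` (e.g. 9.14.0 → 91400):

```python
# Hypothetical sketch: gate FP8 current-scaling fused attention on the
# cuDNN runtime version, requiring 9.14.0+ as in this PR.

MIN_CUDNN_FP8_CURRENT_SCALING = (9, 14, 0)

def decode_cudnn_version(raw: int) -> tuple:
    """Decode a cuDNN 9.x integer version (major*10000 + minor*100 + patch)."""
    return (raw // 10000, raw % 10000 // 100, raw % 100)

def fp8_current_scaling_supported(raw_version: int) -> bool:
    """Return True if fused attention with FP8 current scaling is allowed."""
    return decode_cudnn_version(raw_version) >= MIN_CUDNN_FP8_CURRENT_SCALING

# 9.13.1 is too old; 9.14.0 meets the minimum.
assert not fp8_current_scaling_supported(91301)
assert fp8_current_scaling_supported(91400)
```

In practice such a check would fall back to an unfused attention path (rather than erroring) when the installed cuDNN is older than the minimum.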