- 13 Mar, 2025 1 commit
-
Tim Moon authored
* Explicitly use python3 and pip3
* Run pre-commit as Python module
* Replace some missed references to "python" or "pip"
Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: Tim Moon <4406448+timmoon10@users.noreply.github.com>
-
- 12 Mar, 2025 1 commit
-
Charlene Yang authored
* fix dtypes in fused attn bwd for FP8
* add comments for dtypes
* remove redundant qkv_dtype in fwd
* remove Nones in bwd returns
Signed-off-by: Charlene Yang <8636796+cyanguwa@users.noreply.github.com>
-
- 07 Mar, 2025 1 commit
-
Xiaowei Ren authored
* fix recompilation of out and lse correction in p2p+bshd/sbhd
* fix recompilation of get_seq_chunk_ids_for_reordering
* fix recompilation of reorder_seq_chunks_for_a2a
* recover a change
* typo fixes
* minor change to softmax_lse correction
* cache cu_seqlens for BSHD/SBHD format
* do not need to allocate out buffer for BSHD/SBHD
* code refactoring
* refactor init out correction
* fix a docstring
* fix init out correction dtype
* add pad_between_seqs to DPA API
* add pad_between_seqs to the API of MHA and transformer layer
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Xiaowei Ren <xren@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
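The "out and lse correction" these commits touch is the standard log-sum-exp merge of per-step attention partials in context parallelism. A minimal pure-Python sketch, with illustrative names rather than TE's actual helpers: each partial output is rescaled by `exp(lse_i - lse_total)`.

```python
import math

def combine_partial_attn(out1, lse1, out2, lse2):
    """Merge two partial attention outputs using their log-sum-exp stats.

    Illustrative sketch of the correction used in context parallelism:
    lse_total = logaddexp(lse1, lse2), and each partial output is
    rescaled by exp(lse_i - lse_total) before being summed.
    """
    # Numerically stable logaddexp(lse1, lse2).
    lse = max(lse1, lse2) + math.log1p(math.exp(-abs(lse1 - lse2)))
    w1, w2 = math.exp(lse1 - lse), math.exp(lse2 - lse)
    return [w1 * a + w2 * b for a, b in zip(out1, out2)]
```

Merging partials computed over disjoint score chunks this way reproduces the softmax-weighted average over the full sequence, which is why only the lse statistics need to travel between CP ranks.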
-
- 06 Mar, 2025 1 commit
-
Xiaowei Ren authored
Signed-off-by: Xiaowei Ren <xren@nvidia.com>
-
- 05 Mar, 2025 1 commit
-
Sérgio Agostinho authored
Signed-off-by: Sérgio Agostinho <sagostinho@nvidia.com>
-
- 28 Feb, 2025 1 commit
-
Kirthi Shankar Sivamani authored
* Enforce torch 2.0 and run attn tests with torch.compile
* replace torch.compile with jit_fuser
* Fixes
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
-
- 25 Feb, 2025 1 commit
-
Charlene Yang authored
* minor fixes for attention
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Charlene Yang <charleney@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
- 20 Feb, 2025 1 commit
-
Xiaowei Ren authored
* commit some debug code
* add more debug info
* debug code commit and typo fix
* a typo fix
* remove debug info
* do not return lse
* add amax_per_step for quantizers of CP
* fix FP8 + CP
* bug fixes and dtype fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Xiaowei Ren <xren@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Xiaowei Ren <xren@login-preos01.a51.clusters.nvidia.com>
-
- 12 Feb, 2025 1 commit
-
Jaemin Choi authored
Signed-off-by: Jaemin Choi <jaeminc@nvidia.com>
Signed-off-by: Tim Moon <tmoon@nvidia.com>
Co-authored-by: Jaemin Choi <jaeminc@nvidia.com>
Co-authored-by: Tim Moon <tmoon@nvidia.com>
-
- 07 Feb, 2025 1 commit
-
Przemek Tredak authored
Signed-off-by: Przemek Tredak <ptredak@nvidia.com>
-
- 28 Jan, 2025 1 commit
-
Sergii Dymchenko authored
Use torch.log1p(), which is more accurate than torch.log() for small input values: https://pytorch.org/docs/stable/generated/torch.log1p.html
Found with TorchFix: https://github.com/pytorch-labs/torchfix/
Signed-off-by: Sergii Dymchenko <sdym@meta.com>
Co-authored-by: Xiaowei Ren <103958965+xrennvidia@users.noreply.github.com>
Co-authored-by: Tim Moon <4406448+timmoon10@users.noreply.github.com>
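The accuracy gap this commit cites is easy to demonstrate with the stdlib analogue `math.log1p`, which has the same numerical behavior as `torch.log1p`:

```python
import math

x = 1e-16
# 1 + 1e-16 rounds to exactly 1.0 in double precision, so the naive
# form loses the entire result, while log1p computes log(1 + x)
# without forming 1 + x first and keeps full accuracy.
naive = math.log(1 + x)   # 0.0
stable = math.log1p(x)    # ~1e-16, the correct answer
```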
-
- 21 Jan, 2025 1 commit
-
Charlene Yang authored
only compare the recipe in AttentionParams.fp8_meta
Signed-off-by: Charlene Yang <8636796+cyanguwa@users.noreply.github.com>
-
- 10 Jan, 2025 1 commit
-
Xiaowei Ren authored
Take token count quantization of fused attention into consideration for CP results correction (#1396)
* fix second half lse shape
* bug fixes
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Xiaowei Ren <xren@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
- 08 Jan, 2025 1 commit
-
Xiaowei Ren authored
* make the pad_between_seqs check ignore padding at the end
* change CP THD test to cover 0-length sequences
* minor change to flash func name
* only use the varlen func of flash attention when qkv_format is THD
* try to converge code of flash and fused attention
* fix bwd compute with P2P
* remove redundant out_per_step view
* enable cuDNN > 9.6 and THD+GQA
* enable CP with FusedAttn+SWA+All_Gather
* code cleaning for cu_seqlens
* fix some pylint errors
* minor import change for pylint
* more fixes for pylint
* fix lse_seqlen in thd out correction
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Xiaowei Ren <xren@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
- 02 Jan, 2025 1 commit
-
Kirthi Shankar Sivamani authored
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
-
- 20 Dec, 2024 1 commit
-
Charlene Yang authored
* add swa (left,0) + padding + brcm support
* final fixes
* upgrade to FE 1.9-rc
* fix jax tests
* skip thd + CP + fused attn tests for cuDNN 9.6+ due to different stats shapes
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Charlene Yang <8636796+cyanguwa@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
- 18 Dec, 2024 1 commit
-
Charlene Yang authored
* WIP: fix get_swa_mask for padding
* fix mask type setting
* fix the order of checking valid swa and changing mask type
* fix lint
* revamp to get full mask
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Charlene Yang <8636796+cyanguwa@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
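The mask that get_swa_mask produces is a windowed-causal attention mask. A hedged standalone sketch of the (left, 0) case, not TE's actual (on-device) implementation: query position q may attend key position k iff q - left <= k <= q.

```python
def swa_mask(seqlen, left):
    """Boolean mask for causal sliding-window attention with window
    (left, 0). True means "may attend". Illustrative sketch only;
    padding handling (the subject of the commit above) is omitted.
    """
    return [
        [(q - left <= k <= q) for k in range(seqlen)]
        for q in range(seqlen)
    ]
```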
-
- 05 Dec, 2024 1 commit
-
Xiaowei Ren authored
* always have padding mask type for both flash and fused attention
* remove a redundant assert
Signed-off-by: Xiaowei Ren <xren@nvidia.com>
-
- 20 Nov, 2024 1 commit
-
Charlene Yang authored
* fix GQA error message
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Charlene Yang <8636796+cyanguwa@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
- 14 Nov, 2024 1 commit
-
Kirthi Shankar Sivamani authored
* Limit to one call of ctx.saved_tensors per autograd bwd
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
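The ctx.saved_tensors change reflects a general torch.autograd.Function pattern: unpack the saved tensors once in backward and reuse the locals, since each access to the property re-does the unpacking (and re-triggers any registered saved-tensor hooks). A minimal illustration, not TE's code:

```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        # Unpack ctx.saved_tensors exactly once; repeated accesses
        # add avoidable per-call overhead in the backward pass.
        (x,) = ctx.saved_tensors
        return 2 * x * grad_out
```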
-
- 04 Nov, 2024 1 commit
-
Xin Yao authored
* Let fp8 mha work with rope when cp is on
* fix and update ut
Signed-off-by: Xin Yao <xiny@nvidia.com>
-
- 30 Oct, 2024 1 commit
-
Xiaowei Ren authored
* add missed arguments of apply_rotary_pos_emb in MHA
* remove an unnecessary f-string prefix
* add one more assert for cp_group len
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Xiaowei Ren <xren@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
- 29 Oct, 2024 1 commit
-
Charlene Yang authored
* check if GPU is available
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Charlene Yang <8636796+cyanguwa@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
- 25 Oct, 2024 1 commit
-
Charlene Yang authored
* WIP: add max_t support for THD
* WIP: save tensors for debug and point to new FE
* fix stats in bwd and fwd
* add docstring for DPA
* WIP: first try on adding max_b and max_t
* Revert "[pre-commit.ci] auto fixes from pre-commit.com hooks" (reverts commit c3d522e9f5aef3c8ddfec5bf6ff24c3db97bb059)
* Revert "WIP: first try on adding max_b and max_t" (reverts commit 3bc01ebaf2aa846fd16634e2d33b0d0f5803a076)
* update docstring and fix max_seqlen logic for thd
* revert two lines of change in docstring
* WIP: add get_max_b/t
* fix max_seqlen code and docstring
* success: add max_b/max_t
* remove debug code
* change max_b/max_t buckets
* fix b vs orig_b, with 0 fill
* update FE for T3HD/TH3D
* add max_b to conversion kernels
* fix lint
* fix changes after last merge
* add Jax support for max_t
* update FE to 1.8.0-rc, then 1.8.0
* code review/formatting fixes
* fix Stats shape for cuDNN < 9.6
* return nullptr for offset_stats when cudnn < 9.6
* add more version control
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Charlene Yang <8636796+cyanguwa@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
- 17 Oct, 2024 1 commit
-
Xiaowei Ren authored
fix seq_dim in CP implementation
Signed-off-by: Xiaowei Ren <xren@nvidia.com>
-
- 16 Oct, 2024 2 commits
-
Kirthi Shankar Sivamani authored
* Upgrade pylint and first round formatting
* round 2
* round 3
* Format and fixes
* Paddle lint
* Reviews
* More linting
* Run formatter
* Fixes
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
-
Charlene Yang authored
* WIP: make FA2 optional
* WIP: fix logic
* fix lint
* minor fixes
* minor tweak
* add L1 test to test all supported FA versions
* update version to 2.1.1 and trim L1 tests
* update onnxruntime version
* remove onnxruntime from L1 FA versions tests
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Charlene Yang <8636796+cyanguwa@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
- 12 Oct, 2024 1 commit
-
Xin Yao authored
* Let Fused RoPE support THD with CP
* add comment
Signed-off-by: Xin Yao <xiny@nvidia.com>
Co-authored-by: Xiaowei Ren <103958965+xrennvidia@users.noreply.github.com>
-
- 11 Oct, 2024 2 commits
-
Xiaowei Ren authored
* fa2 function import renaming
* refine fa_fwd_kwargs and fa_bwd_kwargs
* import FA3 functions for CP
* fix output of FA3 fwd
* fix rng_state in a2a implementation with FA3
* hack lse correction for packed lse format
* make CP thd out correction work with packed lse
* fix for packed softmax_lse
* fix softmax_lse shape
* change lse_packed to constexpr
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Xiaowei Ren <xren@nvidia.com>
Signed-off-by: Charlene Yang <8636796+cyanguwa@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Charlene Yang <8636796+cyanguwa@users.noreply.github.com>
-
李金梁 authored
* Fix bug in torch compile when seqdim is an integer
* Update attention.py: change jit_fuser to torch.compile on flash_attn_fwd_out_correction
* Annotate fused functions
Signed-off-by: 李金梁 <975761915@qq.com>
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
Co-authored-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
-
- 09 Oct, 2024 2 commits
-
Charlene Yang authored
* improve get_attention_backend logic
* polish logic and wording
* remove redundant comment
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Charlene Yang <8636796+cyanguwa@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
Charlene Yang authored
* add extra_state change description for different TE versions
* add FAQ page
* update FAQ page
* fix extra_state tests
* minor fixes
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Charlene Yang <8636796+cyanguwa@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
- 08 Oct, 2024 1 commit
-
Charlene Yang authored
* add qkv descales to FA3
* fix sbhd shapes
* force the same dtype when comparing FA3 and cuDNN FP8
* Revert "force the same dtype when comparing FA3 and cuDNN FP8" (reverts commit 19e7f877026a19a32d2f02c6c9de20df4ae2e064)
* force the same dtype when comparing FA3 and cuDNN FP8
* add try/except for FA3 when custom qkv descales are not supported
* replace FA3 installation warning with a debug logging message
* fix lint
* remove unused imports
* avoid varlen_func for FP8 and improve messaging
* add SWA support for FA3
* change preference reason for FP8 logic
* minor fixes
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Charlene Yang <8636796+cyanguwa@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
- 07 Oct, 2024 1 commit
-
Xiaowei Ren authored
* change API for hierarchical CP
* move fp8 code before qkv reshape
* try to insert A2A for hierarchical CP
* make fwd work
* remove a redundant sync
* make bwd of hierarchical CP work
* fix dout a2a in bwd
* fix q_f16 with fp8
* assert hierarchical CP implementation does not support THD format
* assert hierarchical CP does not support attn bias
* add unit test for hierarchical CP
* fix cp_comm_type in unit test
* bug fixes and code cleaning
* an assert info change
* dout shape fix
* move function definitions to the front of the first call
* fix tensor view comments
* refine CP unit test
* typo fixes
* save cp_size_a2a and rank_a2a in fwd
* add more explanations of cp_group in docstring
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Xiaowei Ren <xren@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
- 03 Oct, 2024 1 commit
-
Charlene Yang authored
move block_table arg to varlen_func section
Signed-off-by: Charlene Yang <8636796+cyanguwa@users.noreply.github.com>
-
- 27 Sep, 2024 1 commit
-
Paweł Gadziński authored
* Docs fixes
Signed-off-by: Pawel Gadzinski <pgadzinski@nvidia.com>
-
- 19 Sep, 2024 1 commit
-
Xin Yao authored
* relax contiguous check for flash attention
* force contiguous for cp
Signed-off-by: Xin Yao <xiny@nvidia.com>
-
- 18 Sep, 2024 1 commit
-
Sudhakar Singh authored
* make rotary_base an argument
* rotary base can be a float
Signed-off-by: Sudhakar Singh <sudhakars@nvidia.com>
Co-authored-by: Tim Moon <4406448+timmoon10@users.noreply.github.com>
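Allowing rotary_base to be a float only changes the type of the frequency base. A hedged sketch of how RoPE inverse frequencies are typically derived from it (function name illustrative, not TE's API):

```python
import math

def rope_inv_freq(dim, rotary_base=10000.0):
    # Standard RoPE frequency schedule: inv_freq[i] = base^(-2i/dim)
    # for each pair of embedding channels. rotary_base may be any
    # positive float, not just an integer like the default 10000.
    assert rotary_base > 0 and dim % 2 == 0
    return [rotary_base ** (-2.0 * i / dim) for i in range(dim // 2)]
```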
-
- 09 Sep, 2024 1 commit
-
Xiaowei Ren authored
* clean code for CP function args
* add a placeholder for Ulysses implementation
* commit code change to CP+A2A
* finish the draft fwd implementation of Ulysses
* add draft bwd implementation of Ulysses
* make swa work with ulysses
* commit FP8 code for Ulysses
* fix qkv type in the bwd of FP8+CP
* fix qkv_dtype of FP8+CP
* code refactoring and style changes
* config cp correction dtype of FP8+CP
* save chunk_ids
* try to make Ulysses A2A async
* make more a2a async
* fix a2a_outputs
* fix chunk_ids generation for A2A
* avoid code duplication of a2a before and after attn
* add cp_stream in A2A implementation
* fix qkv of fp8_fwd + bf16_bwd
* fix kernel order in cp a2a communication
* fix merging with main
* fix a2a communication order
* adjust sequence chunk reordering for a2a
* add docstring for A2A implementation
* add unit tests of A2A implementation and more CP unit tests
* fix CP unit tests
* fix window size of no_mask
* fused attn does not support swa+no_mask
* change num_gqa_groups to 2 for A2A implementation
* function and variable renaming
* code cleaning for CP a2a and all-gather implementations
* commit code change for kv all-gather implementation
* fix all-gather cp implementation
* add a window size check
* add unit test of all_gather+no_mask
* fix FP8 with A2A implementation
* add paper references (abs links) to CP implementations with all-gather and all-to-all
* elaborate cp_comm_type
* fix CP docstring
* [pre-commit.ci] auto fixes from pre-commit.com hooks
Signed-off-by: Xiaowei Ren <xren@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
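The chunk_ids bookkeeping above comes from the load-balanced sequence sharding commonly used for causal attention with context parallelism: the sequence is split into 2 * cp_size chunks, and each rank holds one chunk from the front plus its mirror from the back, so the causal-attention work per rank is roughly even. A hedged sketch (not TE's exact helper):

```python
def seq_chunk_ids(cp_size, rank):
    # Pairing chunk r with chunk 2*cp_size - 1 - r balances causal
    # attention cost: early chunks attend to little context, late
    # chunks to a lot, and each rank gets one of each.
    assert 0 <= rank < cp_size
    return [rank, 2 * cp_size - 1 - rank]
```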
-
- 05 Sep, 2024 1 commit
-
Selvaraj Anandaraj authored
* Added offloading support for FP8 attention
* Update transformer_engine/pytorch/attention.py
* Fix
Signed-off-by: Selvaraj Anandaraj <selvaraja@login-eos02.eos.clusters.nvidia.com>
Signed-off-by: Selvaraj Anandaraj <anandaraj@wisc.edu>
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
Co-authored-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
-