- 27 Aug, 2025 4 commits
yuguo authored
Merge branch 'develop_v2.7' of http://10.16.6.30/dcutoolkit/deeplearing/TransformerEngine into release_v2.7
-
yuguo authored
-
yuguo authored
Merge commit '734bcedd' of https://github.com/NVIDIA/TransformerEngine
-
- 26 Aug, 2025 13 commits
jberchtold-nvidia authored
Revert "[Common] PDL for Blockwise Quantization (#2066)"
This reverts commit ebca6153.
Signed-off-by: Jeremy Berchtold <jberchtold@nvidia.com>
-
vcherepanov-nv authored
* Bump cuDNN FE to 1.14.0
* Change submodule hash
* Pick up a cuDNN FE fix
* New model configs in tests
* Exclude cuDNN backend for some configs
Signed-off-by: Vladimir Cherepanov <vcherepanov@nvidia.com>
-
jberchtold-nvidia authored
Revert "[Common] PDL for Quantization Kernels (#2001)"
This reverts commit bfab8c67.
Signed-off-by: Jeremy Berchtold <jberchtold@nvidia.com>
-
Phuong Nguyen authored
* added shardy warning
Signed-off-by: Phuong Nguyen <phuonguyen@nvidia.com>
-
Tim Moon authored
* Return dummy wgrad tensors when requested by Mcore
* Apply suggestions from code review
Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: Tim Moon <4406448+timmoon10@users.noreply.github.com>
Co-authored-by: Jan Bielak <janekb04@icloud.com>
-
Md Fahim Faysal Khan authored
* added cp strategy arg to DPA api
* converted DPA cp_strategy to string
Signed-off-by: Md Fahim Faysal Khan <mdfahimfaysa@nvidia.com>
-
Tim Moon authored
* Fix incorrect version checks for atomic GEMM
* Fix typo
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
Signed-off-by: Tim Moon <tmoon@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
Tim Moon authored
Avoid garbage collection when capturing a CUDA graph
Signed-off-by: Tim Moon <tmoon@nvidia.com>
-
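The entry above is terse, so here is a minimal sketch of the pattern it names: suspending Python's garbage collector for the duration of CUDA graph capture, so that a GC pass cannot free tensors (and issue CUDA calls) mid-capture. This is an illustrative stand-in, not Transformer Engine's actual implementation; `gc_paused` is a hypothetical helper name.

```python
import contextlib
import gc

@contextlib.contextmanager
def gc_paused():
    # Disable Python garbage collection for the duration of the block.
    # Around CUDA graph capture, a GC pass could free CUDA tensors and
    # trigger CUDA calls that would be recorded into (or corrupt) the
    # capture, so the collector is held off until capture finishes.
    was_enabled = gc.isenabled()
    gc.disable()
    try:
        yield
    finally:
        if was_enabled:
            gc.enable()

# Hypothetical usage with PyTorch (not executed here):
#     with gc_paused():
#         with torch.cuda.graph(graph):
#             static_output = model(static_input)
```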
jberchtold-nvidia authored
[JAX] Error checking for mesh resource and update GemmPrimitive to use global_mesh_resource().fsdp_resource (#2088)
* Enforce global MeshResource is set
* Use global_mesh_resource().fsdp_resource in gemm primitive
* Update tests
* Update gemm.py
* Update test_layer.py
Signed-off-by: Jeremy Berchtold <jberchtold@nvidia.com>
-
yuguo authored
-
yuguo authored
-
- 25 Aug, 2025 4 commits
- 23 Aug, 2025 3 commits
- 21 Aug, 2025 4 commits
- 20 Aug, 2025 1 commit
yuguo authored
add swap env
See merge request dcutoolkit/deeplearing/TransformerEngine!40
-
- 19 Aug, 2025 1 commit
evt_fugx1 authored
-
- 18 Aug, 2025 6 commits
Przemek Tredak authored
Signed-off-by: Przemek Tredak <ptredak@nvidia.com>
-
Przemek Tredak authored
Signed-off-by: Przemek Tredak <ptredak@nvidia.com>
-
Phuong Nguyen authored
* fix fsdp
Signed-off-by: Phuong Nguyen <phuonguyen@nvidia.com>
-
Xin Yao authored
* check if the given recipe is supported in fp8_autocast
* resolve comments
* check only when enabled
Signed-off-by: Xin Yao <xiny@nvidia.com>
Co-authored-by: Tim Moon <4406448+timmoon10@users.noreply.github.com>
-
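A rough sketch of the behavior the entry above describes, "check only when enabled": validate the requested recipe inside an fp8_autocast-style context manager, but only if autocast is actually enabled, so an unsupported recipe does not raise when FP8 is turned off. The recipe names and the helper itself are hypothetical, not Transformer Engine's real API surface.

```python
import contextlib

# Hypothetical recipe identifiers for illustration only; TE's real
# recipes are objects (e.g. from transformer_engine.common.recipe).
SUPPORTED_RECIPES = {"DelayedScaling", "MXFP8BlockScaling"}

@contextlib.contextmanager
def fp8_autocast_sketch(enabled=True, fp8_recipe="DelayedScaling"):
    # Validate only when the autocast region is enabled, so code paths
    # that disable FP8 (e.g. on unsupported hardware) still run.
    if enabled and fp8_recipe not in SUPPORTED_RECIPES:
        raise ValueError(f"Recipe {fp8_recipe!r} is not supported on this device")
    yield
```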
Tim Moon authored
* Update list of authorized CI users
* Update .github/workflows/trigger-ci.yml
Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
Co-authored-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
-
jberchtold-nvidia authored
Fix flax variables when creating quantizers directly from a recipe
Signed-off-by: Jeremy Berchtold <jberchtold@nvidia.com>
-
- 16 Aug, 2025 1 commit
jomitchellnv authored
fix: fixes multi head attention for context parallel: rotary embedding to use padded cu_seq_lens (#2077)
fix: fixes mha to use padded cu_seq_lens during cp
Signed-off-by: Jonathan Mitchell <jomitchell@nvidia.com>
-
- 15 Aug, 2025 3 commits
Jan Bielak authored
* Add `nvte_cublas_gemm_scaled`
* Support use of `alpha` and `beta` in `tex.generic_gemm`
* Support use of `alpha` and `beta` in `general_gemm`
* Support use of `alpha` and `beta` in `BasicLinear._functional_forward` and `BasicLinear._functional_backward`
* Add `ForwardLinearScaleAdd` fusion
* Add `BackwardLinearScale` fusion
* Apply suggestions from code review
* Remove calls to `validate_gemm_scale` from `BasicLinear`
Signed-off-by: Jan Bielak <jbielak@nvidia.com>
Co-authored-by: Tim Moon <4406448+timmoon10@users.noreply.github.com>
-
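Assuming `nvte_cublas_gemm_scaled` follows the standard cuBLAS GEMM contract, the `alpha`/`beta` scaling the entry above adds computes D = alpha * (A x B) + beta * C. A pure-Python reference sketch of that contract (illustration only; the real kernel runs on-device through cuBLAS):

```python
def gemm_scaled(a, b, c, alpha=1.0, beta=0.0):
    # Reference semantics of a scaled GEMM: D = alpha * (A @ B) + beta * C,
    # with A as an n x k matrix, B as k x m, and C as n x m, all given as
    # nested lists. beta=0.0 reduces this to a plain scaled matmul.
    n, k, m = len(a), len(b), len(b[0])
    return [
        [alpha * sum(a[i][p] * b[p][j] for p in range(k)) + beta * c[i][j]
         for j in range(m)]
        for i in range(n)
    ]
```

With `alpha=1.0, beta=1.0` this is the fused "linear plus residual add" shape that a `ForwardLinearScaleAdd`-style fusion targets: the addend C is folded into the GEMM epilogue instead of a separate kernel.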
wenjh authored
-
wenjh authored
Signed-off-by: wenjh <wenjh@sugon.com>
-