"vscode:/vscode.git/clone" did not exist on "71bcaf99e2cb2c677bf3a9addb9e8039cbcab22a"
- 13 Feb, 2026 1 commit
Teddy Do authored
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Add more pieces missing from the cherry-picked Jeremy PR for inspecting
* Fix some tracing issues when integrating with MaxText
* Have sort_chunks_by_index handle situations where the input buffer is larger than the number of tokens
* Remove an unnecessary assert and comments
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Remove Jeremy's PR for inspect FFI
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Leave the amax file untouched; also change a comment on TE

Signed-off-by: tdophung <tdophung@nvidia.com>
Signed-off-by: JAX Toolbox <jax@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: JAX Toolbox <jax@nvidia.com>
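The sort_chunks_by_index fix above can be sketched in plain NumPy. The function name matches the commit, but the signature and semantics here are assumptions for illustration (the real implementation is a Triton kernel): the point is only that a buffer may hold more rows than the valid tokens, and the extra tail rows must be ignored rather than reordered.

```python
import numpy as np

def sort_chunks_by_index(buf, chunk_sizes, order):
    # Hypothetical sketch: reorder contiguous chunks of `buf` according
    # to `order`. The buffer may hold more rows than sum(chunk_sizes);
    # rows past the last chunk offset are simply never touched,
    # mirroring the "input buffer larger than num tokens" case.
    offsets = np.concatenate(([0], np.cumsum(chunk_sizes)))
    return np.concatenate([buf[offsets[i]:offsets[i + 1]] for i in order])
```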
- 16 Jan, 2026 1 commit
Teddy Do authored
* Initial implementation, not tested
* Consolidate the different unpermute primitives with with_pad and with_merging_probs booleans; implement partitioning for all permutation primitives
* Add a distributed test for non-padding permutation
* Fix issues in the distributed test for padding permutation; make the common kernel zero-initialize the output permuted scales, permuted probs, and output tokens
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Revert the zeroing in the Triton common kernel, as it is a race condition; instead, add an extra input buffer (aliased with the output) to the inner permutation primitive on the JAX side and pass in zero-initialized buffers created with jnp.zeros
* Fix utils to handle input-output aliasing in autotuned kernels
* Clean up comments, and add more comments explaining input-output aliasing in utils
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Fix lint and a Greptile comment
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Fix issues that the lint fixes introduced

Signed-off-by: tdophung <tdophung@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
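The zero-initialization fix above (pass a zero-filled buffer, aliased with the output, into the primitive rather than zeroing inside the racy Triton kernel) can be mimicked functionally. This NumPy sketch only illustrates why the output buffer must start at zero when the kernel writes a sparse subset of rows; the function name and shapes are illustrative, not the actual primitive.

```python
import numpy as np

def unpermute_with_pad(tokens, dst_rows, num_out_tokens):
    # Sketch: the kernel only writes the rows selected by dst_rows, so
    # the output buffer must be zero-initialized up front (np.zeros
    # here stands in for the jnp.zeros-backed aliased input buffer).
    # Padded rows that no token maps to then stay at zero.
    out = np.zeros((num_out_tokens, tokens.shape[1]), dtype=tokens.dtype)
    out[dst_rows] = tokens
    return out
```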
- 02 Jan, 2026 1 commit
Kirthi Shankar Sivamani authored
Update copyright to include 2026

Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>
- 27 Dec, 2025 1 commit
xiaoxi-wangfj authored
* [PyTorch] Fuse permute+pad and unpermute+unpad ops for FP8 optimization:
  1. Fuse `moe_permute_with_probs` + `Fp8Padding` and `moe_unpermute` + `Fp8Unpadding`, which removes the explicit padding/unpadding of MoE experts, improves performance, and reduces peak GPU memory usage.
  2. Add tests for the fused permute/pad and unpermute/unpad ops.
* [PyTorch/Common] Support with_merging_probs in the fused permute+pad and unpermute+unpad ops
* [PyTorch] Format code
* [Common] Perf: load expert_idx only once
* Fix: pad_offsets can be None
* Add padding + merging-probs backward support (not tested)
* Fix garbage-initialized activation gradient
* All tests passing for JAX permutation + pad
* Change the tokens_per_experts APIs to num_out_tokens, with conservative allocation of worst-case padding for the output buffer
* Change the test permutation to reduce test time
* Trigger a PR refresh
* Format code
* Remove some test cases from the PyTorch side; add a separate token_dispatch test for sanity, in case combine accidentally undoes a dispatch error in the roundtrip test; distinguish L0 and L2 test cases in JAX
* Format code
* Remove a chance for inefficiency when moving between CPU and GPU; remove a redundant primitive by using a new static bool for padding; add an assert for the align size
* Fix lint in JAX
* Account for JAX versions both newer and older than 0.8.2; adjust the GPU Triton binding accordingly
* Format code
* Fix a typo

Signed-off-by: xiaoxi-wangfj <690912414@qq.com>
Signed-off-by: tdophung <tdophung@nvidia.com>
Co-authored-by: Tim Moon <4406448+timmoon10@users.noreply.github.com>
Co-authored-by: tdophung <tdophung@nvidia.com>
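The "conservative allocation of worst-case padding" above sizes the output buffer before the actual per-expert pad amounts are known, which avoids a device-to-host sync on the real token counts. A minimal sketch of that bound, with a hypothetical helper name (not from the source): each of the num_experts chunks can gain at most align - 1 pad rows when rounded up to a multiple of align.

```python
def padded_capacity(num_out_tokens, num_experts, align):
    # Worst case: every expert chunk needs align - 1 extra rows to
    # reach a multiple of `align`, so allocate for that up front.
    return num_out_tokens + num_experts * (align - 1)
```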
- 15 Dec, 2025 1 commit
Yashaswi Karnati authored
* Fix CE loss with ignore index
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Remove fix comments
* Fall back to a divisor of 1
* Take arguments for n_rows and n_non_ignore
* Fuse the n_non_ignore division into the softmax kernel
* Fix an incorrect argument

Signed-off-by: ykarnati <ykarnati@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
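The cross-entropy fixes above (skip ignored targets; fall back to a divisor of 1 so an all-ignored batch yields 0 rather than 0/0 = NaN) can be sketched in NumPy. The kernel fusion itself happens on the Triton side; the function name and ignore_index default here are illustrative assumptions.

```python
import numpy as np

def ce_loss_mean(logits, targets, ignore_index=-100):
    # Mean cross-entropy over non-ignored targets. n_non_ignore falls
    # back to 1 when every target is ignored, so the result is 0.0
    # instead of NaN (the "fallback divisor to 1" fix).
    mask = targets != ignore_index
    n_non_ignore = max(int(mask.sum()), 1)
    shifted = logits - logits.max(axis=1, keepdims=True)  # stable softmax
    logp = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    safe_targets = np.where(mask, targets, 0)  # avoid indexing with -100
    nll = -logp[np.arange(len(targets)), safe_targets]
    return float((nll * mask).sum() / n_non_ignore)
```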
- 09 Dec, 2025 1 commit
Teddy Do authored
Change order

Signed-off-by: tdophung <tdophung@nvidia.com>
- 25 Nov, 2025 1 commit
Teddy Do authored
* Change the order of arguments to make JAX work
* Make num_experts a tl.constexpr again

Signed-off-by: tdophung <tdophung@nvidia.com>
- 10 Nov, 2025 1 commit
Teddy Do authored
* Move Triton to common and change paths
* Formatting

Signed-off-by: tdophung <tdophung@nvidia.com>