[PyTorch] Add ops for MoE grouped MLP (#2664)
* Add ops for MoE grouped MLP
  Signed-off-by: Tim Moon <tmoon@nvidia.com>
* Move testing utility functions to util submodule
  Signed-off-by: Tim Moon <tmoon@nvidia.com>
* Tweak docs
  Signed-off-by: Tim Moon <tmoon@nvidia.com>
* Change order of tensor compatibility checks in noop_cat
  Review suggestion from @ptrendx.
  Signed-off-by: Tim Moon <tmoon@nvidia.com>
* Add support for GLU interleaving in clamped SwiGLU
  Signed-off-by: Tim Moon <tmoon@nvidia.com>

---------

Signed-off-by: Tim Moon <tmoon@nvidia.com>
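As a rough illustration of what a "grouped MLP" for MoE does (a minimal sketch in plain PyTorch, not the ops added by this PR; the function name `grouped_mlp`, weight layout, and GELU activation are assumptions for illustration): tokens are grouped by their assigned expert, and each group is run through that expert's own MLP weights.

```python
import torch
import torch.nn.functional as F

def grouped_mlp(x, expert_ids, w1, w2, num_experts):
    """Apply a per-expert two-layer MLP to tokens grouped by expert.

    x:          [num_tokens, hidden]        input tokens
    expert_ids: [num_tokens]                expert assigned to each token
    w1:         [num_experts, hidden, ffn]  first-layer weights
    w2:         [num_experts, ffn, hidden]  second-layer weights
    """
    out = torch.empty_like(x)
    for e in range(num_experts):
        # Gather the tokens routed to expert e
        idx = (expert_ids == e).nonzero(as_tuple=True)[0]
        if idx.numel() == 0:
            continue
        h = x[idx] @ w1[e]   # [tokens_e, ffn]
        h = F.gelu(h)        # placeholder activation; the real op may use SwiGLU
        out[idx] = h @ w2[e] # [tokens_e, hidden]
    return out

# Example usage with toy shapes
x = torch.randn(16, 64)
expert_ids = torch.randint(0, 4, (16,))
w1 = torch.randn(4, 64, 128)
w2 = torch.randn(4, 128, 64)
y = grouped_mlp(x, expert_ids, w1, w2, num_experts=4)
```

A fused grouped op would replace the Python loop with a single batched kernel over all experts; the loop here is only meant to show the semantics.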
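The last bullet adds GLU interleaving to the clamped SwiGLU activation. A minimal sketch of the idea, assuming a simple clamp on the pre-activations (the clamp rule, default limit, and function name here are assumptions, not this PR's implementation): with interleaving, the gate and linear channels alternate along the last dimension instead of occupying the two contiguous halves.

```python
import torch
import torch.nn.functional as F

def clamped_swiglu(x, limit=7.0, interleaved=False):
    """Clamped SwiGLU over the last dimension of x (size 2 * ffn).

    interleaved=False: gate and linear parts are the two contiguous halves.
    interleaved=True:  gate and linear channels alternate (g0, l0, g1, l1, ...).
    The clamp limit and exact clamping rule are illustrative assumptions.
    """
    if interleaved:
        gate, linear = x[..., 0::2], x[..., 1::2]
    else:
        gate, linear = x.chunk(2, dim=-1)
    gate = gate.clamp(min=-limit, max=limit)  # clamp gate pre-activation
    return F.silu(gate) * linear
```

The interleaved layout matters when the fused projection writes gate/linear channels in alternating order, so the activation op has to de-interleave them before applying the gate.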