- 14 Oct, 2025 1 commit
-
Xuehai Pan authored
Co-authored-by: LeiWang1999 <leiwang1999@outlook.com>
-
- 13 Oct, 2025 1 commit
-
Lei Wang authored
-
- 12 Oct, 2025 1 commit
-
Zhengju Tang authored
* [BugFix] Robust GEMM policy for sparse_mla_fwd on Hopper and Ada Lovelace architectures
* [Lint]
-
- 11 Oct, 2025 2 commits
-
Yu Cheng authored
* [Feature][Example] Support TMA reduce operation and update the GQA bwd example
* Move GQA bwd with TMA reduce to a new example
* [Lint]: [pre-commit.ci] auto fixes [...]
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
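A minimal sketch of the pattern this enables, with a hypothetical `reduce="add"` flag on `T.copy` standing in for the new TMA reduce operation (the real TileLang spelling may differ): accumulate partial results in shared memory, then issue one bulk reduce-copy to global memory instead of per-element atomic adds.

```python
# Hypothetical sketch only: `reduce="add"` on T.copy stands in for the new
# TMA reduce operation; the actual TileLang API may be named differently.
import tilelang
import tilelang.language as T

@tilelang.jit
def dk_epilogue(SKV=8192, D=128, block_N=128, dtype="float32"):
    @T.prim_func
    def main(dK: T.Tensor((SKV, D), dtype)):
        with T.Kernel(SKV // block_N, threads=128) as bn:
            dk_shared = T.alloc_shared((block_N, D), dtype)
            T.fill(dk_shared, 1.0)  # stand-in for the accumulated partial dK
            # One bulk reduce-copy replaces block_N * D atomic adds:
            T.copy(dk_shared, dK[bn * block_N, 0], reduce="add")  # hypothetical flag
    return main
```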
-
Lei Wang authored
[Refactor] Refactor pass `InjectFenceProxy` and expose some warp-group primitives in the frontend (#977)
* InjectFenceProxy docs and tests
  - annotate the proxy fence injector with context comments for async/generic detection
  - add a compiler-internals doc covering the pass mechanics and link it in the docs index
  - repair the fence proxy test by fixing descriptor init usage and fence counter logic
* Do not consider call_extern as async
* Doc update
* Reduce test size for sparse MLA
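As a toy model of the mechanics this pass implements (a plain-Python sketch, not the real TIR pass): track whether the last memory op ran on the generic or the async proxy, and inject a `fence.proxy.async` at generic-to-async transitions; per this commit, `call_extern` counts as generic.

```python
# Toy model of the InjectFenceProxy logic: make generic-proxy writes visible
# before the async proxy (e.g. WGMMA, TMA) reads the same shared memory.
GENERIC, ASYNC = "generic", "async"

PROXY_OF = {
    "st.shared": GENERIC,
    "ldmatrix": GENERIC,
    "call_extern": GENERIC,   # per this commit: extern calls are not async
    "cp.async.bulk": ASYNC,
    "wgmma.mma_async": ASYNC,
}

def inject_fences(ops):
    out, last = [], None
    for op in ops:
        proxy = PROXY_OF.get(op, GENERIC)
        if last == GENERIC and proxy == ASYNC:
            out.append("fence.proxy.async")  # ordering fence at the transition
        out.append(op)
        last = proxy
    return out

print(inject_fences(["st.shared", "wgmma.mma_async", "cp.async.bulk"]))
# ['st.shared', 'fence.proxy.async', 'wgmma.mma_async', 'cp.async.bulk']
```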
-
- 10 Oct, 2025 3 commits
-
Xuehai Pan authored
* chore: misc cleanup
* feat: add pre-commit config
* chore: update lint dependencies
* style: fix lint issues
* feat: add pre-commit hooks
* fix: fix typos
* chore: update .gitattributes
* [Lint]: [pre-commit.ci] auto fixes [...]
* docs: update CONTRIBUTING.md
* chore: update default venv name
* chore: revert and exclude CUDA files
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
-
Lei Wang authored
* Remove debug print
* Remove inline let expressions from the LowerAndLegalize function in phase.py
* Add test
* Update sparse MLA examples to support SKV adjustment and correctness checks
  - Changed the SKV parameter from 32768 to 8192 in the sparse MLA backward and forward tests.
  - Added a check_correctness parameter to test functions for validation of outputs.
  - Updated test cases to reflect the new SKV values and correctness checks.
* Reduce test shape
* Update documentation structure and refactor main function parameters in example_fusedmoe_tilelang.py
  - Added a new section for compiler internals in the documentation.
  - Refactored the main function in example_fusedmoe_tilelang.py to accept parameters for hidden dimensions, expert configurations, and batch/sequence sizes, improving flexibility and readability.
* Update buffer access checks in merge_shared_memory_allocations.cc
  - Changed the condition for buffer access from less than (<) to less than or equal to (<=) to allow access at the same scope level.
  - Adjusted the logic for determining the access level when touching buffers to ensure correct handling of scope levels.
* Lint fix
* Support pipeline with LetStmt
* Lint fix
* Fix LowerTileOp let handling to avoid a LetInline dependency
  - inline let-bound BufferLoad nodes via resolver helpers and structured return
  - remap layouts/buffers using the original data vars and only rewrite when needed
  - update the pipeline planner to understand let-bound address_of buffers
  - document the new inline behaviour in docs/let_inline_fix.md
* Fix for WGMMA pipeline with let binding
* Lint fix
* Test fix
* Reduce smem usage
* Let-binding enhancement
* Fix for dpgm
* Fix simplify
* Lint fix
* Use tilelang.Simplify instead of tir.Simplify
* Add TL_FORCE_LET_INLINE pass config and gate eager LetInline usage (see the sketch after this entry)
  - register the new config in builtin headers/registration
  - add a helper to the pipeline enabling LetInline based on the pass context
  - document LetStmt inlining controls and usage
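A hedged sketch of opting in to the new behaviour; the Python-side key string is an assumption (the commit registers the config as TL_FORCE_LET_INLINE on the C++ side), while the kernel body is ordinary TileLang.

```python
import tilelang
import tilelang.language as T

@tilelang.jit(pass_configs={"tl.force_let_inline": True})  # key string assumed
def staged_copy(M=1024, N=1024, block=128, dtype="float16"):
    @T.prim_func
    def main(A: T.Tensor((M, N), dtype), B: T.Tensor((M, N), dtype)):
        with T.Kernel(T.ceildiv(N, block), T.ceildiv(M, block), threads=128) as (bx, by):
            A_shared = T.alloc_shared((block, block), dtype)
            T.copy(A[by * block, bx * block], A_shared)
            T.copy(A_shared, B[by * block, bx * block])
    return main
```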
-
Tong WU authored
* Revert split+sum template for MHA backward
* Lint
* Update example_mha_bwd.py
* Update example_mha_bwd_wgmma_pipelined.py
* Refactor attention sink examples to support bf16 and a user-defined softmax scale
* Fix typos
* Add compile flags for fast-math optimizations and enable BF16 support in both the GQA and MHA backward implementations
* Update the backward configuration for the GQA and MHA examples to align with flash attention
* Refactor the GQA backward implementation to improve atomic-add performance
* Allow slightly larger numerical error for bf16
* Update the README to show bf16 benchmark results
* Lint
* Fix CI and lint
* Fix comments and lint
* Refactor atomic add
---------
Co-authored-by: Lei Wang <34334180+LeiWang1999@users.noreply.github.com>
-
- 09 Oct, 2025 1 commit
-
Lei Wang authored
* [Feature] Introduce WGMMA support and enhance GEMM layout handling
  - Added support for the WGMMA intrinsic in the TileLang framework, enabling efficient matrix multiplication on newer architectures.
  - Refactored GEMM layout functions to accept a boolean parameter for K-dimension handling, improving flexibility in layout generation.
  - Updated layout inference logic to accommodate the new WGMMA configurations and ensure compatibility with existing GEMM operations.
  - Enhanced Python bindings for layout functions, allowing better integration in user-defined operations.
  - Improved documentation for layout functions and GEMM operations to clarify usage and parameters.
* [Refactor] Clean up code formatting and enhance layout function readability
  - Improved code formatting across multiple files, including consistent indentation and line breaks.
  - Updated layout function signatures for clarity, particularly in `gemm_layouts.cc`, `layout.cc`, and `layout.h`.
  - Refactored lambda functions in `builtin.cc` and `gemm_py.cc` for improved structure and maintainability.
  - Enhanced comments and documentation in layout-related files.
* [Feature] Add descriptor initialization and offset manipulation for WGMMA
  - Introduced new TileLang builtins `initialize_descriptor` and `increase_descriptor_offset` for descriptor management in WGMMA operations.
  - Updated `builtin.cc` and `builtin.h` to define and document the new builtins.
  - Modified `codegen_cuda.cc` and `ptx.cc` to integrate the new builtins into code generation, ensuring proper assembly generation for WGMMA.
  - Enhanced the `GemmWGMMA` class to use the new descriptor functionality.
  - Updated related tests and documentation.
* [Refactor] Improve code formatting and readability in various files
  - Updated function signatures and comments in `builtin.h`, `codegen_cuda.cc`, and `ptx.cc` for clarity.
  - Refactored descriptor initialization and offset manipulation in `builtin.py` and `wgmma_macro_generator.py`.
  - Cleaned up unnecessary whitespace and improved alignment in `common.h` and `allocate.py`.
* [Update] Update subproject commit and refactor layout function call
  - Updated the `cutlass` subproject commit to indicate a dirty state.
  - Refactored `UpdateAnalyzer` in `layout.cc` to call `LayoutNode::getVarMap()` instead of `getVarMap()`, ensuring the proper context for variable mapping.
* Support more data types
* gemm_rs support
* Lint fix
* WGMMA wrapper
* Remove debug logging for WGMMA assembly code and refactor swizzle byte-size calculations in the WGMMA macro generator. Enhanced handling of leading and stride byte offsets based on swizzle mode, improving clarity and performance in tensor-core intrinsic emission.
* Refactor GEMM layout functions to replace 'kfactor' with 'k_inner' for improved clarity and consistency. Updated error messages for Hopper and Sm100 layouts and included a new header for CUTE utilities in common.h.
* Comprehensively support WGMMA GEMM SS
* Remove debug print
* Lint fix
* Remove debug print
* Reduce bwd test shape
* Lint fix
* Clear cache for pytest
* Lint fix
* Update sparse MLA examples to support SKV adjustment and correctness checks
  - Changed the SKV parameter from 32768 to 8192 in the sparse MLA backward and forward tests.
  - Added a check_correctness parameter to test functions for validation of outputs.
  - Updated test cases to reflect the new SKV values and correctness checks.
* Test fix
* Adjust test case
* Test fix
* Skip some tests for now
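Since the descriptor format itself is public (the PTX ISA shared-memory matrix descriptor, mirrored by CUTLASS's GmmaDescriptor), here is an illustrative Python packing of the 64-bit value that `initialize_descriptor` has to produce; field positions follow that public documentation, not this repo's code.

```python
# Illustrative re-implementation of the WGMMA shared-memory descriptor layout
# (per the PTX ISA / CUTLASS GmmaDescriptor docs); not taken from this repo.
def make_smem_descriptor(smem_addr: int, lead_byte_offset: int,
                         stride_byte_offset: int, base_offset: int = 0,
                         swizzle_mode: int = 1) -> int:
    enc = lambda x: (x & 0x3FFFF) >> 4          # 14-bit field, 16-byte units
    desc  = enc(smem_addr)                      # bits  0..13: start address
    desc |= enc(lead_byte_offset)   << 16       # bits 16..29: leading offset
    desc |= enc(stride_byte_offset) << 32       # bits 32..45: stride offset
    desc |= (base_offset & 0x7)     << 49       # bits 49..51: base offset
    desc |= (swizzle_mode & 0x3)    << 62       # bits 62..63: 0=none, 1=128B,
    return desc                                 #              2=64B, 3=32B

# With this encoding, an increase_descriptor_offset-style builtin can be as
# cheap as an integer add on the address field: desc += (byte_delta >> 4).
```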
-
- 06 Oct, 2025 3 commits
-
Cunxiao Ni authored
* [Profiler] Add CUPTI profiler support
* Format
* Refactor CUPTI profiler
* Format
* Refactor
* Refactor
* Fix lint
* Fix lint
* Refactor
* Add profiler tests
---------
Co-authored-by: LeiWang1999 <leiwang1999@outlook.com>
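A hedged usage sketch: `get_profiler()` and `do_bench()` follow tilelang's existing profiler surface, while selecting CUPTI through a `backend` argument is purely an assumption about this commit's interface.

```python
import tilelang
import tilelang.language as T

@tilelang.jit
def scale(N=1 << 20, blk=256, dtype="float32"):
    @T.prim_func
    def main(A: T.Tensor((N,), dtype), B: T.Tensor((N,), dtype)):
        with T.Kernel(N // blk, threads=blk) as bx:
            for i in T.Parallel(blk):
                B[bx * blk + i] = A[bx * blk + i] * 2.0
    return main

kernel = scale()
profiler = kernel.get_profiler()
latency_ms = profiler.do_bench(backend="cupti")  # backend selector assumed
print(f"{latency_ms:.3f} ms")
```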
-
Zhichen Zeng authored
* Add sparse MLA bwd example
* Add bwd to tests
* Update README with the bwd implementation
* Comment
* Format fix
* Lint fix
* Fwd fix
---------
Co-authored-by: LeiWang1999 <leiwang1999@outlook.com>
-
Tong WU authored
* Revert split+sum template for MHA backward
* Lint
* Update example_mha_bwd.py
* Update example_mha_bwd_wgmma_pipelined.py
---------
Co-authored-by: Lei Wang <34334180+LeiWang1999@users.noreply.github.com>
-
- 05 Oct, 2025 3 commits
-
Lei Wang authored
* TMA disable
* int64 cast fix
-
Lei Wang authored
[Example] Introduce split+sum template and optimize `atomic_add` performance for bwd examples (#940)
* Example fix
* Lint fix
* Bug fix
* Reduce test size
-
Cunxiao Ni authored
* [Example] Fix lint to improve grouped GEMM performance with TMA
* Fix lint
-
- 04 Oct, 2025 3 commits
-
Tong WU authored
* [Enhancement] Enhance the GQA backward kernel by calculating `dq` and `dv` via copy & sum
* [Example] Implement GQA backward example for Hopper with customized tiling and pipeline
* [Example] Add relevant tests
* Fix all typos of the wrong shape of `V_shared` in macros
-
Lei Wang authored
-
lijinpei authored
* [Example] Optimize online_softmax example
  - Y should be output in float16.
  - BN needs to equal N to be really online.
  - On my H100 machine, this increases the speedup from 1.424x to 2.788x.
* Enhance
---------
Co-authored-by: LeiWang1999 <leiwang1999@outlook.com>
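The "really online" point above is the classic single-pass softmax recurrence: with BN == N, the running max and denominator are updated once per element. A NumPy toy version of that recurrence:

```python
import numpy as np

def online_softmax(x):
    m, d = -np.inf, 0.0           # running max and running denominator
    for xi in x:                  # one pass, no materialized exp(x - max)
        m_new = max(m, xi)
        d = d * np.exp(m - m_new) + np.exp(xi - m_new)
        m = m_new
    return np.exp(x - m) / d      # second pass produces the outputs

x = np.random.randn(64).astype(np.float32)
ref = np.exp(x - x.max()) / np.exp(x - x.max()).sum()
assert np.allclose(online_softmax(x), ref, atol=1e-6)
```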
-
- 02 Oct, 2025 2 commits
-
Zhiwen Mo authored
* Implements tcgen05.ld instruction support for copying from shared.tmem to local.fragment on the SM100/Blackwell architecture. Adds layout inference and lowering logic for tensor memory operations with proper physical-coordinate range analysis and warpgroup alignment checks. Changes:
  - Add kTMemLoad and kTMemStore to the CopyInst enumeration
  - Implement CheckTMemLoad() and CheckTMemStore() validation functions
  - Add LowerTmemCopy() to generate tcgen05.ld/st/cp PTX intrinsics
  - Add tmem layout inference in InferLayout() using expandTcgen05Layout
  - Support multiple instruction variants (32dp32b/64b/128b/256b)
  - Add physical-layout bounds analysis for tmem coordinates
  - Change clear_accum from bool to PrimExpr in GEMM operations
  - Fix std::optional access checks in layout_inference.cc
  - Add tmem_allocate/deallocate PTX intrinsic support
  - Fix cooperative_groups grid.sync() code generation
* Fix
* Pipeline fix
* Bug fix
* Bool fix
-
Lei Wang authored
* [Layout] Add IsCompletedReplicated method and enhance layout inference in ParallelOpNode
  - Introduced the IsCompletedReplicated method in FragmentNode to check whether a buffer is fully replicated.
  - Enhanced InferLayout in ParallelOpNode to handle layout inference for replicated buffers, ensuring only fragment[0] access is allowed.
  - Updated error handling for non-zero index access in fragment buffers to improve robustness.
* [Layout] Improve code formatting and readability in layout.cc and parallel.cc
  - Enhanced formatting in FragmentNode's IsCompletedReplicated method for better clarity.
  - Updated the InferLayout method in ParallelOpNode to improve readability by adjusting line breaks and indentation.
  - Ensured consistent formatting across conditional statements and comments for improved maintainability.
* Update
* Optimize const-index-related ops
* Bug fix
* Reduce GDN test
* Test fix
* Lint fix
* Lint fix
* Test fix
-
- 01 Oct, 2025 2 commits
-
Yu Cheng authored
-
Lei Wang authored
* Update requirements and refactor the benchmark script for the deepseek_nsa example
  - Updated requirements.txt to specify a fixed commit for the flash-linear-attention repository.
  - Refactored import paths in benchmark_nsa_fwd.py for better organization.
  - Added a new function to generate configurations for autotuning.
  - Modified the tilelang_sparse_attention function to accept parameters for block size, number of stages, and threads, enhancing flexibility.
  - Changed the allocation of shared memory for accumulators to optimize performance.
* Refactor import paths in dequantization examples to use dequantize_utils
  - Updated import statements in multiple dequantization example scripts to replace references to the removed utils.py file with the new dequantize_utils module.
  - Ensured consistency across example scripts for better organization and maintainability.
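A hedged sketch of that autotuning hookup; the decorator import path and signature are assumptions, the parameter names mirror the commit message, and the config values are illustrative.

```python
import itertools
import tilelang
from tilelang.autotuner import autotune  # import path assumed

def get_configs():
    # Cartesian product over the tunable parameters named in this commit.
    return [
        dict(block_size=bs, num_stages=ns, threads=th)
        for bs, ns, th in itertools.product([64, 128], [1, 2, 3], [128, 256])
    ]

@autotune(configs=get_configs(), warmup=10, rep=50)  # signature assumed
@tilelang.jit(out_idx=[-1])
def tilelang_sparse_attention(block_size=128, num_stages=2, threads=128):
    ...  # build and return the T.prim_func for this configuration
```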
-
- 30 Sep, 2025 2 commits
-
botbw authored
* [CI] Optimize CI time
* [CI] Fix transpose and format
* [misc] Apply coderabbit suggestions and fix typo
-
Lei Wang authored
[Example] Specify a fixed commit for the flash-linear-attention repository and optimize NSA examples (#913)
- Updated requirements.txt to specify a fixed commit for the flash-linear-attention repository.
- Refactored import paths in benchmark_nsa_fwd.py for better organization.
- Added a new function to generate configurations for autotuning.
- Modified the tilelang_sparse_attention function to accept parameters for block size, number of stages, and threads, enhancing flexibility.
- Changed the allocation of shared memory for accumulators to optimize performance.
-
- 29 Sep, 2025 5 commits
-
Lei Wang authored
* Remove the unused `fp8_mqa_logits.py` file and update README.md to reflect the new directory structure and file descriptions for the deepseek_v32 example. Added sections for the architecture overview, Lightning Indexer, Top-k Selector, and Sparse MLA Forward implementations.
* Update linting configuration and improve code formatting in deepseek_v32 example scripts
  - Added per-file ignores for the inference directory in `pyproject.toml`.
  - Refactored code in `topk_selector.py`, `convert.py`, `generate.py`, `kernel.py`, and `model.py` to enhance readability by adjusting spacing and line breaks.
  - Ensured consistent formatting across function definitions and assertions for better clarity.
* Refactor test functions in deepseek_v32 example scripts for improved clarity and consistency
  - Updated `fp8_lighting_indexer.py` to define a dedicated test function for the lighting indexer.
  - Refactored `sparse_mla_fwd_pipelined.py` and `sparse_mla_fwd.py` to standardize test-function parameters and improve readability.
  - Enhanced `topk_selector.py` by introducing a test function with parameters for batch size and sequence length.
  - Ensured all test functions are invoked correctly in the main execution block.
* Enhance test functions in deepseek_v32 example scripts with CUDA requirements and parameterization
  - Added CUDA-requirement decorators to `test_example_sparse_mla_fwd` and `test_example_sparse_mla_fwd_pipelined`.
  - Parameterized test functions to use specific small shapes, improving test coverage and clarity.
* Lint fix
* Update README.md to correct the image path for the DeepSeek V3.2 architecture diagram
-
Wenxuan Tan authored
* Fix FLOPs computation and softmax scale
* Format
-
Lei Wang authored
* Update README.md to include the directory structure and file descriptions for the deepseek_v32 example
* Refactor and clean up deepseek_v32 example scripts
  - Removed unused imports and functions from `fp8_mqa_logits.py` to streamline the code.
  - Improved formatting and readability in `sparse_mla_fwd_pipelined.py` and `sparse_mla_fwd.py` by adjusting function signatures and indentation.
  - Added `# ruff: noqa` comments to suppress linting warnings in multiple files.
  - Enhanced the `generate_random_cu_seqlens` function in `utils.py` for better clarity and organization.
  - Updated print statements for consistent output formatting.
-
Lei Wang authored
* [Refactor] Enhance the CopyNode Lower method to support the disable_tma flag and improve the flash attention implementation
* Updated the CopyNode Lower method to correctly include the disable_tma flag in the GetCopyInst call.
* Refactored the flash attention implementation to selectively disable TMA for specific copy operations while allowing it for others.
* Addressed linting issues for improved code quality
* Sparse MLA kernels
* Remove deprecated sparse MLA and utility files to streamline the codebase.
-
Jiaxing Ding authored
-
- 28 Sep, 2025 2 commits
-
Tong WU authored
* Fix the CopyNode Lower method to include the disable_tma flag in the GetCopyInst call
* Refactor the flash attention implementation to disable TMA for specific copies while allowing TMA for other operations
* Attempt to fix lint
-
Zhiwen Mo authored
* Update SM100-related UTCMMA, tmem, and ld/st256 in src
* Update SM100-related UTCMMA, tmem, and ld/st256 in tilelang
* Remove deprecated GEMM examples and related README documentation for SM100 architecture support
* Update the GEMM implementation to replace UTCMMA with TCGEN5MMA across relevant files
* Remove the gemm_umma.py example and update the README to reflect the TCGEN5MMA terminology change
* Update README.md for the gemm_sm100 example by removing outdated API sections and streamlining documentation
* Update README and source files to reflect the TCGEN5.MMA terminology change
* Refactor the CUDA GEMM header for improved readability
-
- 26 Sep, 2025 3 commits
-
Lei Wang authored
[Layout] Introduce Flexible Parallel to support T.serial and local buffers inside T.Parallel loops (#844)
* Support T.serial and local buffers inside a T.Parallel loop (see the sketch after this entry).
* Fix reducer layout in T.Parallel nested inside other loops
* Debug output with LOG(INFO)
* Add a disable option for WGMMA.
* Fix
* Use DLOG; fix missing registration for the new pass config
* Bug fix
* Lint fix
* Enhance the GEMM instruction set with UTCMMA and improve local buffer handling in the casting example
* Update the format.sh shebang, improve logging in layout inference, and enhance the buffer-store wrapper with detailed comments
* Enhance GEMM instantiation logic and improve layout inference for local buffer detection
  - Updated the GEMM instantiation logic to include a check for WGMMA compatibility, making the conditions for using WGMMA more robust.
  - Refined the layout inference process to better identify when loops manipulate only local buffers, improving the accuracy of thread-binding decisions in parallel loops.
---------
Co-authored-by: Huanqi Cao <caohuanqi@deepseek.com>
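In miniature, what Flexible Parallel permits (a sketch with illustrative shapes, not the PR's test case): a local buffer allocated inside a T.Parallel loop, plus a nested T.serial reduction.

```python
import tilelang
import tilelang.language as T

@tilelang.jit
def rowsum(M=1024, N=64, blk=128, dtype="float32"):
    @T.prim_func
    def main(A: T.Tensor((M, N), dtype), Out: T.Tensor((M,), dtype)):
        with T.Kernel(M // blk, threads=blk) as bx:
            A_frag = T.alloc_fragment((blk, N), dtype)
            T.copy(A[bx * blk, 0], A_frag)
            for i in T.Parallel(blk):
                acc = T.alloc_local((1,), dtype)  # local buffer inside T.Parallel
                acc[0] = 0.0
                for j in T.serial(N):             # serial loop inside T.Parallel
                    acc[0] += A_frag[i, j]
                Out[bx * blk + i] = acc[0]
    return main
```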
-
Tong WU authored
* Enhance attention sink examples with swizzled layout and performance metrics
  - Added `make_swizzled_layout` annotations for shared tensors in the `flashattn` function across the MHA and GQA examples to optimize memory access patterns (see the minimal sketch after this entry).
  - Updated benchmark outputs to include speedup calculations comparing the Triton and TileLang implementations.
* Add a README for the Attention Sink example with algorithm details and benchmark results
  - Introduced a new README.md for the Attention Sink example, outlining the forward and backward algorithms, including the computation of `dsinks`.
  - Provided benchmark results comparing the optimized implementation against Triton, highlighting speedup across various configurations.
* Update README.md for the Attention Sink example to include a link to the Triton implementation
* Update examples/attention_sink/README.md
* Update examples/attention_sink/example_gqa_sink_fwd_bhsd_wgmma_pipelined.py
* Typo
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
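The annotation pattern applied here, in minimal self-contained form (buffer shapes illustrative): give shared-memory tiles a swizzled layout to reduce bank conflicts.

```python
import tilelang
import tilelang.language as T
from tilelang.layout import make_swizzled_layout

@tilelang.jit
def staged(M=256, D=128, blk=64, dtype="float16"):
    @T.prim_func
    def main(Q: T.Tensor((M, D), dtype), O: T.Tensor((M, D), dtype)):
        with T.Kernel(M // blk, threads=128) as bx:
            Q_shared = T.alloc_shared((blk, D), dtype)
            # Swizzle the shared tile's layout to avoid bank conflicts.
            T.annotate_layout({Q_shared: make_swizzled_layout(Q_shared)})
            T.copy(Q[bx * blk, 0], Q_shared)
            T.copy(Q_shared, O[bx * blk, 0])
    return main
```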
-
Tong WU authored
* [Example] Add a new example to support attention sink for MHA
  - Introduced a new example script for multi-head attention (MHA) with sliding window attention and sink tokens.
  - Added a reference attention function to validate the implementation against PyTorch.
  - Included argument parsing for command-line execution of the example.
* [Example] Replace the MHA sink forward example with an updated implementation
  - Removed the old example script for multi-head attention (MHA) with sliding window attention and sink tokens.
  - Introduced a new example script that modifies the attention mechanism to improve performance and maintainability.
  - Updated argument parsing and reference functions to align with the new implementation.
* Enhance the MHA sink example with sliding window support
  - Added a `window_size` parameter to the `flashattn` function to enable sliding window attention (a toy form of the window predicate is sketched after this entry).
  - Implemented assertions to ensure `window_size` is compatible with `block_N`.
  - Updated the main function to include a `tune` option for performance tuning.
  - Introduced a new test file to validate both full attention and sliding window scenarios.
  - Adjusted the FLOPS calculation to account for the sliding window configuration.
* Lint
* [Fix] Add an inf-check step to fix the SWA bug
* Migrate to the BSHD layout to align with the Triton baselines
* Lint
* Fix typo
* Refactor the MHA sink example to use seq_q and seq_kv parameters to accommodate the new sequence-length parameters.
* Add a GQA sink example for an optimized attention mechanism & lint fix
* Fix several typos and bugs
* Lint
* Fix speed issues of SWA
* Add a flash attention example with backward pass for the BHSD layout and corresponding test cases
* Add a backward-pass implementation for flash attention with sinks and a corresponding test case
* Fix lint and typo
* Optimize the calculation of `dsinks`
* Add support for SWA backward and update examples
* Fix previous typos
* Add an example for the GQA sink backward pass and update tests for both MHA and GQA sinks
* Fix lint
* Fix previous typos
* Typo
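The sliding-window rule those bullets implement, as a toy predicate; the `window_size`/`block_N` compatibility assertion presumably lets whole key blocks be skipped rather than masked element-wise.

```python
def attends(q_idx: int, k_idx: int, window_size: int) -> bool:
    causal = k_idx <= q_idx                      # no attention to the future
    in_window = q_idx - k_idx < window_size      # only the most recent keys
    return causal and in_window
```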
-
- 25 Sep, 2025 1 commit
-
Lei Wang authored
* Add atomic operations for CUDA templates in a new atomic.h file
  - Introduced atomic functions including AtomicMax, AtomicMin, AtomicAdd, and their return variants for various data types.
  - Implemented support for half, bfloat16, and float types with appropriate memory ordering.
  - Moved atomic-related utilities from common.h to the new atomic.h file for better organization.
  - Added Python bindings for atomic operations in tilelang, including atomic_max, atomic_min, atomic_add, and their vectorized counterparts.
  - Updated customize.py to use the new atomic functions, enhancing modularity and maintainability.
* Refactor atomic operations in CUDA templates for improved readability
  - Reformatted atomic-operation implementations in atomic.h for better code clarity.
  - Adjusted function signatures in tilelang's atomic.py to enhance readability by aligning parameters.
  - Cleaned up unnecessary whitespace and comments in customize.py.
* Add a thread-storage synchronization configuration option
  - Introduced a new configuration option `tl.disable_thread_storage_sync` to control the automatic insertion of thread synchronization barriers for shared-memory access.
  - Updated the `ThreadSync` pass to check this configuration and bypass synchronization if disabled.
  - Enhanced documentation in `builtin.h` and `pass_config.py` to clarify the purpose and usage of the new option.
* Refactor thread-storage sync configuration retrieval
  - Simplified the retrieval of the thread-storage sync configuration in the `ThreadSync` pass by removing unnecessary intermediate variables.
  - Moved the inclusion of `builtin.h` to the appropriate location in the file.
* Test fix
* Update atomic operations and tests for improved functionality
  - Updated atomic operations in CUDA templates to remove unnecessary address_of calls, enhancing performance and readability.
  - Refactored atomic-operation signatures in tilelang's atomic.py to accept references instead of pointers.
  - Added new atomic operations and corresponding test cases for atomic add, max, min, and load/store functionality in the testing suite.
  - Updated the TVM subproject to the latest commit for better compatibility.
* Update attention sink examples to use 32 heads
  - Changed the `heads` parameter in both `example_gqa_sink_fwd_bhsd_wgmma_pipelined.py` and `example_mha_sink_fwd_bhsd_wgmma_pipelined.py` from 1 to 32 to enhance performance in attention mechanisms.
  - Ensured consistency across example scripts for improved usability and testing.
* Refactor atomic-add handling in vectorization
  - Simplified the extraction of buffer loads for atomic-add operations by removing unnecessary address_of calls, improving code clarity and performance.
  - Updated the data-type retrieval for the vectorization-size calculation to directly access the BufferLoad node, enhancing efficiency.
* Add loop-break functionality and enhance thread synchronization
  - Introduced a new `loop_break` function in `customize.py`, returning a call to the `tl.loop_break` intrinsic.
  - Updated the `sync_threads` function in `builtin.py` to accept optional `barrier_id` and `arrive_count` parameters, improving its flexibility for thread synchronization.
  - Added the `loop_break` import to `__init__.py` for broader accessibility.
* Test fix
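A minimal use of the Python-level atomics bound here, as a float32 histogram sketch (exact signatures may differ; per the commit, half/bfloat16/float variants exist on the CUDA side).

```python
import tilelang
import tilelang.language as T

@tilelang.jit
def histogram(N=4096, BINS=64, blk=128):
    @T.prim_func
    def main(idx: T.Tensor((N,), "int32"), hist: T.Tensor((BINS,), "float32")):
        with T.Kernel(N // blk, threads=blk) as bx:
            for i in T.Parallel(blk):
                # Concurrent threads may hit the same bin, so accumulate atomically.
                T.atomic_add(hist[idx[bx * blk + i]], 1.0)
    return main
```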
-
- 23 Sep, 2025 3 commits
-
Tong WU authored
* [Example] Add a new example to support attention sink for MHA
  - Introduced a new example script for multi-head attention (MHA) with sliding window attention and sink tokens.
  - Added a reference attention function to validate the implementation against PyTorch.
  - Included argument parsing for command-line execution of the example.
* [Example] Replace the MHA sink forward example with an updated implementation
  - Removed the old example script for multi-head attention (MHA) with sliding window attention and sink tokens.
  - Introduced a new example script that modifies the attention mechanism to improve performance and maintainability.
  - Updated argument parsing and reference functions to align with the new implementation.
* Enhance the MHA sink example with sliding window support
  - Added a `window_size` parameter to the `flashattn` function to enable sliding window attention.
  - Implemented assertions to ensure `window_size` is compatible with `block_N`.
  - Updated the main function to include a `tune` option for performance tuning.
  - Introduced a new test file to validate both full attention and sliding window scenarios.
  - Adjusted the FLOPS calculation to account for the sliding window configuration.
* Lint
* [Fix] Add an inf-check step to fix the SWA bug
* Migrate to the BSHD layout to align with the Triton baselines
* Lint
* Fix typo
* Refactor the MHA sink example to use seq_q and seq_kv parameters to accommodate the new sequence-length parameters.
* Add a GQA sink example for an optimized attention mechanism & lint fix
* Fix several typos and bugs
* Lint
* Fix speed issues of SWA
* Update examples/attention_sink/example_gqa_sink_fwd_bhsd_wgmma_pipelined.py
* Update examples/attention_sink/example_mha_sink_fwd_bhsd_wgmma_pipelined.py
---------
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
-
Lei Wang authored
-
Tong WU authored
* Fix flash attention examples for `seqlen_q < seqlen_kv` cases
* Lint
-
- 22 Sep, 2025 2 commits
-
Lei Wang authored
* Refactor the matmul example to include ReLU activation and update the batch size in the benchmark script
* Lint fix
* Enhance autotuning capabilities in the benchmark script and update argument defaults
  - Introduced a new `get_configs` function to generate autotuning configurations for the benchmark.
  - Updated the default batch size and kv context length in the argument parser for improved performance.
  - Renamed the `--auto_tune` argument to `--autotune` for consistency.
  - Modified the kernel-invocation logic to support autotuning based on the new configurations.
* Lint fix
-
Lei Wang authored
* Refactor the matmul example to include ReLU activation and update the batch size in the benchmark script
* Lint fix
-