1. 16 Dec, 2025 1 commit
    • [Refactor] Reduce direct dependency on PyTorch due to its limited type support (#1444) · dda45126
      Lei Wang authored
      * [Enhancement] Update KernelParam to use tvm.DataType directly and add torch_dtype conversion method
      
      - Changed dtype in KernelParam from torch.dtype to tvm.DataType to support a wider range of data types and prevent information loss during conversions.
      - Added a new method, torch_dtype, to convert tvm.DataType back to torch.dtype for tensor creation.
      - Updated various adapters to utilize the new torch_dtype method for parameter type conversion during initialization.
      
      * [Enhancement] Refactor CUDA type handling and add support for FP4 and FP8 types
      
      - Renamed functions for clarity: GetFP8Type, GetFP6Type, and GetFP4Type are now GetTileLangFP8Type, GetTileLangFP6Type, and GetTileLangFP4Type respectively.
      - Enhanced FP4 type handling to support additional lane sizes (2, 4, 8, 16, 32, 64).
      - Updated CUDA code generation to include new FP8 and FP4 types, ensuring proper type handling in PrintType and related functions.
      - Introduced new structures for FP8 types in cuda_fp8.h to facilitate better memory management and type packing (a rough sketch follows this list).
      - Added methods in KernelParam and tensor utilities to recognize and handle float4 types, improving compatibility with PyTorch.
      - Enhanced logging for debugging purposes in various CUDA functions to track type handling and memory operations more effectively.
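      
      A rough sketch of the kind of packed FP8 structure described above (the struct and helper names here are hypothetical, not the ones the commit adds):
      
      ```cuda
      #include <cuda_fp8.h>  // __nv_fp8_storage_t (CUDA 11.8+)
      
      // Hypothetical packed type: four e4m3 bytes in one aligned 32-bit word,
      // so codegen can move four lanes per transaction instead of one.
      struct __align__(4) fp8_e4m3_x4 {
        __nv_fp8_storage_t v[4];
      };
      
      __device__ __forceinline__ fp8_e4m3_x4 load_fp8x4(const fp8_e4m3_x4 *src) {
        fp8_e4m3_x4 out;
        *reinterpret_cast<unsigned int *>(&out) =
            *reinterpret_cast<const unsigned int *>(src);
        return out;
      }
      ```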
      
      * lint fix
      
      * Remove unnecessary logging statements from CUDA code generation and delete obsolete matrix multiplication test file.
      
      * [Enhancement] Add support for FP4 and FP8 types in CUDA code generation
      
      - Enhanced PrintVecElemLoad and PrintVecElemStore functions to handle new FP4 types (see the sketch after this list).
      - Updated arg_binder to allow float4 to match int8 at runtime, improving compatibility with PyTorch.
      - Modified loop_vectorize to account for buffer dtype lanes in vectorization calculations.
      - Refactored tensor type mapping to support new float4 and float8 types, ensuring correct type handling in tensor operations.
      - Added tests for FP4 and FP8 copy operations to validate functionality and integration with existing workflows.
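      
      For the FP4 element accesses, the emitted code ultimately reduces to shift-and-mask on a packed word; a minimal sketch under that assumption (helper names hypothetical):
      
      ```cuda
      // Read the i-th 4-bit lane of a packed FP4 word.
      __device__ __forceinline__ unsigned fp4_get_lane(unsigned packed, int i) {
        return (packed >> (4 * i)) & 0xFu;
      }
      
      // Write the i-th lane: clear the nibble, then OR in the new bits.
      __device__ __forceinline__ unsigned fp4_set_lane(unsigned packed, int i,
                                                       unsigned nibble) {
        unsigned shift = 4u * static_cast<unsigned>(i);
        return (packed & ~(0xFu << shift)) | ((nibble & 0xFu) << shift);
      }
      ```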
      
      ---------
      Co-authored-by: Zhiwen Mo <zm125@ic.ac.uk>
  2. 15 Dec, 2025 2 commits
  3. 13 Dec, 2025 2 commits
    • [CUDA] Add read-only parameter annotation for CUDA codegen (#1416) · 00dd7388
      Lei Wang authored
      * [Enhancement] Add read-only parameter annotation for CUDA codegen
      
      * Introduced the `AnnotateReadOnlyParams` transformation to annotate read-only handle parameters in PrimFuncs, enabling the generation of `const` qualifiers in CUDA codegen.
      * Updated `PrintFunctionSignature` and `AddFunction` methods to utilize the new attribute `tl.readonly_param_indices`, enhancing performance by allowing read-only cache loads (sketched after this list).
      * Modified the optimization pipeline to include the new annotation step, improving the overall efficiency of the code generation process.
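      
      In terms of generated CUDA, the effect of the annotation is roughly the following (kernel and buffer names are made up for illustration):
      
      ```cuda
      // Without the annotation, nothing tells the compiler A is never written:
      //   __global__ void kernel(float *__restrict__ A, float *__restrict__ C);
      
      // With tl.readonly_param_indices covering A, codegen emits a const pointer,
      // which lets loads of A be served through the read-only data cache:
      __global__ void kernel(const float *__restrict__ A, float *__restrict__ C) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        C[i] = A[i] * 2.0f;
      }
      ```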
      
      * lint fix
      
      * [Dependency] Update apache-tvm-ffi version to >=0.1.3
      
      * Updated the version of apache-tvm-ffi in pyproject.toml, requirements.txt, and requirements-dev.txt to ensure compatibility with the latest features and fixes.
      * Made adjustments in CUDA and HIP template files to use `const` qualifiers for global pointer parameters, enhancing code safety and clarity.
      
      * lint fix
      
      * [Enhancement] Refactor ReadWriteMarker for improved parameter handling
      
      * Updated the ReadWriteMarker class to accept a set of parameter or data variables, enhancing its ability to track written variables.
      * Introduced a new method, ResolveDataVarFromPtrArg, to resolve underlying buffer data from pointer-like arguments, improving accuracy in identifying written variables.
      * Modified the MarkReadOnlyParams function to gather handle parameters and their corresponding buffer data variables, streamlining the process of determining read-only parameters.
      * Enhanced the logic for identifying written variables to account for aliased data variables, ensuring comprehensive tracking of modifications.
      
      * lint fix
      
      * Update tma_load function to use const qualifier for global memory pointer
      
      * Changed the parameter type of gmem_ptr in the tma_load function from void* to void const* to enhance type safety and clarity in memory operations.
      * This modification ensures that the function correctly handles read-only global memory pointers, aligning with best practices in CUDA programming.
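      
      The change itself is just pointer qualification; a minimal sketch of its shape (the real tl::tma_load takes additional descriptor and barrier arguments):
      
      ```cuda
      // Before: __device__ void tma_load(void *gmem_ptr, void *smem_ptr, ...);
      // After: the global-memory source is typed as read-only.
      __device__ inline void tma_load(void const *gmem_ptr,
                                      void *smem_ptr /* , ... */) {
        // issue the TMA copy from gmem_ptr into smem_ptr (body elided)
      }
      ```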
      
      * Remove commented-out code and reorder transformations in OptimizeForTarget function for clarity
      
      * Refactor buffer marking logic in annotate_read_only_params.cc to improve accuracy in identifying written variables. Update OptimizeForTarget function to reorder transformations for better clarity.
    • [Atomic] Use ptr for atomicAdd dst instead of reference (#1425) · 3546e2ee
      Lei Wang authored
      * [Enhancement] Update AtomicAdd function signature to accept pointer to destination
      
      * Modified AtomicAdd in CUDA to take a pointer instead of a reference for the destination argument.
      * Updated related code in atomicadd_vectorize.cc to ensure compatibility with the new signature.
      * Adjusted Python interface in atomic.py to pass the destination by pointer, aligning with device function requirements.
      
      * [Enhancement] Refactor AtomicAddRet function signature to accept pointer
      
      * Updated AtomicAddRet in both CUDA and HIP to take a pointer instead of a reference for the address argument, improving consistency with the AtomicAdd function.
      * Adjusted the implementation to ensure proper reinterpretation of the address type for atomic operations.
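      
      A simplified sketch of the pointer-taking signatures (the real templates cover more types and vector widths):
      
      ```cuda
      // Destination is an address, matching CUDA's own atomicAdd convention.
      template <typename T>
      __device__ __forceinline__ void AtomicAdd(T *dst, T value) {
        atomicAdd(dst, value);
      }
      
      // Returning variant: yields the value stored at *addr before the update.
      template <typename T>
      __device__ __forceinline__ T AtomicAddRet(T *addr, T value) {
        return atomicAdd(addr, value);
      }
      ```
      
      Call sites change accordingly, e.g. `AtomicAdd(&C[i], v)` instead of `AtomicAdd(C[i], v)`.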
      
      * lint fix
      
      * [Enhancement] Refactor AtomicAddNode::MakeSIMTLoop to use destination pointer
      
      * Updated the MakeSIMTLoop function to build a pointer to the destination element using tvm_access_ptr instead of loading the destination value directly.
      * Simplified the handling of source and destination predicates, improving clarity and maintainability of the code.
      * Ensured compatibility with the new pointer-based approach for atomic operations.
      
      * lint fix
      
      * test fix
      
      * lint fix
  4. 01 Dec, 2025 1 commit
    • [Language] support `T.gemm_sp_v2` on sm80 and sm89 (#1056) · 283a9a00
      botbw authored
      * [misc] add a cpp side wrapper for gemm_sp_py
      
      * [misc] typing
      
      * [IR] bind GemmSPWarpPolicy
      
      * [chore] add wrapper code
      
      * [IR] fix GemmSPWarpPolicy
      
      * [codegen] apply ptxas instructions
      
      * [intrinsic] add typical (unused) mma layout
      
      * [template] add uint16 debug func
      
      * [intrinsic] add b matrix layout
      
      * [gemm_sp] enable fp16/bf16 on sm8x
      
      * [layout] refactor fp16/bf16 layout
      
      * [gemm_sp] enable int8
      
      * [chore] update test case dtype
      
      * [gemm_sp] enable fp32
      
      * [layout] refactor layouts
      
      * [intrinsic] enable ldmatrix for mat A
      
      * [layout] enable ldsm for matrix b
      
      * [layout] add ldmatrix for fp32 and fp8
      
      * [chore] refine
      
      * [chore] refactor
      
      * [chore] add fp8 refactor
      
      * [chore] refactor
      
      * [chore] add remove negative zero util
      
      * [example] add a custom compress kernel
      
      * [chore] minor update
      
      * [test] refactor gemm_sp test
      
      * [refactor] make metadata layout func
      
      * [example] add option for using cutlass layout
      
      * [doc] add a gemm_sp doc
      
      * [doc] minor polish
      
      * [chore] remove unused
      
      * [bugfix] fix non replicate b case
      
      * [test] refactor
      
      * [chore] add a check
      
      * [bugfix] fix util bug
      
      * [wip] init a new test case for v2
      
      * [chore] minor refactor
      
      * [chore] minor update
      
      * [bugfix] enable 16bit rs
      
      * [language] enable rs
      
      * [language] enable gemm_sp_sr
      
      * [language] enable gemm_sp_rr
      
      * [test] enable more tests
      
      * [tvm] update ffi binding
      
      * [chore] remove print
      
      * [chore] fix benchmark script
      
      * [lint] precommit lint
      
      * [chore] apply feedback
      
      * [test] use arch 8.0
      
      * [chore] rollback ::ordered_metadata for backward compatibility
      
      * [bugfix] fix capitalization
      
      * [example] keep gemm_sp on hopper
      
      * [test] fix no fp8 normal kernel
      
      * [test] reduce matmul size to keep accumulation error within tolerance
      
      * [test] use cal_diff for assertion
      
      * [bugfix] expand float8 type
      
      * [lib] add make_int4 for short type
      
      * [language] add transpose E
      
      * [bugfix] fix wrong var
      
      * [format] format
      
      * [chore] refactor binding
      
      * [chore] fix wrongly passed var
  5. 26 Nov, 2025 2 commits
  6. 24 Nov, 2025 3 commits
  7. 21 Nov, 2025 2 commits
  8. 20 Nov, 2025 2 commits
  9. 19 Nov, 2025 1 commit
  10. 16 Nov, 2025 1 commit
  11. 15 Nov, 2025 1 commit
    • [fix] NVRTC execution backend (#1256) · eb415744
      Gabriel Wu authored
      * [fix] NVRTC execution backend
      
      * [fmt] run pre-commit
      
      * [fix] coderabbit reviews
      
      * [test] add cuda-python to test dep
      
      * [fix] coderabbit reviews
      
      * [fix] CUDA 13 compatibility
      
      * [fix] sm90
      
      * [fix] CUDA 13 compatibility
      
      * [fix] pre-commit
      
      * [fix] always use cuda::std::__atomic_ref_impl
      
      * [fix] restore to external API
      
      * Revert "[fix] restore to external API"
      
      This reverts commit 49bd875638fb631d270015f408991d38fd1e9a5d.
      
      * [fmt] use spaces instead of tabs for py codegen
      
      * [fix] im2col API
      
      * [fix] revert atomic.h
      
      * [fix] dynamic shape
      
      * [refactor] extract common utils
      
      * [feat] support L2 persistent map
      
      * [fix] l2 persistent map
      
      * [fix] pre-commit
      
      * [fix] restore _TYPE_MAP
      
      * [fix] pre-commit
      
      * [fix] avoid duplicate TMA descs
      
      * [docs] add docstring
      
      * [fix] coderabbit
      
      * [fix] coderabbit
      
      * [fix] coderabbit
      
      * [fix] coderabbit
  12. 13 Nov, 2025 1 commit
    • [Bugfix] Fix fp8 dtype for some cases (#1246) · 63bf1609
      Lei Wang authored
      * [Enhancement] Add FP8 support and reproducibility in lighting indexer
      
      * Introduced a manual seed in `test_fp8_lighting_indexer` to ensure reproducible results.
      * Added specializations for `cute::float_e4m3_t` and `cute::float_e5m2_t` in `gemm_mma.h` for enhanced FP8 support across multiple CUDA architectures, ensuring compatibility and improved functionality.
      
      * Fix typos in `fp8_lighting_indexer.py` and improve formatting in `gemm_mma.h`
      
      * Corrected a typo in the comment for `test_fp8_lighting_indexer` to enhance clarity.
      * Reformatted lines in `gemm_mma.h` for better readability by aligning template specializations across multiple CUDA architectures.
      
      * test fix
      
      * bug fix
  13. 12 Nov, 2025 1 commit
    • [Refactor] Add kernel selection option for GEMM v1 in environment settings (#1200) · 8fbe1b3a
      Lei Wang authored
      * Add kernel selection option for GEMM v1 in environment settings
      
      - Introduced `TILELANG_USE_GEMM_V1` environment variable to control the selection of GEMM version.
      - Added `use_gemm_v1` method in the `Environment` class to determine if GEMM v1 should be used based on the environment variable.
      - Updated GEMM function assignment to default to v2, allowing for v1 to be forced via the new environment variable.
      
      * bug fix
      
      * Add kernel selection option for GEMM in environment settings
      
      - Introduced `TILELANG_USE_GEMM_V1` environment variable to allow users to select between GEMM v1 and v2 implementations.
      - Updated `gemm` function to default to v2 but switch to v1 if the environment variable is set to a truthy value.
      - Added a method `use_gemm_v1` in the `Environment` class to facilitate this selection based on the environment variable.
      
      * Refactor GEMM macro generator to use BufferRegion instead of Buffer
      
      - Updated `wgmma` and `wgmma_rs` methods in `TensorCoreIntrinEmitter` to accept `BufferRegion` parameters instead of `Buffer`.
      - Adjusted related calls in `GemmWGMMA` to ensure compatibility with the new parameter types.
      - Simplified buffer access logic for better clarity and maintainability.
      
      * Refactor GEMM functions to utilize BufferRegion for improved memory handling
      
      - Updated `run_gemm`, `run_gemm_rs`, `run_gemm_sr`, and `run_gemm_rr` functions to set `num_stages` based on block dimensions, enhancing performance for larger matrices.
      - Simplified calls to GEMM functions by removing redundant parameters and ensuring compatibility with BufferRegion.
      - Introduced utility functions for converting between Buffer, BufferLoad, and BufferRegion, improving code clarity and maintainability.
      - Enhanced error handling for full region checks in GEMM operations to ensure correctness in memory access.
      
      * Refactor GEMM code for improved readability and consistency
      
      - Cleaned up formatting and spacing in GEMM-related files for better readability.
      - Standardized comments and code structure across various GEMM functions and macros.
      - Enhanced error messages for clarity in buffer region checks.
      - Removed redundant lines and improved overall code maintainability.
      
      * Update GEMM correctness evaluation and macro generator for improved functionality
      
      - Modified `N_VALUES` in `correctness_evaluation_sm70.py` to include only relevant sizes for tests.
      - Updated test function call in `correctness_evaluation.py` to use `test_gemm_false_true` for better accuracy in testing.
      - Refactored buffer handling in `mma_sm70_macro_generator.py` to improve clarity and consistency in shared buffer access.
      - Enhanced `gemm_mma_sm70.py` to ensure full region checks for input and output buffers, improving correctness in GEMM operations.
      
      * Refactor GEMM and intrinsic files for improved clarity and functionality
      
      - Removed unused variable `A_stride_last` in `mma_sm70_macro_generator.py` to streamline code.
      - Adjusted function signature formatting in `swizzle.py` for better readability.
      - Restored the return of `GemmWGMMA` in `__init__.py` for correct GEMM instantiation.
      - Removed unused variable `B_buf` in `gemm_mma_sm70.py` to enhance code cleanliness.
      - Improved function signature formatting in `language.py` for consistency.
      
      * Enhance GEMM and MMA functionality for FP64 support
      
      - Refactored `GemmNode` to streamline the decision-making process for GEMM instruction selection.
      - Added support for FP64 inputs in the MMA dispatcher, enabling new tensor operations.
      - Introduced a new layout function for FP64 in `mma_layout.py` to facilitate shared memory storage.
      - Updated `TensorCoreIntrinEmitter` to handle FP64 data types, including adjustments for micro tile dimensions and loading mechanisms.
      - Enhanced utility functions to accommodate FP64 index mapping for shared memory operations.
      
      * lint fix
      
      * Refactor GEMM correctness evaluation and shared memory alignment handling
      
      - Reverted the GEMM function call in `correctness_evaluation.py` to the original implementation for consistency.
      - Added a helper function in `merge_shared_memory_allocations.cc` to streamline the marking of shared variables under alignment scope.
      - Enhanced the `VisitExpr_` methods to ensure proper handling of shared memory alignment for `BufferLoadNode` and `VarNode` types.
      - Cleaned up commented-out test code in `correctness_evaluation.py` for better readability.
      
      * Enhance GEMM and MMA implementations with region-based memory handling
      
      - Updated GEMM and MMA classes to utilize BufferRegion for input and output buffers, improving memory management and supporting strided GEMM operations.
      - Added checks to ensure full region compliance for input buffers, enhancing correctness in matrix multiplication.
      - Implemented clear accumulation functionality to reset output buffers before accumulation, ensuring accurate results in GEMM operations.
      
      * Refactor test_tilelang_example_deepseek_v32.py to improve import structure and function calls
      
      - Updated import statements to directly reference modules instead of individual test functions, enhancing clarity.
      - Modified function calls to use the new module structure for better organization and maintainability in testing examples.
      
      * Enhance OnArrayDeclaration method to handle repeated buffer declarations
      
      - Updated the OnArrayDeclaration method to merge metadata for buffers that may appear in multiple Allocate statements, improving robustness against upstream transformations.
      - Added logic to prefer concrete element data types and record extents when previously unknown, enhancing the handling of buffer declarations.
      
      * Add abbreviation for bfloat16 data type in mfma_macro_generator.py
      
      - Introduced a new abbreviation "bf16" for the bfloat16 data type in the mfma_macro_generator.py file, enhancing clarity and consistency in data type representation.
      
      * Refactor CodeGenTileLangHIP to enhance dtype handling and mfma call generation
      
      - Introduced a mapping function to normalize input data types to their corresponding scalar types, improving compatibility with MfmaTraits.
      - Updated the mfma call generation to utilize the new mapping, streamlining the code and enhancing clarity.
      - Removed outdated dtype mapping and replaced it with a more flexible approach to support additional data types like FP8.
      
      * lint fix
      
      * Enhance backend configuration in CMakeLists.txt and improve dtype handling in CodeGenTileLangHIP
      
      - Introduced a macro to define backend options for CUDA, ROCM, and Metal, allowing user overrides and caching of settings.
      - Updated logic to track user-selected backends and conditionally enable defaults based on environment variables.
      - Refactored dtype handling in CodeGenTileLangHIP to streamline mfma call generation and improve clarity.
      - Added support for bfloat16 in the mfma_macro_generator.py, enhancing data type representation consistency.
      
      * Update bfloat16 handling in CodeGenTileLangHIP and mfma_macro_generator.py
      
      - Changed the representation of bfloat16 in CodeGenTileLangHIP from "bfloat16x4" to "bfloat16x4_vec" for improved clarity.
      - Adjusted the mfma_suffix generation in mfma_macro_generator.py to remove the underscore before "bf16", aligning with HIP intrinsic requirements.
      
      * Change logging level from WARNING to DLOG in LegalizeNegativeIndex for non-negative index checks to reduce log verbosity.
      
      * Refactor attention sink examples to simplify index calculations
      
      - Updated index handling in `example_gqa_sink_bwd_bhsd.py` and `example_mha_sink_bwd_bhsd.py` to eliminate unnecessary local allocations and streamline logic for determining start and end indices.
      - Improved readability by using direct calculations instead of local variables for index bounds in pipelined loops.
      
      * Refactor attention sink examples to streamline index calculations
      
      - Simplified index handling in `example_gqa_sink_bwd_bhsd.py`, `example_gqa_sink_fwd_bhsd_wgmma_pipelined.py`, `example_mha_sink_bwd_bhsd.py`, `example_mha_sink_fwd_bhsd_wgmma_pipelined.py`, and `example_mha_sink_fwd_bhsd.py` by removing unnecessary local allocations for start and end indices.
      - Enhanced readability by directly calculating index bounds for pipelined loops, improving overall code clarity.
      
      * lint fix
      
      * bugfix
      
      * Refactor reduce operation handling in CUDA and Python
      
      - Removed outdated shared memory reduction logic from `reduce.cc`.
      - Introduced fragment allocation and improved buffer handling in `reduce.py` to support shared and fragment scopes.
      - Updated CUDA header to define a wider accumulator type for better numerical accuracy.
      - Enhanced error handling for buffer scope validation in the reduction process.
      
      * Fix ReduceOpNode to correctly compute AbsMax by using absolute values of inputs
      
      * Enhance unit loop handling by refining annotation checks
      
      - Updated the condition for identifying effectively empty annotations in unit loops to include cases where only the `pragma_unroll_explicit` hint is present.
      - Introduced a new method, `IsEffectivelyEmptyAnnotation`, to encapsulate this logic, improving code clarity and maintainability.
      
      * clean code
  14. 07 Nov, 2025 1 commit
  15. 05 Nov, 2025 1 commit
    • [SM70] Refactor and minor fix for SM70 (#1195) · 4a9cb470
      Lei Wang authored
      * [Feature] Add support for SM70 tensor core MMA instructions
      
      - Introduced new intrinsic `ptx_mma_sm70` for Volta GPUs, enabling m16n16k4 shape with FP16 inputs and FP16/FP32 accumulation.
      - Added `GemmMMASm70` class for handling GEMM operations specific to SM70 architecture.
      - Implemented layout functions for Volta swizzled layouts and updated existing GEMM layout inference logic.
      - Updated `requirements-dev.txt` to include `apache-tvm-ffi` dependency.
      - Added correctness evaluation script for testing GEMM operations on SM70.
      
      * [Refactor] Update formatting and installation commands in scripts
      
      - Modified `format.sh` to install `pre-commit` and `clang-tidy` with the `--user` flag for user-specific installations.
      - Improved readability in `correctness_evaluation_sm70.py` by adjusting the formatting of pytest parameters.
      - Cleaned up spacing and formatting in various C++ source files for better consistency and readability.
      - Removed unnecessary comments and improved layout function definitions in `mma_sm70_layout.py` and `mma_sm70_macro_generator.py` for clarity.
      - Ensured consistent formatting in layout initialization and swizzle functions.
      
      * typo fix
  16. 02 Nov, 2025 2 commits
    • [Language] Add Correctness and performance check scripts for V2 (#1174) · d99853b6
      Lei Wang authored
      * fix
      
      * lint fix
      
      * fix
      
      * lint fix
      
      * fix
      
      * upd
    • [Language] Expose `T.warpgroup_fence_operand` for nvcc code motion (#986) · aef0a6bb
      Lei Wang authored
      * remove debug print
      
      * pipeline fix
      
      * use the correct buffer access scope
      
      * rs support
      
      * wrap warpgroup_fence_operand
      
      * fix
      
      * fp8 dtype ptx enhance
      
      * mma fix
      
      * TCGEN05 Interface
      
      * tcgen05 support
      
      * rebase
      
      * update
      
      * Enhance TCGEN05 support by adding new intrinsic operations and descriptors. Introduced `ptx_tcgen05_mma_ts` for tensor-memory to shared-memory instructions and `tcgen05_mma_arrive` for signaling barrier completion. Updated existing descriptors and code generation logic to accommodate these changes, ensuring compatibility with new instruction sets. Refactored related allocation functions and improved handling of shared memory descriptors.
      
      * lint fix
      
      * Refactor buffer reference handling in CUDA code generation and update test execution in tilelang. Ensure default annotations for unrolling are set correctly in TIR IR module.
      
      * wgmma fix
      
      ---------
      Co-authored-by: Zhiwen Mo <zm125@ic.ac.uk>
  17. 31 Oct, 2025 1 commit
    • [Bugfix] Support 16bits shfl_sync (#1169) · 54d4bd62
      Lei Wang authored
      * Add type-safe warp shuffle helpers for 16-bit float types in common.h
      
      - Introduced generic passthrough functions for warp shuffle operations: `shfl_xor_sync`, `shfl_down_sync`, `shfl_up_sync`, and `shfl_sync`.
      - Added specializations for `cutlass::half_t` and `cutlass::bfloat16_t` to ensure type safety during shuffle operations.
      - Updated `reduce.h` to utilize the new shuffle functions, enhancing code clarity and maintainability.
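      
      The idea behind the 16-bit specializations, sketched here with `__half` rather than the cutlass types the commit actually specializes: round-trip the payload through a 32-bit type the intrinsic accepts, reinterpreting rather than converting.
      
      ```cuda
      #include <cuda_fp16.h>
      
      __device__ __forceinline__ __half shfl_xor_sync(unsigned mask, __half val,
                                                      int lane_mask) {
        unsigned short bits = __half_as_ushort(val);  // bit-cast, no value change
        unsigned shuffled =
            __shfl_xor_sync(mask, static_cast<unsigned>(bits), lane_mask);
        return __ushort_as_half(static_cast<unsigned short>(shuffled));
      }
      ```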
      
      * lint fix
  18. 29 Oct, 2025 1 commit
    • [BugFix] Correct direct copy from bf16 to fp8 (#1090) · e1b12bd0
      Cunxiao Ni authored
      * [BugFix] Correct direct copy from bf16 to fp8
      
      * fix lint
      
      * implement overloaded cast codegen for type conversion
      
      * fix lint
      
      * remove test
      
      * fix lint
      
      * trigger CI
      
      * Overload fp8 for implicit conversion
      
      * format
      
      * new format
      
      * fix: Reinterpret types to cute types in GEMM
      
      * new format
      
      * fix lint
      
      * new format
      
      * fix lint
      
      * format
      
      * trigger ci
      
      ---------
      Co-authored-by: nicunxiao <nicunxiao@bytedance.com>
  19. 27 Oct, 2025 3 commits
  20. 25 Oct, 2025 1 commit
  21. 22 Oct, 2025 2 commits
  22. 21 Oct, 2025 1 commit
  23. 20 Oct, 2025 2 commits
  24. 15 Oct, 2025 3 commits
    • fix bug & add AMD examples (#966) · 80665cd1
      alex_xiao authored
      * [Enhancement] Refactor buffer index handling for improved precision and clarity (#668)
      
      - Enhanced buffer index handling to address precision issues by removing redundant operations.
      - Streamlined the logic for determining buffer overlaps, ensuring more accurate conflict detection.
      - Updated related documentation to reflect changes in buffer management practices.
      
      * Remove obsolete test script for AMD example, streamlining the examples directory.
      
      * Remove unused dtype_size variable in AMD example script to streamline code.
      
      * Add input configuration file and update AMD example script for enhanced flexibility
      
      - Introduced a new input.txt file for configurable parameters.
      - Modified the example_amd_flash_attn_fwd.py script to allow for a wider range of configurations, including additional options for num_stages, enable_rasterization, and k_pack.
      - Streamlined the main function for better clarity and organization.
      - Added a new test script to facilitate running the example with specified parameters.
      
      * Remove input configuration file and obsolete test script; enhance AMD example with swizzle layout annotations
      
      - Deleted input.txt and test.sh files as they are no longer needed.
      - Updated example_amd_flash_attn_fwd.py to include swizzle layout annotations for shared memory, improving bank conflict avoidance.
      - Reintroduced swizzle usage in the kernel for better performance.
      
      * Refactor AMD example script for FlashAttention-2
      
      - Updated function names for clarity, changing `get_v2_configs` to `get_configs` and `fast_flashattn_v2` to `fast_flashattn`.
      - Streamlined the main function by renaming `main_v2` to `main` and adjusting the corresponding calls.
      - Removed outdated comments and improved code organization for better readability.
      
      * Refactor formatting in AMD FlashAttention example script
      
      - Improved code readability by adjusting line breaks and indentation in the `fast_flashattn` function.
      - Streamlined the `main` function parameter formatting for consistency.
      - Removed unnecessary blank lines to enhance overall code organization.
      
      * Update example_amd_flash_attn_fwd.py
      
      * Enhance AMD example script and update CI workflows
      
      - Improved the `example_amd_flash_attn_fwd.py` script for better clarity and organization.
      - Added new CI workflows for AMD and documentation publishing.
      - Updated various requirements files to include necessary dependencies.
      - Introduced new test cases and examples for better coverage and functionality.
      - Refactored existing code for improved readability and maintainability.
      
      * Remove redundant tool cache cleanup step in AMD CI workflow
      
      * Remove `torch` dependency from `requirements-rocm.txt` to streamline requirements.
      
      * Add new AMD FlashAttention example and test script
      
      - Introduced `example_amd_flash_attn_bwd.py` for backward attention computation using TileLang.
      - Added `test.sh` script to facilitate running the new example with specified parameters.
      - Enhanced the overall structure and organization of the example for better clarity and usability.
      
      * Update configurations in `example_amd_flash_attn_fwd.py` for autotuner
      
      - Reduced the number of threads and `num_split_q` options for improved performance.
      - Adjusted `panel_size` options to streamline configuration settings.
      
      * Update submodule 'tvm' to commit 6ccc74f622c7ec4ac25d430d0f6546e7b9edb217
      
      * Update submodule 'tvm' to commit 14ff70ab142b9e5a31bbf9c7923c8a697d41e86c
      
      * Add example for AMD Flash Attention backward pass implementation
      
      - Introduced a new example script `example_amd_flash_attn_bwd.py` demonstrating the forward and backward operations of Flash Attention using TileLang.
      - Implemented JIT-compiled functions for both forward and backward passes, including preprocessing and postprocessing steps.
      - Added a main function to facilitate testing and benchmarking of the attention mechanism with configurable parameters.
      - Included reference implementation for validation against PyTorch's attention mechanism.
      
      This addition enhances the examples directory by providing a comprehensive guide for users to understand and utilize Flash Attention in their applications.
      
      * Enhance AMD Flash Attention example with additional testing capabilities
      
      - Updated `example_amd_flash_attn_bwd.py` to include more comprehensive testing features for the Flash Attention implementation.
      - Improved the main function to allow for better parameter configuration and benchmarking.
      - Added validation checks against PyTorch's attention mechanism to ensure accuracy and reliability of the example.
      
      This update aims to provide users with a more robust tool for understanding and utilizing Flash Attention in their applications.
      
      * Update submodule TVM to commit a64a5926a6e59f5417ef2501f9d88b467337cf6a
      
      * Refactor HIP intrinsic rules to CUDA
      
      - Updated file name from `intrin_rule_hip.cc` to `intrin_rule_cuda.cc` to reflect the change in focus from HIP to CUDA intrinsic rules.
      - Adjusted include paths for better organization and clarity in the code structure.
      
      * Update AMD CI workflow to uninstall specific PyTorch packages before installation
      
      - Removed the installation of `flash_attn==2.5.8` to streamline the CI process.
      - Added a step to uninstall `torch`, `torchvision`, and `torchaudio` prior to installing pre-release versions, ensuring compatibility and reducing potential conflicts.
      
      * Remove unused shared memory allocations in AMD Flash Attention backward example
      
      - Eliminated the allocation of shared memory for `dv_shared` and `dk_shared` in `example_amd_flash_attn_bwd.py` to streamline memory usage and improve performance.
      - This change focuses on optimizing the backward pass implementation by reducing unnecessary memory overhead.
      
      * Remove unnecessary pip uninstall command from AMD CI workflow
      
      - Eliminated the step to uninstall `torch`, `torchvision`, and `torchaudio` in the AMD CI workflow, as it is no longer required for the installation of pre-release versions.
      - This change simplifies the CI process and reduces potential overhead during package management.
      
      * Refactor DispatchHIPWarpActiveMask function in HIP intrinsic rules
      
      - Updated the return statement to use std::string for concatenation in the case of 16-bit types, improving code clarity.
      - Added a null check for the CallNode pointer in DispatchHIPWarpActiveMask to enhance robustness and prevent potential dereferencing issues.
      
      * Refactor formatting of HIP intrinsic rule registrations
      
      - Adjusted the formatting of TVM_REGISTER_OP calls for better readability by aligning method chaining.
      - No functional changes were made; this update focuses on code style improvements to enhance maintainability.
      
      * Update file name and documentation for HIP intrinsic rules
      
      - Renamed the file from `intrin_rule_cuda.cc` to `intrin_rule_hip.cc` to accurately reflect the focus on HIP intrinsic rules.
      - Updated the file documentation to clarify its purpose as related to HIP rather than CUDA.
      
      * Enhance DispatchHIPShuffle function with clang-analyzer comments
      
      - Added NOLINTBEGIN and NOLINTEND comments to the DispatchHIPShuffle function to suppress clang-analyzer warnings related to inner pointer usage.
      - This change improves code clarity and maintains compliance with static analysis tools.
      
      * lint fix
      
      * fix
      
      * Enhance autotuner configurations in example_amd_flash_attn_fwd.py by adding new block sizes, stages, and panel sizes. Update test script to use relative Python path and adjust parameters for consistency.
      
      * Add backward attention example to test script
      
      - Extended the test.sh script to include a new backward attention example using example_amd_flash_attn_bwd.py.
      - Added parameters for batch size, context length, and head dimensions to ensure consistency with the forward example.
      - Updated the command for the backward tile example to match the new configuration.
      
      * Refactor FlashAttention implementation in example_amd_flash_attn_bwd.py and example_amd_flash_attn_fwd.py
      
      - Introduced new functions for forward and backward configurations to enhance autotuning capabilities.
      - Updated the FlashAttention forward and backward functions to improve performance and maintainability.
      - Adjusted test script parameters for consistency and clarity, including the addition of group handling.
      - Enhanced the autotuner configurations by refining block sizes and stages for better performance tuning.
      - Updated the main function to reflect changes in parameter names and types for better usability.
      
      * Enhance FlashAttention backward implementation in example_amd_flash_attn_bwd.py
      
      - Updated the backward function to return additional outputs, including log-sum-exp (LSE) values for improved gradient calculations.
      - Refined autotuner configurations by adding new block sizes and adjusting parameters for better performance tuning.
      - Improved shared memory usage in the backward pass to optimize memory access patterns and enhance computational efficiency.
      - Updated the main function to reflect changes in parameter handling and ensure consistency with the forward pass.
      - Enhanced correctness checks in the main function to include LSE validation alongside gradient checks.
      
      * Enhance FlashAttention backward implementation in example_amd_flash_attn_bwd.py
      
      - Introduced a scaling factor for improved numerical stability in gradient calculations.
      - Optimized shared memory usage by adding new shared buffers for intermediate calculations.
      - Refined the handling of tensor fragments to improve performance and maintainability.
      - Updated the main function to ensure compatibility with the new output parameters for backward operations.
      - Removed unnecessary parameters from the test script to streamline execution.
      
      * Refactor FlashAttention implementation in example_amd_flash_attn_bwd.py and example_mha_bwd.py
      
      - Updated the forward and backward functions to improve numerical stability and performance.
      - Enhanced shared memory usage by optimizing buffer allocations and reducing unnecessary parameters.
      - Adjusted autotuner configurations for better performance tuning and compatibility with new output parameters.
      - Added debugging and benchmarking functions for improved correctness verification and performance analysis.
      - Updated the main function to reflect changes in parameter handling and ensure consistency across examples.
      
      * Enhance FlashAttention backward implementation in example_amd_flash_attn_bwd.py
      
      - Updated scaling factor application for improved numerical stability in gradient calculations.
      - Refined tensor handling to ensure consistency with forward pass operations.
      - Optimized atomic operations for writing gradients to dK and dV using fp32 for better precision.
      - Adjusted comments for clarity and alignment with standard implementation practices.
      
      * Expand autotuner configurations in example_amd_flash_attn_bwd.py and update test.sh
      
      - Increased the range of block sizes and stages for forward and backward configurations to enhance performance tuning.
      - Adjusted the test script to include additional parameters for batch size and head dimensions, ensuring consistency with the forward example.
      - Improved comments for clarity and alignment with the updated configurations.
      
      * Enhance performance calculations and benchmarking in example_amd_flash_attn_bwd.py
      
      - Updated FLOPs calculation to account for both forward and backward passes, clarifying the total computational cost.
      - Modified benchmarking functions to evaluate the complete forward and backward performance of both reference and Tile-lang implementations.
      - Improved comments for better understanding of the performance metrics and implementation details.
      - Removed unnecessary parameter from test.sh to streamline execution.
      
      * Remove forward attention test commands from test.sh and retain backward attention execution for streamlined testing.
      
      * Refactor FlashAttention forward and backward implementations in example_amd_flash_attn_bwd.py and example_amd_flash_attn_fwd.py
      
      - Updated the forward function to return both output and log-sum-exp (LSE) values for improved gradient calculations.
      - Enhanced autotuner configurations for forward pass, including new parameters for better performance tuning.
      - Refined scaling factor calculations for numerical stability in both forward and backward passes.
      - Improved comments and documentation for clarity and consistency across implementations.
      - Adjusted main function to reflect changes in parameter handling and ensure compatibility with new output requirements.
      
      * Refactor FlashAttention implementation in example_amd_flash_attn_bwd.py
      
      - Removed outdated comments and improved clarity in the code.
      - Enhanced the forward function to consistently return output and log-sum-exp (LSE) values.
      - Updated autotuner configurations to include new parameters for better performance tuning.
      - Refined tensor handling and scaling factor calculations for improved numerical stability.
      - Adjusted the main function to ensure compatibility with updated output requirements and parameter handling.
      
      * Enhance FlashAttention backward implementation in example_amd_flash_attn_bwd.py
      
      - Updated configuration parameters for backward calculations, including new options for block sizes, threads, and rasterization.
      - Added new parameters (k_pack, qk_coalesced_width, v_coalesced_width) to improve performance tuning and memory access patterns.
      - Modified tensor copy operations to utilize coalesced widths for optimized memory loads.
      - Enhanced GEMM operations with k_pack for improved computational efficiency.
      - Refined the configuration generation logic to accommodate the new parameters, ensuring comprehensive coverage for backward pass scenarios.
      
      * Refactor configuration and tensor operations in example_amd_flash_attn_bwd.py
      
      - Updated backward configuration parameters to include larger block sizes and a wider range of threads for enhanced performance tuning.
      - Removed unnecessary parameters (k_pack, qk_coalesced_width, v_coalesced_width) from function signatures and tensor operations to simplify the implementation.
      - Optimized tensor copy operations by eliminating coalesced width specifications, streamlining memory access patterns.
      - Adjusted GEMM operations to improve computational efficiency without the use of k_pack.
      
      * Enhance HIP code generation and FP8 type support
      
      - Added support for additional FP8 types (e4m3, e4m3b11fnuz, e5m2fnuz, e8m0) in codegen_hip.cc to improve compatibility.
      - Updated error logging to include unsupported FP8 type details for better debugging.
      - Implemented handling for loop break and no-op register management in HIP within VisitExpr_ method.
      - Introduced new FP8 vector types (e5 and e8) in hip_fp8.h for enhanced functionality.
      - Added overloads for AtomicAdd in common.h to support both pointer and value arguments.
      
      * Enhance FP8 type support and clarify accumulator handling in HIP
      
      - Expanded FP8 type support in codegen_hip.cc to include additional float8 formats.
      - Updated gemm.h to clarify the handling of the accumulator when clear_accum is true.
      - Added comments in hip_fp8.h to indicate that E8M0 types are not supported in the current HIP version.
      
      * Remove deprecated files and update print statements for clarity in example_amd_flash_attn_bwd.py
      
      * Update print statement formatting for clarity in example_amd_flash_attn_bwd.py
      
      * Remove redundant verification results summary print statement in example_amd_flash_attn_bwd.py for cleaner output.
      
      * Fix formatting inconsistencies in example_amd_flash_attn_bwd.py and example_amd_flash_attn_fwd.py by adding spaces for improved readability in configuration parameters and print statements.
      
      * Refactor and enhance HIP code generation for improved FP8 support
      
      - Reorganized and cleaned up code in codegen_hip.cc for better readability and maintainability.
      - Enhanced handling of FP8 types, including additional formats and improved error logging for unsupported types.
      - Updated AtomicAdd function in common.h to streamline its implementation.
      - Refined the PrintVecElemLoadExpr method to handle volatile loads more effectively.
      - Added function to manage the addition of new functions in the code generation process.
      
      * Fix formatting issue in HIP code generation for MFMA call
      
      - Adjusted the indentation of the MFMA call code block in codegen_hip.cc for improved readability and consistency.
      
      * Refactor HIP code generation and enhance FP8 type handling
      
      - Reintroduced necessary includes and reorganized code in codegen_hip.cc for improved structure and readability.
      - Enhanced the GetFP8Type function to support additional FP8 formats and improved error handling for unsupported types.
      - Updated PrintType and PrintVecElemLoadExpr methods to better manage type conversions and vector element loading.
      - Refined the AddFunction method to streamline function addition in the code generation process.
      
      * Remove unnecessary blank line in example_amd_flash_attn_bwd.py for improved code cleanliness.
      
      * Refactor backward attention implementation in example_amd_flash_attn_bwd.py
      
      - Updated the GEMM operation to use shared memory for improved performance.
      - Adjusted parallelization parameters to enhance efficiency in the backward pass.
      
      * Fix formatting by removing an unnecessary blank line in example_amd_flash_attn_bwd.py for improved code cleanliness.
      
      * Add additional test cases for `assert_tl_matmul_correctness` with `float8_e4m3fnuz` and various configurations
      
      * Refactor test case formatting for `assert_tl_matmul_correctness` in `test_tilelang_gemm_mfma_intrinsic.py`
      
      ---------
      Co-authored-by: xinxyxiao <xinyxiao@amd.com>
      Co-authored-by: Lei Wang <34334180+LeiWang1999@users.noreply.github.com>
      Co-authored-by: LeiWang1999 <leiwang1999@outlook.com>
    • [Language] Expose `T.get_warp_idx_sync` and `T.shuffle_elect` for efficient thread election (#989) · b78d8404
      Lei Wang authored
      * Expose CUDA warp/lane intrinsics in TileLang frontend
      
      * generalize warp indexing intrinsics and add coverage
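      
      A portable sketch of what an election intrinsic like `T.shuffle_elect` can lower to (the actual lowering may instead use dedicated hardware support such as `elect.sync` on Hopper): return true on exactly one active lane of the warp.
      
      ```cuda
      __device__ __forceinline__ bool shuffle_elect_sketch() {
        unsigned active = __activemask();   // lanes currently executing
        unsigned lane = threadIdx.x & 31u;  // lane id within the warp
        return lane == static_cast<unsigned>(__ffs(active) - 1);  // lowest active
      }
      ```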
      
      * [Lint]: [pre-commit.ci] auto fixes [...]
      
      ---------
      Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
    • [CUDA] Add pack functions for FP8 types (#967) · 32ddc1ac
      LJC00118 authored
      * Remove an incorrect check
      
      * add fp8 pack function
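      
      A guess at the flavor of such a pack helper (the name and exact widths here are hypothetical): combine FP8 storage bytes into one wider word so several lanes can be stored together.
      
      ```cuda
      #include <cuda_fp8.h>  // __nv_fp8_storage_t
      
      // Pack two FP8 storage bytes into a 16-bit word (lo lands in bits 0-7).
      __device__ __forceinline__ unsigned short
      pack_fp8x2(__nv_fp8_storage_t lo, __nv_fp8_storage_t hi) {
        return static_cast<unsigned short>((static_cast<unsigned>(hi) << 8) |
                                           static_cast<unsigned>(lo));
      }
      ```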
      
      * code lint
      
      * minor fix
      
      * minor fix
      
      * minor fix
      
      * Minor fix
      
      * Minor fix
  25. 14 Oct, 2025 1 commit
  26. 11 Oct, 2025 1 commit