1. 12 Mar, 2025 3 commits
    • penguin_wwy's avatar
      efb2b1d5
    • Lei Wang's avatar
      [Enhancement] Simplify GEMM example with direct kernel compilation (#191) · 79ea77e8
      Lei Wang authored
      * Optimize CMake build process with dynamic job count calculation
      
      - Modify build_csrc function to use 90% of available CPU cores
      - Ensure at least one job is used during compilation
      - Improve build performance by dynamically adjusting parallel job count
      
      * Optimize build_csrc function with multiprocessing module
      
      - Replace os.cpu_count() with multiprocessing.cpu_count()
      - Maintain existing 90% CPU utilization logic
      - Improve CPU core count calculation for build process
      
      * Add dynamic shape support with out_idx in Cython JIT kernel compilation
      
      - Implement `run_cython_dynamic_shape_with_out_idx` function in test_tilelang_jit_gemm_cython.py
      - Update Cython wrapper to handle dynamic symbolic shapes during tensor allocation
      - Add support for resolving dynamic shape dimensions using input tensor references
      - Enhance flexibility of JIT kernel compilation with symbolic shape handling
      
      * Enhance error reporting for dynamic symbolic shape resolution in Cython JIT kernel
      
      - Add detailed error message when a dynamic symbolic dimension is not found in dynamic_symbolic_map
      - Improve debugging by providing context about missing symbolic dimensions
      - Maintain existing dynamic shape resolution logic
      
      * Fix Copy operation handling for scalar and multi-dimensional tensors
      
      - Add special handling for scalar tensor copy operations
      - Enhance error reporting in MakeIndices method with more detailed diagnostic information
      - Improve SIMT loop generation to support zero-dimensional tensors
      - Add explicit check and handling for scalar tensor scenarios
      
      * Refactor Copy operation code formatting and improve readability
      
      - Improve code formatting in MakeIndices and MakeSIMTLoop methods
      - Add line breaks to enhance readability of complex ICHECK statements
      - Simplify code structure in scalar tensor handling
      - Remove unnecessary whitespace and improve code alignment
      
      * Simplify GEMM example with direct kernel compilation
      
      - Update copyright header to Tile-AI Corporation
      - Remove Profiler import and usage
      - Replace tilelang.lower() with tilelang.compile() (see the usage sketch below)
      - Simplify kernel execution workflow
      - Update kernel source retrieval method
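      A hedged usage sketch of the simplified workflow described above. `matmul` stands in for the GEMM kernel factory already defined in the example; the `tilelang.compile` call follows the commit message, while `out_idx` and `get_kernel_source()` are assumptions about the surrounding API and may differ in detail.

      ```python
      import torch
      import tilelang

      func = matmul(1024, 1024, 1024, 128, 128, 32)  # T.prim_func produced by the example's kernel factory
      kernel = tilelang.compile(func, out_idx=-1)     # direct compilation, no tilelang.lower()/Profiler

      a = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
      b = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
      c = kernel(a, b)                                # output tensor allocated via out_idx

      print(kernel.get_kernel_source())               # retrieve the generated CUDA source
      ```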
      79ea77e8
    • Lei Wang's avatar
      [Bugfix] Fix `T.copy` for scalar datatypes (#190) · 454248c7
      Lei Wang authored
      * Optimize CMake build process with dynamic job count calculation
      
      - Modify build_csrc function to use 90% of available CPU cores
      - Ensure at least one job is used during compilation
      - Improve build performance by dynamically adjusting parallel job count
      
      * Optimize build_csrc function with multiprocessing module
      
      - Replace os.cpu_count() with multiprocessing.cpu_count()
      - Maintain existing 90% CPU utilization logic
      - Improve CPU core count calculation for build process
      
      * Add dynamic shape support with out_idx in Cython JIT kernel compilation
      
      - Implement `run_cython_dynamic_shape_with_out_idx` function in test_tilelang_jit_gemm_cython.py
      - Update Cython wrapper to handle dynamic symbolic shapes during tensor allocation
      - Add support for resolving dynamic shape dimensions using input tensor references
      - Enhance flexibility of JIT kernel compilation with symbolic shape handling
      
      * Enhance error reporting for dynamic symbolic shape resolution in Cython JIT kernel
      
      - Add detailed error message when a dynamic symbolic dimension is not found in dynamic_symbolic_map
      - Improve debugging by providing context about missing symbolic dimensions
      - Maintain existing dynamic shape resolution logic
      
      * Fix Copy operation handling for scalar and multi-dimensional tensors
      
      - Add special handling for scalar tensor copy operations
      - Enhance error reporting in MakeIndices method with more detailed diagnostic information
      - Improve SIMT loop generation to support zero-dimensional tensors
      - Add explicit check and handling for scalar tensor scenarios
      
      * Refactor Copy operation code formatting and improve readability
      
      - Improve code formatting in MakeIndices and MakeSIMTLoop methods
      - Add line breaks to enhance readability of complex ICHECK statements
      - Simplify code structure in scalar tensor handling
      - Remove unnecessary whitespace and improve code alignment
      454248c7
  2. 11 Mar, 2025 4 commits
    • penguin_wwy's avatar
    • Yu Cheng's avatar
      [Dev][Bugfix] Add RMS Normalization Kernels and Fix Reduce Bug (#188) · fe0de672
      Yu Cheng authored
      * [Dev][Bugfix] Add RMS Normalization Kernels and Fix Reduce Bug
      
      - Implement two RMS normalization kernels in TileLang:
        * `rms_norm_splitk`: Split-K reduction approach for large matrices
        * `rms_norm`: Full reduction kernel with simplified implementation
      - Add reference implementation using PyTorch for validation (a sketch follows this list)
      - Include performance benchmarking for both kernel variants
      - Demonstrate flexible block size and matrix size configurations
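      A PyTorch reference along the lines of the validation helper mentioned above; the function name and the epsilon handling are assumptions, not the example's literal code.

      ```python
      import torch

      def rms_norm_ref(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
          # Scale each row by the reciprocal root-mean-square of its elements.
          return x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
      ```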
      
      * [Examples] Simplify RMS Normalization Kernel Compilation
      
      - Remove commented-out code for split-K RMS normalization
      - Simplify kernel compilation by removing explicit TMA lowering configuration
      - Update copyright header to Tile-AI Corporation
      - Streamline main script for RMS normalization example
      fe0de672
    • Lei Wang's avatar
      [Bugfix] Add dynamic shape support with out_idx in Cython JIT kernel compilation (#185) · d34601ab
      Lei Wang authored
      * Optimize CMake build process with dynamic job count calculation
      
      - Modify build_csrc function to use 90% of available CPU cores
      - Ensure at least one job is used during compilation
      - Improve build performance by dynamically adjusting parallel job count
      
      * Optimize build_csrc function with multiprocessing module
      
      - Replace os.cpu_count() with multiprocessing.cpu_count()
      - Maintain existing 90% CPU utilization logic
      - Improve CPU core count calculation for build process
      
      * Add dynamic shape support with out_idx in Cython JIT kernel compilation
      
      - Implement `run_cython_dynamic_shape_with_out_idx` function in test_tilelang_jit_gemm_cython.py
      - Update Cython wrapper to handle dynamic symbolic shapes during tensor allocation
      - Add support for resolving dynamic shape dimensions using input tensor references
      - Enhance flexibility of JIT kernel compilation with symbolic shape handling
      
      * Enhance error reporting for dynamic symbolic shape resolution in Cython JIT kernel
      
      - Add detailed error message when a dynamic symbolic dimension is not found in dynamic_symbolic_map
      - Improve debugging by providing context about missing symbolic dimensions
      - Maintain existing dynamic shape resolution logic (a sketch of the resolution step follows)
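      A minimal sketch (with hypothetical names) of how a symbolic output dimension can be resolved against the input tensors' shapes before the out_idx tensor is allocated, mirroring the behaviour and error reporting described above.

      ```python
      def resolve_out_shape(out_shape, dynamic_symbolic_map, inputs):
          resolved = []
          for dim in out_shape:
              if isinstance(dim, int):
                  resolved.append(dim)  # static dimension, keep as-is
              elif dim in dynamic_symbolic_map:
                  buf_idx, shape_idx = dynamic_symbolic_map[dim]
                  resolved.append(inputs[buf_idx].shape[shape_idx])
              else:
                  raise KeyError(
                      f"Dynamic symbolic dimension '{dim}' not found in dynamic_symbolic_map")
          return resolved

      # e.g. resolve_out_shape(("m", 1024), {"m": (0, 0)}, [a, b]) -> [a.shape[0], 1024]
      ```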
      d34601ab
    • Lei Wang's avatar
      [Enhancement] Optimize CMake build process with dynamic job count calculation (#183) · c2192780
      Lei Wang authored
      * Optimize CMake build process with dynamic job count calculation
      
      - Modify build_csrc function to use 90% of available CPU cores
      - Ensure at least one job is used during compilation
      - Improve build performance by dynamically adjusting parallel job count
      
      * Optimize build_csrc function with multiprocessing module
      
      - Replace os.cpu_count() with multiprocessing.cpu_count()
      - Maintain existing 90% CPU utilization logic
      - Improve CPU core count calculation for build process (see the sketch below)
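      A sketch of the job-count calculation described above (the function name is an assumption): take roughly 90% of the available cores, but never fewer than one job.

      ```python
      import multiprocessing

      def compute_build_jobs(utilization: float = 0.9) -> int:
          # At least one job, at most ~90% of the visible CPU cores.
          return max(1, int(multiprocessing.cpu_count() * utilization))

      # The result is then passed to the build invocation, e.g. f"-j{compute_build_jobs()}".
      ```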
      c2192780
  3. 10 Mar, 2025 3 commits
    • Lei Wang's avatar
      [Examples] Implement NSA Backward kernels (#180) · 6891d3ec
      Lei Wang authored
      
      * Update native sparse attention example with scale parameter handling
      
      - Add scale parameter processing in native_sparse_attention function
      - Modify example script to include custom scale value
      - Update function calls to pass scale parameter
      - Enhance flexibility of sparse attention implementation
      
      * Refactor Triton Native Sparse Attention Example
      
      - Improve code formatting and readability in example_triton_nsa_bwd.py
      - Standardize function and parameter alignment
      - Remove unnecessary whitespaces and optimize imports
      - Enhance code style consistency with previous commits
      6891d3ec
    • Lei Wang's avatar
      [Bugfix] Improve Thread Variable Handling in Layout Inference (#179) · c39e540a
      Lei Wang authored
      * [Refactor] Improve Thread Variable Handling in Layout Inference
      
      - Update layout inference to handle thread variables more robustly
      - Add explicit size check between infer_list_ and thread_var_vec_
      - Modify thread variable access to use per-iteration thread variable
      - Simplify thread predicate retrieval logic
      - Add minor code cleanup and return variable assignment
      
      * [Refactor] Update Layout Inference Copyright and Simplify Return Logic
      
      - Replace Apache License header with Microsoft Corporation copyright notice
      - Simplify LayoutInference function by directly returning substituted function
      - Remove unnecessary variable assignment in return statement
      
      * [Refactor] Update Layout Inference Copyright to Tile-AI Corporation
      
      - Change copyright notice from Microsoft Corporation to Tile-AI Corporation
      - Maintain existing file structure and licensing header
      c39e540a
    • Lei Wang's avatar
      [Refactor] Enhance GPU Kernel Launch with Environment Thread Creation (#178) · 8ccf6ea2
      Lei Wang authored
      - Introduce `CreateEnvThread` function to generate environment threads for GPU kernel launches
      - Modify `KernelLaunch` to use `CreateEnvThread` for block and thread indices
      - Improve thread variable naming with shorter, more descriptive identifiers (bx, by, bz, tx, ty, tz)
      - Ensure proper thread environment setup within PrimFunc context
      8ccf6ea2
  4. 09 Mar, 2025 4 commits
    • Lei Wang's avatar
      [Feat] Introduce new caching mechanism for compiled kernels (#176) · 7bde63d5
      Lei Wang authored
      * Add kernel caching mechanism to TileLang
      
      - Implement a new `cached` function in `tilelang/cache/__init__.py` to cache and reuse compiled kernels
      - Expose the `cached` function in the main `tilelang/__init__.py`
      - Add a test case for cached matrix multiplication in `testing/python/cache/test_tilelang_cache_matmul.py`
      - Provide a `clear_cache()` function to reset the kernel cache when needed (a sketch of the pattern follows)
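      A hedged sketch of the caching pattern described above; the key construction and the use of `tilelang.compile` here are assumptions, and the real `tilelang.cache` implementation may differ.

      ```python
      import tilelang

      _kernel_cache = {}

      def cached(func, out_idx=None, *args, target="auto", target_host=None):
          """Compile `func(*args)` once and reuse the resulting kernel afterwards."""
          key = (func, out_idx, tuple(args), target, target_host)
          if key not in _kernel_cache:
              _kernel_cache[key] = tilelang.compile(
                  func(*args), out_idx=out_idx, target=target, target_host=target_host)
          return _kernel_cache[key]

      def clear_cache():
          """Reset the kernel cache, forcing recompilation on the next call."""
          _kernel_cache.clear()
      ```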
      
      * Refactor kernel caching test and implementation
      
      - Simplify the `cached` function in `tilelang/cache/__init__.py`
      - Update test script `test_tilelang_cache_matmul.py` to use `tilelang.testing.main()`
      - Remove unnecessary whitespace and improve code formatting
      
      * Update import for `cached` function in MHA examples
      
      - Modify import statement in `example_mha_bwd.py` and `test_tilelang_kernel_mha_bwd.py`
      - Change the import from `from tilelang.profiler import cached` to `from tilelang import cached`
      - Align with recent refactoring of kernel caching mechanism
      
      * Refactor `cached` function signature in kernel caching
      
      - Update function signature to use keyword-only arguments for `target` and `target_host`
      - Improve parameter order and readability of the `cached` decorator
      - Maintain existing functionality while enhancing function definition
      7bde63d5
    • Lei Wang's avatar
      [Feat] Append Pass Context and TMA lowering configuration option (#175) · fb6b101c
      Lei Wang authored
      * Add TMA lowering configuration option and update copyright notices
      
      This commit introduces a new configuration option to disable TMA (Tensor Memory Accelerator) lowering and updates copyright notices across multiple files. Key changes include:
      
      - Add `kDisableTMALower` configuration option in builtin.h and builtin.cc
      - Update copyright notices from Microsoft Corporation to Tile-AI Corporation
      - Modify `LowerArgs` struct to include `disable_tma_lower` flag
      - Update JIT compilation interfaces to support pass configuration
      - Enhance error reporting in bulk copy lowering
      - Propagate pass configuration through various adapter layers (an illustrative usage sketch follows)
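      Illustrative only: how a lowering flag such as `kDisableTMALower` is typically surfaced through TVM's PassContext. The exact config key string used here is an assumption, not taken from the source.

      ```python
      import tvm

      with tvm.transform.PassContext(config={"tl.disable_tma_lower": True}):
          # Lowering / JIT compilation performed inside this context skips TMA lowering.
          ...
      ```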
      
      * lint fix
      fb6b101c
    • Lei Wang's avatar
      [AutoTune] Enable config-performance trace (#174) · e6f77253
      Lei Wang authored
      * Improve Autotuner and CUDA Compatibility for Tensor Core Policies
      
      - Enhance autotuner with robust parallel compilation and error handling
      - Add logging for better debugging during configuration compilation
      - Support SM90 compute capabilities in TensorCore and matmul analysis policies
      - Improve future handling and result tracking in autotuner
      - Add more flexible SM version checks for pipeline and async copy stages
      
      * Refactor Autotuner Parallel Compilation with Improved Error Handling
      
      - Enhance tqdm progress bar formatting for concurrent configuration compilation (see the sketch after this list)
      - Simplify exception handling in parallel compilation process
      - Remove unnecessary logging and improve code readability
      - Optimize thread pool shutdown and result processing
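      A generic sketch of the parallel-compilation pattern described above (ThreadPoolExecutor plus a tqdm progress bar); `compile_config` is a stand-in for the autotuner's per-configuration compile step.

      ```python
      from concurrent.futures import ThreadPoolExecutor, as_completed
      from tqdm import tqdm

      def compile_all(configs, compile_config, num_workers=8):
          results, failures = [], []
          with ThreadPoolExecutor(max_workers=num_workers) as pool:
              futures = {pool.submit(compile_config, cfg): cfg for cfg in configs}
              for fut in tqdm(as_completed(futures), total=len(futures), desc="Compiling configs"):
                  cfg = futures[fut]
                  try:
                      results.append((cfg, fut.result()))
                  except Exception as exc:  # keep tuning alive when a single config fails
                      failures.append((cfg, exc))
          return results, failures
      ```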
      e6f77253
    • Lei Wang's avatar
      [Bugfix] Implement boundary check for the buffer shape with dynamic symbolic (#173) · 8344af52
      Lei Wang authored
      * [Refactor] Update BitBLAS Benchmark with TileLang Carver Imports and Roller Hints Generation
      
      - Replace BitBLAS imports with TileLang Carver imports in benchmark_matmul.py
      - Modify roller hints generation using new TileLang Carver template and utility functions
      - Update get_roller_hints_from_func to handle None cases and improve return logic
      - Adjust DefaultPolicy to handle different codegen dictionary formats
      
      * [Refactor] Update Thread Binding and Import Statements in TileLang Kernels
      
      - Replace T.thread_binding() with T.get_thread_binding() across multiple kernel test files
      - Update import statements for MMA layout and macro generator in dequantize GEMM and FP8 examples
      - Move map_torch_type utility function to tilelang.utils.tensor
      - Remove unnecessary imports and improve code organization
      
      * Refactor Native Sparse Attention Example with Enhanced Triton Kernel
      
      - Update parallel_nsa_fwd_kernel to support more flexible sparse attention computation
      - Add support for block counts and offsets in the Triton kernel
      - Modify kernel grid and computation logic for improved performance
      - Update example script to use naive_nsa_simple reference implementation
      - Improve type hints and kernel configuration
      
      * Add Native Sparse Attention Examples with Tilelang and Triton Implementations
      
      - Introduce new example scripts for native sparse attention:
        * example_tilelang_nsa_fwd.py: Forward pass implementation using TileLang
        * example_tilelang_nsa_decode.py: Decoding-specific sparse attention implementation
        * example_triton_nsa_fwd.py: Triton-based sparse attention forward pass
      - Update reference.py with naive implementations for sparse attention
      - Support different sparse attention scenarios including forward pass and inference
      - Add comprehensive testing and validation against reference implementations
      
      * lint fix
      
      * Add Variable-Length Native Sparse Attention Examples for TileLang and Triton
      
      - Introduce new example scripts for variable-length native sparse attention:
        * example_tilelang_nsa_fwd_varlen.py: TileLang implementation with variable sequence lengths
        * example_triton_nsa_fwd_varlen.py: Triton implementation with variable sequence lengths
      - Update reference.py to support variable-length sparse attention scenarios
      - Enhance existing sparse attention implementations to handle variable-length inputs
      - Add comprehensive testing and validation for variable-length sparse attention
      
      * Refactor Native Sparse Attention Examples: Code Style and Formatting Improvements
      
      - Standardize function and parameter formatting across NSA example files
      - Improve code readability by adjusting indentation and line breaks
      - Enhance type hints and parameter alignment
      - Remove unnecessary whitespaces and optimize imports
      - Maintain consistent code style across TileLang and Triton implementations
      
      * Add debug logging and extend execution backend in JIT and loop vectorization
      
      - Add detailed logging in loop vectorization to help diagnose buffer shape handling
      - Extend JIT execution backend to include 'cython' option
      - Improve boundary condition checks in BufferLoadNode visit method
      
      * Remove debug logging in loop vectorization BufferLoadNode visit method
      
      - Remove unnecessary INFO log statements in VisitExpr_ method
      - Simplify code by eliminating redundant logging
      - Maintain core logic for handling buffer load node visits
      8344af52
  5. 07 Mar, 2025 7 commits
    • Lei Wang's avatar
      [Example] Implement tilelang native sparse attention varlen example (#170) · 8e1845d2
      Lei Wang authored
      * [Refactor] Update BitBLAS Benchmark with TileLang Carver Imports and Roller Hints Generation
      
      - Replace BitBLAS imports with TileLang Carver imports in benchmark_matmul.py
      - Modify roller hints generation using new TileLang Carver template and utility functions
      - Update get_roller_hints_from_func to handle None cases and improve return logic
      - Adjust DefaultPolicy to handle different codegen dictionary formats
      
      * [Refactor] Update Thread Binding and Import Statements in TileLang Kernels
      
      - Replace T.thread_binding() with T.get_thread_binding() across multiple kernel test files
      - Update import statements for MMA layout and macro generator in dequantize GEMM and FP8 examples
      - Move map_torch_type utility function to tilelang.utils.tensor
      - Remove unnecessary imports and improve code organization
      
      * Refactor Native Sparse Attention Example with Enhanced Triton Kernel
      
      - Update parallel_nsa_fwd_kernel to support more flexible sparse attention computation
      - Add support for block counts and offsets in the Triton kernel
      - Modify kernel grid and computation logic for improved performance
      - Update example script to use naive_nsa_simple reference implementation
      - Improve type hints and kernel configuration
      
      * Add Native Sparse Attention Examples with Tilelang and Triton Implementations
      
      - Introduce new example scripts for native sparse attention:
        * example_tilelang_nsa_fwd.py: Forward pass implementation using TileLang
        * example_tilelang_nsa_decode.py: Decoding-specific sparse attention implementation
        * example_triton_nsa_fwd.py: Triton-based sparse attention forward pass
      - Update reference.py with naive implementations for sparse attention
      - Support different sparse attention scenarios including forward pass and inference
      - Add comprehensive testing and validation against reference implementations
      
      * lint fix
      
      * Add Variable-Length Native Sparse Attention Examples for TileLang and Triton
      
      - Introduce new example scripts for variable-length native sparse attention:
        * example_tilelang_nsa_fwd_varlen.py: TileLang implementation with variable sequence lengths
        * example_triton_nsa_fwd_varlen.py: Triton implementation with variable sequence lengths
      - Update reference.py to support variable-length sparse attention scenarios
      - Enhance existing sparse attention implementations to handle variable-length inputs
      - Add comprehensive testing and validation for variable-length sparse attention
      
      * Refactor Native Sparse Attention Examples: Code Style and Formatting Improvements
      
      - Standardize function and parameter formatting across NSA example files
      - Improve code readability by adjusting indentation and line breaks
      - Enhance type hints and parameter alignment
      - Remove unnecessary whitespaces and optimize imports
      - Maintain consistent code style across TileLang and Triton implementations
      8e1845d2
    • You Jiacheng's avatar
      [Dev] Use SS-GEMM for PV in mla (#165) · 166a9585
      You Jiacheng authored
      It's slightly faster than T.copy followed by RS-GEMM, and simpler.
      166a9585
    • Lei Wang's avatar
    • Lei Wang's avatar
      [Example] Implement NSA Decode tilelang examples (#168) · 69f35439
      Lei Wang authored
      * [Refactor] Update BitBLAS Benchmark with TileLang Carver Imports and Roller Hints Generation
      
      - Replace BitBLAS imports with TileLang Carver imports in benchmark_matmul.py
      - Modify roller hints generation using new TileLang Carver template and utility functions
      - Update get_roller_hints_from_func to handle None cases and improve return logic
      - Adjust DefaultPolicy to handle different codegen dictionary formats
      
      * [Refactor] Update Thread Binding and Import Statements in TileLang Kernels
      
      - Replace T.thread_binding() with T.get_thread_binding() across multiple kernel test files
      - Update import statements for MMA layout and macro generator in dequantize GEMM and FP8 examples
      - Move map_torch_type utility function to tilelang.utils.tensor
      - Remove unnecessary imports and improve code organization
      
      * Refactor Native Sparse Attention Example with Enhanced Triton Kernel
      
      - Update parallel_nsa_fwd_kernel to support more flexible sparse attention computation
      - Add support for block counts and offsets in the Triton kernel
      - Modify kernel grid and computation logic for improved performance
      - Update example script to use naive_nsa_simple reference implementation
      - Improve type hints and kernel configuration
      
      * Add Native Sparse Attention Examples with Tilelang and Triton Implementations
      
      - Introduce new example scripts for native sparse attention:
        * example_tilelang_nsa_fwd.py: Forward pass implementation using TileLang
        * example_tilelang_nsa_decode.py: Decoding-specific sparse attention implementation
        * example_triton_nsa_fwd.py: Triton-based sparse attention forward pass
      - Update reference.py with naive implementations for sparse attention
      - Support different sparse attention scenarios including forward pass and inference
      - Add comprehensive testing and validation against reference implementations
      
      * lint fix
      69f35439
    • Lei Wang's avatar
      [Bugfix] Cast bool dtype into int8 in blocksparse examples (#167) · b6c48453
      Lei Wang authored
      * [Refactor] Update BitBLAS Benchmark with TileLang Carver Imports and Roller Hints Generation
      
      - Replace BitBLAS imports with TileLang Carver imports in benchmark_matmul.py
      - Modify roller hints generation using new TileLang Carver template and utility functions
      - Update get_roller_hints_from_func to handle None cases and improve return logic
      - Adjust DefaultPolicy to handle different codegen dictionary formats
      
      * [Refactor] Update Thread Binding and Import Statements in TileLang Kernels
      
      - Replace T.thread_binding() with T.get_thread_binding() across multiple kernel test files
      - Update import statements for MMA layout and macro generator in dequantize GEMM and FP8 examples
      - Move map_torch_type utility function to tilelang.utils.tensor
      - Remove unnecessary imports and improve code organization
      
      * Refactor Native Sparse Attention Example with Enhanced Triton Kernel
      
      - Update parallel_nsa_fwd_kernel to support more flexible sparse attention computation
      - Add support for block counts and offsets in the Triton kernel
      - Modify kernel grid and computation logic for improved performance
      - Update example script to use naive_nsa_simple reference implementation
      - Improve type hints and kernel configuration
      b6c48453
    • Lei Wang's avatar
      [Refactor] Replace `T.thread_binding` with `T.get_thread_binding` in examples and test cases (#163) · de1ba1e4
      Lei Wang authored
      * [Refactor] Update BitBLAS Benchmark with TileLang Carver Imports and Roller Hints Generation
      
      - Replace BitBLAS imports with TileLang Carver imports in benchmark_matmul.py
      - Modify roller hints generation using new TileLang Carver template and utility functions
      - Update get_roller_hints_from_func to handle None cases and improve return logic
      - Adjust DefaultPolicy to handle different codegen dictionary formats
      
      * [Refactor] Update Thread Binding and Import Statements in TileLang Kernels
      
      - Replace T.thread_binding() with T.get_thread_binding() across multiple kernel test files
      - Update import statements for MMA layout and macro generator in dequantize GEMM and FP8 examples
      - Move map_torch_type utility function to tilelang.utils.tensor
      - Remove unnecessary imports and improve code organization
      de1ba1e4
    • Wenhao Xie's avatar
      [Enhancement] Improve CUDA path detection (#157) · 901deae1
      Wenhao Xie authored
      * [Typo] Fix formatting in installation instructions in README.md
      
      * [Enhancement] Improve CUDA path detection and update configuration handling
      
      * fix typo
      
      * remove IS_WINDOWS constant
      
      * lint fix
      
      * Improve error messages for CUDA detection failure (see the detection-order sketch below)
      
      * lint fix
      
      * lint fix
      
      * Fix .gitignore to correctly include venv directory
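      A generic sketch of the detection order such a change typically implements (environment variable, then nvcc on PATH, then the conventional install prefix); this is not the literal tilelang code.

      ```python
      import os
      import shutil

      def find_cuda_home() -> str:
          cuda_home = os.environ.get("CUDA_HOME") or os.environ.get("CUDA_PATH")
          if cuda_home:
              return cuda_home
          nvcc = shutil.which("nvcc")
          if nvcc:
              return os.path.dirname(os.path.dirname(nvcc))  # .../bin/nvcc -> install root
          if os.path.isdir("/usr/local/cuda"):
              return "/usr/local/cuda"
          raise RuntimeError("CUDA not found: set CUDA_HOME or make nvcc available on PATH")
      ```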
      901deae1
  6. 06 Mar, 2025 8 commits
    • Lei Wang's avatar
      Refactor MLA decode kernel: Replace T.If with native Python if statement (#162) · cfcbcf1e
      Lei Wang authored
      Simplify the control flow in the MLA decode kernel by replacing TileLang's T.If construct with a standard Python if statement. This change improves code readability and maintains the existing logic for handling sequence length constraints during block-wise computation.
      cfcbcf1e
    • Chaofan Lin's avatar
      [Carver] Multi-Threads Compilation for Fast Auto Tuning (#156) · 18be9e07
      Chaofan Lin authored
      * [Carver] Multi-Threads Compilation for Fast Auto Tuning
      
      * Add progress bar for compilation
      
      * lint
      18be9e07
    • xs-keju's avatar
      Add cpu jit with backend ctypes (#154) · 782ca9f6
      xs-keju authored
      
      
      * Add cpu jit with backend ctypes
      
      * Resolve some lint issues
      
      * Apply PR feedback on header file and kernel example
      
      * Add test cases
      
      * Resolve formatting issues
      
      * Resolve formatting issues
      
      ---------
      Co-authored-by: xxw <1990389406@qq.con>
      782ca9f6
    • Lei Wang's avatar
      Add libstdcxx-ng-12 to Dockerfiles for CUDA versions (#160) · 3486e27e
      Lei Wang authored
      Update Dockerfiles for CUDA 11.8, 12.0, 12.1, 12.3, 12.4, 12.5, and 12.6 to install libstdcxx-ng-12 from conda-forge, ensuring consistent standard library support across CUDA versions.
      3486e27e
    • Lei Wang's avatar
      [Release] Bump Version to v0.1.2 (#155) · 237dab0d
      Lei Wang authored
      * Remove Torch CPP backend and update execution backend options
      
      - Remove TorchCPPKernelAdapter and related code from JIT modules
      - Update execution backend options in jit/__init__.py, kernel.py, and adapter/__init__.py
      - Remove "torch_cpp" from supported execution backend literals
      - Simplify backend validation and remove unused torch_cpp-related code
      
      * lint fix
      
      * Add block sparse attention implementations for TileLang and Triton
      
      - Implement block sparse attention kernels for TileLang and Triton
      - Add example scripts for block sparse attention with top-k and threshold-based masking
      - Include utility functions for generating sparse attention masks (a top-k sketch follows this list)
      - Demonstrate causal attention with block-level sparsity
      - Add test cases to validate sparse attention implementations against PyTorch reference
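      A sketch of the top-k masking idea mentioned above (shapes and names are illustrative assumptions): keep the k highest-scoring KV blocks for every query block.

      ```python
      import torch

      def topk_block_mask(block_scores: torch.Tensor, k: int) -> torch.Tensor:
          # block_scores: [batch, heads, q_blocks, kv_blocks]
          idx = block_scores.topk(k, dim=-1).indices
          mask = torch.zeros_like(block_scores, dtype=torch.bool)
          return mask.scatter(-1, idx, torch.ones_like(idx, dtype=torch.bool))
      ```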
      
      * Bump version to 0.1.1
      
      * Bump version to 0.1.2
      237dab0d
    • Yu Cheng's avatar
      [Dev][Benchmark] Add MLA paged decoding example and benchmark script (#158) · be9abf18
      Yu Cheng authored
      * [Dev] Adjust computation logic to avoid precision loss when casting acc_s from float to float16
      
      - Remove redundant `acc_s_0` fragment in flash attention kernel
      - Simplify memory copy and reduction operations
      - Reorder memory copy and scaling steps for improved performance
      - Add Hopper-specific synchronization method in CUDA reduce template
      - Update reduce operation to use architecture-specific synchronization
      
      * [Dev] Add DeepSeek MLA Decoding (Paged+Varlen) kernel and Performance Benchmark Script
      
      - Implement comprehensive MLA (Multi-Head Latent Attention) decoding benchmark script
      - Add support for multiple implementations: Torch, TileLang, FlashMLA, FlashInfer, and Triton
      - Create flexible configuration for benchmarking different batch sizes, sequence lengths, and head configurations
      - Implement performance comparison and CSV output for detailed performance analysis
      - Add command-line argument support for targeted benchmarking and comparison
      
      * [Dev] Refactor MLA Paged Decoding Kernel with Improved Block Handling and Precision
      
      - Replace `d` parameter with `dv` to clarify value dimension in MLA decoding
      - Enhance block distribution logic for split KV processing (see the sketch below)
      - Improve handling of remaining blocks in split KV computation
      - Add initialization of `lse_max_local` to prevent potential precision issues
      - Optimize block start and range calculations for more accurate sequence processing
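      An illustrative split-KV block distribution (an assumption, not the kernel's exact logic): spread `num_blocks` KV blocks over `num_split` partial results, giving the first splits one extra block each so the remainder is handled evenly.

      ```python
      def split_ranges(num_blocks: int, num_split: int):
          base, rem = divmod(num_blocks, num_split)
          start = 0
          for s in range(num_split):
              count = base + (1 if s < rem else 0)
              yield start, start + count
              start += count

      # e.g. list(split_ranges(10, 4)) -> [(0, 3), (3, 6), (6, 8), (8, 10)]
      ```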
      
      * lint
      be9abf18
    • Lei Wang's avatar
      [Carver] Enhance Carver Adaptation for MatMul Benchmarking (#153) · 3c53297b
      Lei Wang authored
      * [Refactor] Consolidate GemmWarpPolicy Enum and Add Utility Method
      
      - Move GemmWarpPolicy from copy.py and gemm.py to primitives/gemm/base.py
      - Implement from_warp_partition class method to determine warp policy
      - Add docstring with examples for policy determination
      - Remove duplicate GemmWarpPolicy class definitions
      
      * [Enhancement] Add TensorCore Intrinsic Matrix Multiplication Benchmarks
      
      - Implement two new matrix multiplication benchmark scripts:
        1. `benchmark_matmul_intrinsic.py`: Uses TensorCore intrinsics with advanced configuration
        2. `benchmark_matmul.py`: Provides a more generic matrix multiplication benchmark
      
      - Add support for roller-based configuration generation in both benchmarks
      - Enhance MMA macro generator to handle 2D and 4D output buffer layouts
      - Implement flexible autotuning configurations with multiple parameters
      - Support different data types and accumulation modes
      - Add command-line arguments for matrix dimensions and roller configuration
      
      * lint fix
      
      * Fix roller hints generation in get_roller_hints_from_func
      
      - Simplify roller hints generation logic
      - Ensure policy-based configuration is always emitted when a policy is available
      - Remove redundant None check for roller hints
      
      * Add shared memory for matrix multiplication in benchmark and quickstart examples
      
      - Modify benchmark_matmul.py and quickstart.py to include C_shared allocation
      - Change accumulation dtype from float16 to float in benchmark_matmul.py
      - Update matrix multiplication kernels to use shared memory for result storage
      - Enable CUDA kernel source printing in quickstart example
      3c53297b
    • Lei Wang's avatar
      [Enhancement] Optimize TileLang Build Process with Dynamic CPU Core Allocation (#152) · e945dae2
      Lei Wang authored
      - Calculate 75% of available CPU cores for make jobs
      - Prevent system unresponsiveness during build
      - Dynamically adjust make job count based on system resources
      e945dae2
  7. 05 Mar, 2025 6 commits
    • Chaofan Lin's avatar
      8b9edc3e
    • Lei Wang's avatar
      [Refactor] Update BitBLAS Benchmark with TileLang Carver Imports and Roller Hints Generation (#148) · 0e2eae42
      Lei Wang authored
      - Replace BitBLAS imports with TileLang Carver imports in benchmark_matmul.py
      - Modify roller hints generation using new TileLang Carver template and utility functions
      - Update get_roller_hints_from_func to handle None cases and improve return logic
      - Adjust DefaultPolicy to handle different codegen dictionary formats
      0e2eae42
    • Lei Wang's avatar
      [Enhancement] Enable runtime tensor data type validation (#146) · d0434c3e
      Lei Wang authored
      * Fix debug print buffer template for unsigned char type
      
      - Update debug_print_buffer_value template specialization for unsigned char
      - Modify test_tilelang_debug_print.py to include additional dtype tests
      - Add test case for uint8 dtype in debug print buffer function
      
      * Refactor debug print buffer template formatting for unsigned char
      
      - Improve code formatting for debug_print_buffer_value template specialization
      - Adjust line breaks and indentation for better readability
      - Maintain consistent code style with other template specializations
      
      * Extract map_torch_type utility function to tilelang.utils.tensor
      
      - Move map_torch_type function from multiple test files to a centralized location
      - Import map_torch_type from tilelang.utils.tensor in kernel test files
      - Improve code reusability by creating a shared utility function for type mapping (a sketch follows this list)
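      A hedged sketch of a dtype-mapping helper like the `map_torch_type` utility mentioned above; the real function in tilelang.utils.tensor may cover more types (e.g. FP8 variants).

      ```python
      import torch

      def map_torch_type(intype: str) -> torch.dtype:
          typemap = {
              "float16": torch.float16,
              "bfloat16": torch.bfloat16,
              "float32": torch.float32,
              "int8": torch.int8,
              "uint8": torch.uint8,
              "int32": torch.int32,
          }
          assert intype in typemap, f"Unsupported dtype string: {intype}"
          return typemap[intype]
      ```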
      
      * Add buffer dtype mapping for Cython kernel adapter
      
      - Introduce buffer_dtype_map in CythonKernelAdapter to track buffer variable dtypes
      - Add _process_buffer_dtype method to extract dtype information from TIR function
      - Update CythonKernelWrapper to support setting and validating buffer dtypes
      - Enhance type checking during kernel execution with dtype verification (a sketch follows this list)
      - Improve logging message for Cython JIT adapter compilation
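      A hedged sketch of the dtype check performed before kernel launch; the layout of `buffer_dtype_map` used here (buffer name -> (argument index, expected dtype string)) is an assumption for illustration.

      ```python
      def check_buffer_dtypes(args, buffer_dtype_map):
          for name, (arg_index, expected) in buffer_dtype_map.items():
              actual = str(args[arg_index].dtype).replace("torch.", "")
              if actual != expected:
                  raise TypeError(f"Argument '{name}' expects dtype {expected}, got {actual}")
      ```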
      
      * Add static shape mapping for Cython kernel adapter
      
      - Introduce static_shape_map in CythonKernelAdapter to track buffer variable static shapes
      - Add _process_static_shape method to extract static shape information from TIR function
      - Update CythonKernelWrapper to support setting and validating static shapes
      - Enhance type checking during kernel execution with static shape verification
      
      * Add Multi-Head Attention (MHA) Backward Pass Test for TileLang Kernel
      
      - Implement comprehensive test for Multi-Head Attention backward pass
      - Support both causal and non-causal attention scenarios
      - Add reference implementation for comparing kernel outputs
      - Test different batch sizes, head counts, sequence lengths, and head dimensions
      - Verify forward and backward pass correctness using torch.testing.assert_close
      
      * Set random seed for MHA backward pass test
      
      - Add random seed initialization for consistent test reproducibility
      - Use tilelang.testing.set_random_seed(42) to ensure deterministic test results
      d0434c3e
    • Lei Wang's avatar
      [Enhancement] Support debug print for unsigned char datatype (#145) · bb60f6ce
      Lei Wang authored
      * Fix debug print buffer template for unsigned char type
      
      - Update debug_print_buffer_value template specialization for unsigned char
      - Modify test_tilelang_debug_print.py to include additional dtype tests
      - Add test case for uint8 dtype in debug print buffer function
      
      * Refactor debug print buffer template formatting for unsigned char
      
      - Improve code formatting for debug_print_buffer_value template specialization
      - Adjust line breaks and indentation for better readability
      - Maintain consistent code style with other template specializations
      bb60f6ce
    • Lei Wang's avatar
      [Refactor] Rename gemm fp8 example as we currently lack `T.gemm` support for fp8 (#144) · 37d44f24
      Lei Wang authored
      * Change default log level from WARNING to INFO in TileLang initialization
      
      * Refactor Flash Attention Variable-Length MHA Example with Cython Backend Support
      
      - Update `example_mha_fwd_varlen.py` to use Cython backend for kernel compilation
      - Remove unused imports and simplify function signature
      - Modify `flashattn` function to handle max sequence length as a separate argument
      - Update kernel call to include max sequence length parameter
      - Improve code readability and remove commented-out code
      - Add print statement to confirm successful assertion
      
      * Refactor code formatting in TileLang lowering and example files
      
      - Improve line breaks and code formatting in `lower.py`, `wrapper.py`, and `tensor.py`
      - Simplify line breaks and reduce unnecessary whitespace
      - Enhance code readability by adjusting indentation and line breaks
      - Update example MHA forward pass script with cleaner tensor initialization
      
      * Update TileLang kernel test with import path changes for MMA layout and macro generator
      
      - Modify import statements in test_tilelang_kernel_dequantize_gemm.py
      - Replace bitblas imports with tilelang.intrinsics imports for MMA-related utilities
      - Update main function to use tilelang.testing.main()
      
      * Add Block Sparse Attention Examples for TileLang and Triton
      
      - Implement block sparse attention kernels for both TileLang and Triton
      - Add utility functions for generating sparse attention masks using top-k and threshold methods
      - Support causal and variable-length attention scenarios
      - Include test cases for different sequence length configurations
      - Demonstrate block-level sparse attention with configurable parameters
      
      * Refactor Block Sparse Attention Examples with Code Style Improvements
      
      - Improve code formatting in block_sparse_attn_tilelang.py and block_sparse_attn_triton.py
      - Enhance readability by adjusting line breaks and indentation
      - Simplify kernel and function calls with better formatting
      - Add whitespace and line break improvements for better code clarity
      
      * Enhance Layout Plotting with Multi-Replication and Dynamic Visualization
      
      - Update plot_layout function to support multiple replications in thread and value mapping
      - Improve thread and value mapping to handle replicated layouts
      - Dynamically adjust figure size and legend positioning
      - Add print statements for saved plot file paths
      - Modify example fragment_mma_load_a.py to uncomment and enable warp and block layout plotting
      
      * Refactor AtomicAdd functions in CUDA common header
      
      - Implement a generic template for AtomicAdd function
      - Specialize templates for half_t, bfloat16_t, and pointer types
      - Reorganize and clean up existing AtomicAdd implementations
      - Improve type handling and conversion in atomic operations
      
      * Remove unused import in MHA backward test file
      
      - Remove unnecessary argparse import from test_tilelang_kenrel_mha_bwd.py
      - Add blank line for improved code formatting
      - Minor code cleanup in test file
      
      * Add FP8 GEMM Example with TensorCore Intrinsics
      
      - Implement a new example for FP8 matrix multiplication using TensorCore intrinsics
      - Support E4M3 and E5M2 floating-point 8-bit data types
      - Add README with notes on current FP8 implementation limitations
      - Include correctness test for FP8 GEMM with different configurations
      - Demonstrate swizzle layout and pipeline optimizations for FP8 computation
      37d44f24
    • Yu Cheng's avatar
      [Dev] Adjust computation logic to avoid precision loss when casting acc_s from float to float16 (#141) · e1d82bf3
      Yu Cheng authored
      
      - Remove redundant `acc_s_0` fragment in flash attention kernel
      - Simplify memory copy and reduction operations
      - Reorder memory copy and scaling steps for improved performance
      - Add Hopper-specific synchronization method in CUDA reduce template
      - Update reduce operation to use architecture-specific synchronization
      e1d82bf3
  8. 04 Mar, 2025 3 commits
    • Yu Cheng's avatar
      [Dev][Doc] Enhance Flash Attention Implementation in GQA Decoding Example and Fix Typo (#139) · 3d7b2dc5
      Yu Cheng authored
      - Add non-split flash attention macro for more flexible kernel generation
      - Implement `main_no_split` function to handle single-split scenarios
      - Modify kernel selection logic to dynamically choose between split and non-split implementations (see the sketch below)
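      An illustrative version of that selection logic (kernel names other than `main_no_split` are assumptions): fall back to the non-split kernel whenever only a single KV split is required.

      ```python
      def select_kernel(num_split: int, main_split, main_no_split):
          # Use the non-split flash attention kernel when one KV split suffices.
          return main_no_split if num_split == 1 else main_split
      ```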
      3d7b2dc5
    • Lei Wang's avatar
      [Bugfix] Add missing definition for AtomicAdd (#138) · 3960d3d0
      Lei Wang authored
      * Change default log level from WARNING to INFO in TileLang initialization
      
      * Refactor Flash Attention Variable-Length MHA Example with Cython Backend Support
      
      - Update `example_mha_fwd_varlen.py` to use Cython backend for kernel compilation
      - Remove unused imports and simplify function signature
      - Modify `flashattn` function to handle max sequence length as a separate argument
      - Update kernel call to include max sequence length parameter
      - Improve code readability and remove commented-out code
      - Add print statement to confirm successful assertion
      
      * Refactor code formatting in TileLang lowering and example files
      
      - Improve line breaks and code formatting in `lower.py`, `wrapper.py`, and `tensor.py`
      - Simplify line breaks and reduce unnecessary whitespace
      - Enhance code readability by adjusting indentation and line breaks
      - Update example MHA forward pass script with cleaner tensor initialization
      
      * Update TileLang kernel test with import path changes for MMA layout and macro generator
      
      - Modify import statements in test_tilelang_kernel_dequantize_gemm.py
      - Replace bitblas imports with tilelang.intrinsics imports for MMA-related utilities
      - Update main function to use tilelang.testing.main()
      
      * Add Block Sparse Attention Examples for TileLang and Triton
      
      - Implement block sparse attention kernels for both TileLang and Triton
      - Add utility functions for generating sparse attention masks using top-k and threshold methods
      - Support causal and variable-length attention scenarios
      - Include test cases for different sequence length configurations
      - Demonstrate block-level sparse attention with configurable parameters
      
      * Refactor Block Sparse Attention Examples with Code Style Improvements
      
      - Improve code formatting in block_sparse_attn_tilelang.py and block_sparse_attn_triton.py
      - Enhance readability by adjusting line breaks and indentation
      - Simplify kernel and function calls with better formatting
      - Add whitespace and line break improvements for better code clarity
      
      * Enhance Layout Plotting with Multi-Replication and Dynamic Visualization
      
      - Update plot_layout function to support multiple replications in thread and value mapping
      - Improve thread and value mapping to handle replicated layouts
      - Dynamically adjust figure size and legend positioning
      - Add print statements for saved plot file paths
      - Modify example fragment_mma_load_a.py to uncomment and enable warp and block layout plotting
      
      * Refactor AtomicAdd functions in CUDA common header
      
      - Implement a generic template for AtomicAdd function
      - Specialize templates for half_t, bfloat16_t, and pointer types
      - Reorganize and clean up existing AtomicAdd implementations
      - Improve type handling and conversion in atomic operations
      
      * Remove unused import in MHA backward test file
      
      - Remove unnecessary argparse import from test_tilelang_kenrel_mha_bwd.py
      - Add blank line for improved code formatting
      - Minor code cleanup in test file
      3960d3d0
    • Yu Cheng's avatar
      [Doc] Add MLA Decoding Performance Benchmarks and Documentation (#137) · e89e8b6c
      Yu Cheng authored
      - Update news and MLA performance benchmark in README.md
      - Move performance benchmark and layout images to a dedicated 'figures' directory
      - Improve code formatting and image references in documentation
      e89e8b6c
  9. 03 Mar, 2025 2 commits
    • Lei Wang's avatar
      [Debug] Improve Memory Layout Plot (#136) · e32311b2
      Lei Wang authored
      * Change default log level from WARNING to INFO in TileLang initialization
      
      * Refactor Flash Attention Variable-Length MHA Example with Cython Backend Support
      
      - Update `example_mha_fwd_varlen.py` to use Cython backend for kernel compilation
      - Remove unused imports and simplify function signature
      - Modify `flashattn` function to handle max sequence length as a separate argument
      - Update kernel call to include max sequence length parameter
      - Improve code readability and remove commented-out code
      - Add print statement to confirm successful assertion
      
      * Refactor code formatting in TileLang lowering and example files
      
      - Improve line breaks and code formatting in `lower.py`, `wrapper.py`, and `tensor.py`
      - Simplify line breaks and reduce unnecessary whitespace
      - Enhance code readability by adjusting indentation and line breaks
      - Update example MHA forward pass script with cleaner tensor initialization
      
      * Update TileLang kernel test with import path changes for MMA layout and macro generator
      
      - Modify import statements in test_tilelang_kernel_dequantize_gemm.py
      - Replace bitblas imports with tilelang.intrinsics imports for MMA-related utilities
      - Update main function to use tilelang.testing.main()
      
      * Add Block Sparse Attention Examples for TileLang and Triton
      
      - Implement block sparse attention kernels for both TileLang and Triton
      - Add utility functions for generating sparse attention masks using top-k and threshold methods
      - Support causal and variable-length attention scenarios
      - Include test cases for different sequence length configurations
      - Demonstrate block-level sparse attention with configurable parameters
      
      * Refactor Block Sparse Attention Examples with Code Style Improvements
      
      - Improve code formatting in block_sparse_attn_tilelang.py and block_sparse_attn_triton.py
      - Enhance readability by adjusting line breaks and indentation
      - Simplify kernel and function calls with better formatting
      - Add whitespace and line break improvements for better code clarity
      
      * Enhance Layout Plotting with Multi-Replication and Dynamic Visualization
      
      - Update plot_layout function to support multiple replications in thread and value mapping
      - Improve thread and value mapping to handle replicated layouts
      - Dynamically adjust figure size and legend positioning
      - Add print statements for saved plot file paths
      - Modify example fragment_mma_load_a.py to uncomment and enable warp and block layout plotting
      e32311b2
    • Yu Cheng's avatar
      [Doc] Update MLA Documentation (#135) · b70683b3
      Yu Cheng authored
      b70683b3