    [Refactor] Support auto index bitwidth casting (#517) · 6ad73f6f
    Lei Wang authored
    * [Refactor] Enhance GEMM Warp Partitioning Logic and Introduce Buffer Remapping (#516)
    
    * Improved the warp partitioning logic in `Gemm::ComputeWarpPartition` to better accommodate the FullRow, FullCol, and Square GEMM policies, choosing the warp split from the matrix dimensions to keep performance high (see the sketch after this list).
    * Introduced a new `RemapBufferRewriter` class to handle buffer reference updates and padding annotations during statement transformations, enhancing memory access safety and clarity.
    * Updated the `OptimizeForTarget` function to include a new step for configuring index bitwidth, improving the overall optimization process.
    * Refactored existing code to utilize constants for warp sizes, enhancing maintainability and readability.
    * Added checks to ensure correct warp allocation and padding map handling, improving robustness in memory management strategies.
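
    As a rough illustration of the policy-driven warp split described above, here is a minimal Python sketch; the real logic lives in C++ in `Gemm::ComputeWarpPartition`, and the Square heuristic below is an assumption for illustration, not the exact algorithm:

    ```python
    # Hypothetical sketch of policy-based warp partitioning; not the actual C++ code.
    WARP_SIZE = 32  # named constant instead of a magic number, as in this PR

    def compute_warp_partition(block_m, block_n, num_warps, policy):
        """Return (warps_along_m, warps_along_n) for one thread block."""
        if policy == "FullRow":
            # Stack all warps along M; each warp spans the full N extent.
            m_warps, n_warps = num_warps, 1
        elif policy == "FullCol":
            # Lay all warps out along N; each warp spans the full M extent.
            m_warps, n_warps = 1, num_warps
        elif policy == "Square":
            # Pick the divisor split whose per-warp tile is closest to square.
            best = (1, num_warps)
            for m in range(1, num_warps + 1):
                if num_warps % m:
                    continue
                n = num_warps // m
                if abs(block_m // m - block_n // n) < abs(block_m // best[0] - block_n // best[1]):
                    best = (m, n)
            m_warps, n_warps = best
        else:
            raise ValueError(f"unknown GEMM policy: {policy}")
        # Sanity check mirroring the warp-allocation checks mentioned above.
        assert m_warps * n_warps == num_warps
        return m_warps, n_warps
    ```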
    
    * [Refactor] Update ConfigIndexBitwidthRewriter to Support Auto-Check Feature
    
    * Modified the constructor of `ConfigIndexBitwidthRewriter` to include an `auto_check` parameter, allowing for dynamic bitwidth adjustments based on input conditions.
    * Enhanced the `VisitExpr_` methods to apply the new auto-check logic, upgrading integer types to 64 bits when necessary and to the specified index bitwidth otherwise (the decision rule is sketched after this list).
    * Updated the `ConfigIndexBitwidth` pass to determine the index bitwidth based on the presence of configuration, improving flexibility in handling different scenarios.
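
    A small Python sketch of the widening rule described above; the real rewriter is a C++ expression mutator, so the function name and threshold below are illustrative assumptions only:

    ```python
    # Illustrative sketch of the auto-check bitwidth decision; not the actual rewriter.
    INT32_MAX = 2**31 - 1

    def pick_index_bitwidth(index_upper_bound, configured_bitwidth=None, auto_check=True):
        """Choose the bitwidth that index expressions are cast to.

        index_upper_bound: known upper bound on the index value, or None if unknown.
        configured_bitwidth: explicit user setting, if one was provided.
        """
        if configured_bitwidth is not None:
            # An explicit configuration wins; the pass simply applies it.
            return configured_bitwidth
        if auto_check:
            # Auto mode: upgrade to 64 bits only when the index may overflow int32.
            if index_upper_bound is None or index_upper_bound > INT32_MAX:
                return 64
            return 32
        return 32  # conservative default when neither a config nor auto-check applies
    ```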
    
    * Add dynamic matrix multiplication example and corresponding test
    
    * Introduced `example_dynamic.py` to demonstrate dynamic matrix multiplication using TileLang and PyTorch, including a main function for execution and performance profiling.
    * Added `test_example_dynamic.py` to validate the functionality of the dynamic matrix multiplication example.
    * The example includes detailed parameter configurations and checks the result against PyTorch's implementation for correctness, as sketched below.
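
    A minimal sketch of the PyTorch correctness check; `dynamic_matmul_kernel` below is a stand-in for the kernel compiled by `example_dynamic.py`, and the shapes, dtypes, and tolerances are illustrative:

    ```python
    import torch

    def check_against_torch(dynamic_matmul_kernel, M=1024, N=1024, K=1024):
        """Compare a compiled TileLang matmul kernel against torch.matmul."""
        a = torch.randn(M, K, dtype=torch.float16, device="cuda")
        b = torch.randn(K, N, dtype=torch.float16, device="cuda")
        c = torch.empty(M, N, dtype=torch.float16, device="cuda")

        dynamic_matmul_kernel(a, b, c)  # stand-in for the compiled kernel

        # Reference accumulated in fp32 to match an fp16 GEMM with fp32 accumulation.
        ref = (a.float() @ b.float()).half()
        torch.testing.assert_close(c, ref, rtol=1e-2, atol=1e-2)
    ```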
    
    * lint fix
    
    * Add get_num_sms function to retrieve the number of streaming multiprocessors on the CUDA device
    
    * Implemented the `get_num_sms` function in `cuda_driver.py` to return the count of streaming multiprocessors for a specified CUDA device (a rough equivalent is sketched below).
    * Updated the `__init__.py` file to include the new function in the module exports.
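
    The sketch below is not the repository's implementation; it shows a rough equivalent that reads the same value through PyTorch's device properties:

    ```python
    import torch

    def get_num_sms(device_id: int = 0) -> int:
        """Number of streaming multiprocessors on a CUDA device.

        Equivalent in spirit to cuDeviceGetAttribute with
        CU_DEVICE_ATTRIBUTE_MULTIPROCESSOR_COUNT; queried via PyTorch here for brevity.
        """
        return torch.cuda.get_device_properties(device_id).multi_processor_count
    ```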
    
    * lint fix