"src/transform/vscode:/vscode.git/clone" did not exist on "68989d80858d8b034137330cf4fce1165a5db933"
Unverified Commit a7c9a8b9 authored by Siyuan Feng's avatar Siyuan Feng Committed by GitHub
Browse files

Refactor to support upstream tvm (#595)



**Summary of part of the rebase PR:**

1. **Support T.thread_return() → CUDA return syntax**  
   Added support for translating `T.thread_return()` to CUDA's native `return` statement.
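   A minimal sketch of the new intrinsic in use (the kernel below is illustrative, not taken from the PR):
   ```python
   @T.prim_func
   def main(A: T.Buffer((128,), "float32")):
       for bx in T.thread_binding(128, "blockIdx.x"):
           if A[bx] < 0:
               T.thread_return()  # emitted as a plain `return` in the generated CUDA kernel
           A[bx] = A[bx] + 1.0
   ```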

2. **Dynamic type support for function inputs**  
   Functions now accept dynamically typed parameters using `typing`:
   ```python
   # The annotation can be any scalar TIR type, e.g. T.int32 or T.float32
   dyn_type = T.int32

   @T.prim_func
   def main(
       a: dyn_type,
   ):
       ...
   ```

3. **Device Function Codegen**  
   Added support for generating `__device__` functions in CUDA:
   ```python
   @I.ir_module
   class Module:
       @T.prim_func(private=True)
       def add(a: T.int32, b: T.int32) -> T.int32:
           return a + b

       @T.prim_func
       def main(
           A: T.Buffer((128, 128), "int32"),
           B: T.Buffer((128, 128), "int32"),
           C: T.Buffer((128, 128), "int32"),
       ):
           T.func_attr({"global_symbol": "main"})
           length: T.int32 = Module.add(64, 64)  # Host call
           for bx in T.thread_binding(length, "blockIdx.x"):
               for tx in T.thread_binding(length, "threadIdx.x"):
                   C[bx, tx] = Module.add(A[bx, tx], B[bx, tx])  # Device call
   ```
   After compilation, `add` becomes a CUDA `__device__` function.

4. **Cython-based Python/C++ interop**  
   Replaced ctypes with Cython for all Python/C++ interactions:
   - Python → C++ calls
   - C++ → Cython calls  
   This improves interop performance by roughly 100x and reduces CPU overhead at both compile time and runtime.

5. **FP8 data type standardization**  
   Migrated `e5m2_float8` and similar types to the Torch-standardized variants such as `float8_e5m2`.
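   As a small illustration, the rename only changes the dtype strings used in kernel signatures (shapes here are illustrative):
   ```python
   # before: A: T.Buffer((128, 128), "e5m2_float8")
   @T.prim_func
   def main(A: T.Buffer((128, 128), "float8_e5m2")):
       ...
   ```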



* Refactor CMakeLists.txt to set default build type and manage dependencies for tvm_cython modules

* Update default value of `check_well_formed` parameter in `prim_func` to False for improved flexibility in TIR function parsing.

* Add StorageRewrite function to transform module

Introduced the StorageRewrite function in the tilelang.transform module, which returns a TVM transform pass. This addition enhances the functionality of the module by providing a new transformation option for users.
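
A hedged usage sketch (the pass-object call follows the standard TVM transform convention; the toy module below is illustrative, not from the PR):

```python
import tvm
import tilelang.language as T
import tilelang.transform

@T.prim_func
def main(A: T.Buffer((128,), "float32")):
    for i in range(128):
        A[i] = A[i] + 1.0

mod = tvm.IRModule({"main": main})
mod = tilelang.transform.StorageRewrite()(mod)  # apply the pass, get a transformed IRModule
```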

* Refactor null option handling in IR and layout inference

- Updated instances of `NullOpt` to `std::nullopt` in `ir.cc` and `parallel.cc` for consistency with modern C++ practices.
- Enhanced layout inference logic in `layout_inference.cc` to improve type safety by replacing `as<Fragment>().get()` with `as<FragmentNode>()`.
- Adjusted error handling in `multi_version_buffer_rewriter.cc` and `persist_threadblock.cc` to use more concise null checks.
- Cleaned up test files by commenting out `tilelang.testing.main()` and replacing it with specific test function calls for better clarity.
- Removed unused test file `test_tilelang_kernel_deepseek_nsa.py` to streamline the testing suite.

* Update TVM subproject and refactor cluster planning and tile operation handling

- Updated the TVM subproject to a dirty commit state.
- Refactored copyright headers in `cluster_planning.cc` to reflect the new licensing.
- Enhanced error handling in `lower_tile_op.cc` to check for missing padding map annotations.
- Modified test files to improve clarity and functionality, including adjustments to kernel compilation and test assertions.
- Updated various test cases to ensure proper handling of annotations and configurations in the TileLang testing framework.

* Update annotation type in warp specialized test for consistency

- Changed the annotation type in the `test_warp_specialized` function from a literal integer to `T.int32(3)` for improved type safety and consistency with the TileLang framework.

* Refactor test execution in warp specialized test

- Replaced the direct call to `test_warp_specialized()` with `tilelang.testing.main()` in the test file to standardize test execution and improve integration with the TileLang testing framework.

* refactor

* [Enhancement] Add strict layout map for improved buffer layout inference (#594)

- Introduced a `strict_layout_map` to enhance layout inference by ensuring that buffers with strict layout requirements are properly accounted for during the inference process.
- Updated the inference logic to check for the presence of buffers in the `strict_layout_map` before applying layout changes, improving the accuracy of layout assignments.
- Refactored the layout inference steps to include the copying of layouts into the new strict map, ensuring a clear separation of layout handling based on inference levels.

* [Example] Update examples to use @tilelang.jit (#597)

* [Example] Update kernel compilation in examples to use @tilelang.jit

- Refactored multiple examples to eliminate the use of `tilelang.compile` for kernel creation, directly invoking the functions instead.
- Added `@tilelang.jit` decorators with appropriate output indices to enhance performance and maintainability.
- Improved code clarity by simplifying the kernel invocation process across various examples, ensuring consistency in how kernels are defined and executed.
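
As a rough sketch of the new pattern (the kernel body and `out_idx` choice below are illustrative, not taken from the updated examples):

```python
import tilelang
import tilelang.language as T

@tilelang.jit(out_idx=[-1])  # treat the last buffer argument as the returned output
def add_one(N: int = 1024):
    @T.prim_func
    def main(A: T.Buffer((N,), "float32"), B: T.Buffer((N,), "float32")):
        for bx in T.thread_binding(N, "blockIdx.x"):
            B[bx] = A[bx] + 1.0
    return main

kernel = add_one()  # compiled through the decorator; no explicit tilelang.compile call
```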

* format

* Update example_tilelang_sparse_gqa_decode_varlen_indice.py

* Update example_dequant_gemm_fine_grained.py

* Update example_gemm_autotune.py

---------
Co-authored-by: Lei Wang <34334180+LeiWang1999@users.noreply.github.com>

* [Enhancement] Refine error messaging in LowerBulkCopy for global and shared range checks (#599)

* [Enhancement] Improve error messaging for global and shared range legality checks in LowerBulkCopy

- Updated error messages in the LowerBulkCopy function to provide clearer context when global and shared ranges are illegal.
- Enhanced the readability of the error output by including tensor names, improving debugging and validation processes during bulk copy operations.

* [Enhancement] Refine error messaging in LowerBulkCopy for global and shared range checks

- Improved the clarity of error messages in the LowerBulkCopy function by enhancing the output format.
- Included additional context in error messages to aid debugging when global and shared ranges are found to be illegal, ensuring better traceability during bulk copy operations.

* [Enhancement] Introduce PassConfig `TL_ENABLE_AGGRESSIVE_SHARED_MEMORY_MERGE` to enable aggressive shared memory reuse (#602)

* [Enhancement] Add aggressive shared memory merge option in memory allocation

- Introduced a new configuration option `tl.enable_aggressive_shared_memory_merge` to enable aggressive merging of shared memory allocations.
- Updated the `SharedMemLinearAccessPatternFinder` class to support an aggressive merge strategy, allowing for improved memory reuse.
- Modified the `MergeSharedMemoryAllocations` function to incorporate the new merging strategy based on the configuration.
- Enhanced the `PassConfigKey` enumeration to include the new aggressive merge option, ensuring it can be configured appropriately.
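
A hedged sketch of switching the option on (the config key string comes from the bullets above; wiring it through `pass_configs` at compile time is an assumption):

```python
import tilelang

# `func` is a T.prim_func defined elsewhere.
kernel = tilelang.compile(
    func,
    pass_configs={"tl.enable_aggressive_shared_memory_merge": True},
)
```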

* lint fix

* [Enhancement] Add aggressive shared memory merge configuration option

- Introduced a new configuration option `kEnableAggressiveSharedMemoryMerge` to enable aggressive merging of shared memory allocations, enhancing memory management capabilities.

* [Enhancement] Update MergeSharedMemoryAllocations to support aggressive merge option

- Modified the `MergeSharedMemoryAllocations` function to accept an `enable_aggressive_merge` parameter, allowing for more flexible memory management.
- Introduced a new helper function `should_enable_aggressive_merge` to determine the aggressive merge configuration based on the pass context and target.
- Updated the relevant calls in the `phase.py` and `__init__.py` files to utilize the new aggressive merge functionality, enhancing the overall memory allocation strategy.

* [Refactor] Update accumulation handling in gemm_sm90.h (#603)

- Replaced the use of `tiled_mma.accumulate_ = GMMA::ScaleOut::Zero` with a call to `clear(acc)` for better clarity and maintainability in the accumulation logic.
- This change enhances the readability of the code by standardizing the approach to clearing accumulation values across multiple sections of the file.

* [Enhancement] Add tma bulk copy. (#600)

* [Bugfix] Fixed mha_bwd shape inconsistency error (#604)

* lint fix

* Update requirements-lint.txt to maintain clang-format version consistency

* [Bugfix] Avoid duplicate data access when cross thread buffer meet replicate register (#606)

* [Enhancement] Improve debug output formatting in layout and fragment nodes

- Updated the `DebugOutput` methods in `LayoutNode` and `FragmentNode` to provide more structured and informative output, including transformation details and thread range information.
- Enhanced layout inference logic in `ParallelOp` to add predicates for cross-thread shared memory access, improving layout handling in parallel operations.
- Minor adjustment in `layout_inference.cc` to ensure clarity in parallel loop handling.

* lint fix

* [Enhancement] Support tf32 gemm_rs (#607)

- Added a line break in `quickstart.py` for better readability.
- Simplified the JIT kernel compilation in `quickstart.py` by removing the unused execution backend option.
- Modified `example_elementwise_add.py` to disable cache for `tilelang` and optimized the element-wise addition kernel by utilizing shared memory for input tensors, improving performance.
- Updated default values for matrix dimensions and block sizes in the argument parser to enhance usability.

* [Enhancement] Introduce option `TL_DISABLE_FAST_MATH` and `TL_ENABLE_PTXAS_VERBOSE_OUTPUT` (#609)

* [Enhancement] Introduce new PassConfig options for fast math and PTXAS verbosity

- Added `kDisableFastMath` and `kEnablePTXASVerboseOutput` configuration options to enhance control over compilation settings.
- Updated `LibraryGenerator` to utilize these new pass configurations, allowing for more flexible compilation behavior based on user preferences.
- Enhanced `PassConfigKey` enumeration to include the new options, ensuring they can be configured appropriately in the pass context.
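
A similar hedged sketch for the two new options (the lower-case key spellings below are assumptions inferred from the option names above):

```python
import tilelang

# `func` is a T.prim_func defined elsewhere.
kernel = tilelang.compile(
    func,
    pass_configs={
        "tl.disable_fast_math": True,            # keep IEEE-compliant math in the generated CUDA
        "tl.enable_ptxas_verbose_output": True,  # ask ptxas to report register/shared-memory usage
    },
)
```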

* [Refactor] Update PTXAS verbosity configuration key in LibraryGenerator

- Changed the configuration key for PTXAS verbosity from `TL_VERBOSE_PTXAS_OUTPUT` to `TL_ENABLE_PTXAS_VERBOSE_OUTPUT` to align with the new naming convention introduced in recent enhancements.
- This update ensures consistency in the configuration options used within the `LibraryGenerator` class, improving clarity and maintainability of the code.

* lint fix

* fix build

* [Experimental][Language] add `T.GEMM_SP` for sm90 sparse tensor core (#526)

* [experimental] add a draft gemm_sp

* [3rdparty] bump cutlass to v3.9.3

* [lint] run format.sh

* [chore] rebase

* [chore] use abs path

* [gemm_sp] add metadata layout

* [ci] add more example

* [lint] run format.sh

* [chore] polish

* [chore] move gemm_sp to experimental

* [chore] polish

* [lint] run format.sh

* [Enhancement] Improve bulk copy handling and update GEMM sparse tensor test

* Added a warning log for unsupported non-swizzled global layouts in the bulk copy operation, ensuring fallback to normal copy.
* Refactored the GEMM sparse tensor test by removing unnecessary imports and simplifying the kernel compilation process.
* Updated the test to directly call the `run_gemm_sp` function, enhancing clarity and functionality.

* Implement Test

* [Enhancement] Update GEMM SP and SM89 templates for improved functionality

* Refactored GEMM SP computation to enhance warp partitioning logic, ensuring compatibility with Hopper architecture.
* Updated layout inference to support new WGMMA conditions and improved error messaging for unsupported targets.
* Modified SM89 templates to utilize new MMA atom structures, enhancing performance and compatibility with fp8 types.
* Added conditional inclusion for GEMM SP header based on CUDA architecture version.

* lint fix

* [gemm_sp] support more layout and data types

* Enhancement: sync T.gemm_sp's layout inference with T.gemm

* Enhancement: support more block_k in compress util

* [Enhancement] enable block_k=64

* [Lint] run format.sh

* [Enhancement] compressor support more dtype

* Enhancement: enable block_K=32

* [Lint] format.sh

* [Fixbug] fix shape

* Refactor: sync gemm

* [Enhancement] enable transpose

* [Enhancement] enable fp8_e4m3

* [Enhancement] enable int8

* [Lint] run format.sh

* [Benchmark] add gemm_sp benchmark

* [Example] fix 256 threads hang

* [CI] fix ci

* [Chore] resolve gemini feedback

* [Benchmark] increase search space

* [Lint] format

* [CI] skip sparse tensor core related tests as only sm90 is supported

* [CI] pass local run

* Update gemm_sm89.h

* lint fix

* lint fix

* [Enhancement] Add support for sparse GEMM and initialize CUDA architecture flags

- Introduced a new boolean flag `enable_sparse_gemm_` to control the inclusion of sparse GEMM functionality in CUDA code generation.
- Updated the `Finish` method to conditionally include the sparse GEMM header based on the new flag.
- Implemented logic in `VisitStmt_` to enable sparse GEMM when the corresponding external call is detected.
- Added a function to initialize the `TORCH_CUDA_ARCH_LIST` environment variable based on the target compute version, enhancing compatibility with PyTorch.
- Refactored the initialization function into the appropriate module and ensured it is called in the sparse utilities module.

* Update test_compress_utils.py

---------
Co-authored-by: LeiWang1999 <leiwang1999@outlook.com>
Co-authored-by: Lei Wang <34334180+LeiWang1999@users.noreply.github.com>

* [Doc] Phaseout Legacy documentations (#610)

- Added a new entry in the README for the introduction of `T.gemm_sp` supporting 2:4 sparse tensor core.
- Removed several outdated documentation files related to convolution, flash attention, and other tutorials to streamline the documentation structure.

* [Refactor] Phaseout Pass ParallelLoopTransformer (#611)

* Refactor layout inference by removing the ParallelLoopTransformer class. Updated layout inference logic to streamline buffer access collection and condition handling in parallel loops. This change simplifies the code structure and enhances maintainability.

* Update MHA backward test cases to use reduced dimensions for batch size and context length

* fix build

* [Enhancement] Update ReduceOp initialization values for integer types (#614)

* [Enhancement] Update ReduceOp initialization values for integer types

- Modified the `MakeInitValue` method in `ReduceOp` to handle integer data types correctly by returning appropriate minimum and maximum values based on the bit width.
- Added checks for integer types to ensure correct initialization for `kMax` and `kMin` reduction types, enhancing the robustness of the reduction operations.

* [Enhancement] Update ReduceOp to handle unsigned integer initialization values

- Enhanced the `MakeInitValue` method in `ReduceOp` to include support for unsigned integer data types.
- Added conditions to return appropriate initialization values for `kMax` and `kMin` reduction types based on the data type, improving the robustness of reduction operations.
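
In plain Python, the initialization rule described above amounts to the following (an illustrative sketch, not the actual C++ implementation):

```python
def reduce_init_value(bits: int, unsigned: bool, op: str) -> int:
    """Identity element for an integer max/min reduction of the given bit width."""
    if op == "max":
        # start from the smallest representable value
        return 0 if unsigned else -(1 << (bits - 1))
    if op == "min":
        # start from the largest representable value
        return (1 << bits) - 1 if unsigned else (1 << (bits - 1)) - 1
    raise ValueError(f"unsupported reduce type: {op}")
```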

* Bump transformers from 4.50.0 to 4.51.0 in /examples/bitnet-1.58b (#615)

Bumps [transformers](https://github.com/huggingface/transformers) from 4.50.0 to 4.51.0.
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](https://github.com/huggingface/transformers/compare/v4.50.0...v4.51.0)

---
updated-dependencies:
- dependency-name: transformers
  dependency-version: 4.51.0
  dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* [Refactor] refactor autotune examples (#617)

* [Refactor] Update tilelang kernel functions and remove unused imports

- Refactored the `flashattn_fwd`, `flashattn_bwd_preprocess`, and `flashattn_bwd_postprocess` functions to utilize direct kernel calls instead of cached versions, improving clarity and performance.
- Added `@tilelang.jit` decorators with specified output indices to enhance kernel compilation.
- Removed unused import of `cached` from `tilelang`, streamlining the code.
- Commented out the main testing function call in `test_tilelang_kernel_mha_bwd.py` for potential future use.

* [Refactor] Simplify configuration generation in benchmark and example scripts

- Refactored the `get_configs` functions in multiple benchmark and example scripts to utilize a dictionary-based approach for parameter configuration, improving readability and maintainability.
- Updated the `flashattn` and `chunk_scan_fwd` functions to directly accept configuration parameters, enhancing flexibility in kernel tuning.
- Removed redundant code and streamlined the configuration generation process across various files, ensuring consistency in how configurations are defined and utilized.
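
A generic sketch of the dictionary-based configuration generation described above (parameter names and values are illustrative):

```python
import itertools

def get_configs(**params):
    # Expand {"block_M": [64, 128], ...} into one config dict per point
    # in the Cartesian product of the value lists.
    keys, values = zip(*params.items())
    return [dict(zip(keys, combo)) for combo in itertools.product(*values)]

configs = get_configs(block_M=[64, 128], block_N=[64, 128], num_stages=[2, 3])
```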

* [Refactor] Update configuration handling in benchmark scripts

- Refactored the `get_configs` functions in benchmark scripts to accept a variable argument list, improving flexibility in configuration management.
- Enhanced the `matmul` and `flashattn` functions to utilize the updated configuration approach, streamlining parameter handling for kernel tuning.
- Added `@autotune` decorators to relevant functions, ensuring consistent autotuning behavior across benchmarks.
- Cleaned up redundant code and improved overall readability in the affected files.

* [Refactor] Clean up formatting and update subproject commit

- Updated the subproject commit reference in the TVM directory to indicate a dirty state.
- Removed unnecessary blank lines and improved formatting in the `benchmark_matmul` and `benchmark_matmul_fp8` scripts for better readability.
- Streamlined the function definitions in the `flashattn` example script to enhance clarity and maintainability.

* [Refactor] Update AutoTuner configuration handling

- Modified the AutoTuner class to check if kernel parameters are set before processing tunable arguments, improving robustness in configuration handling.
- Enhanced the logic for skipping compilation when tunable parameters are already provided, ensuring efficient use of resources.
- Updated comments for clarity and maintainability.

* lint fix

* Update TVM subproject commit to indicate dirty state and modify MHA backward test cases

- Updated the subproject commit reference in the TVM directory to reflect a dirty state.
- Adjusted the `test_mha_bwd` function to use a new configuration for the MHA backward tests, changing the context size from 128 to 256.
- Uncommented the main testing function call for potential execution.

* lint fix

* Bump transformers from 4.51.0 to 4.52.1 in /examples/bitnet-1.58b (#619)

Bumps [transformers](https://github.com/huggingface/transformers) from 4.51.0 to 4.52.1.
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](https://github.com/huggingface/transformers/compare/v4.51.0...v4.52.1)

---
updated-dependencies:
- dependency-name: transformers
  dependency-version: 4.52.1
  dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Fix PTXAS options flag in LibraryGenerator for consistency (#620)

* Refactor FP8 type handling across multiple files to standardize usage of "float8_e4m3" and "float8_e5m2" instead of "e4m3_float8" and "e5m2_float8". This includes updates in benchmarks, examples, tests, and internal utilities.

* [Refactor] Add parallel loop transform pass for condition extraction (#618)

* [Refactor] Add parallel loop transform

* done format check

* pull 3rdparty repo

* Refactor loop variable handling in transformation utilities

- Updated the logic in `loop_parallel_transform_utils.h` to simplify the handling of related loop variables.
- Removed the check that enforced a single related loop variable, replacing it with a return statement when multiple variables are detected, enhancing clarity and maintainability of the transformation process.

* Update loop_parallel_transform_utils.h

* Refactor loop variable handling in transformation utilities

- Enhanced the logic in `loop_parallel_transform_utils.h` to improve clarity and maintainability by simplifying the handling of related loop variables.
- Replaced the previous enforcement of a single related loop variable with a return statement for multiple variables detected.

* remove disable cache flag as commit id has been key component

* lint fix

---------
Co-authored-by: LeiWang1999 <leiwang1999@outlook.com>
Co-authored-by: Lei Wang <34334180+LeiWang1999@users.noreply.github.com>

* [Dev] Update linear attention examples to enhance performance on Hopper GPUs (#621)

* Tune linear attention examples on H100

* Add retnet fwd kernel

* fix lint

* [Enhancement] Add ahead of time cython compilation in setup.py (#622)

* [Enhancement] Add Cython support and compiler detection in setup.py

- Introduced a new `CythonExtension` class for building Cython-based extensions, enhancing the build process for Cython projects.
- Implemented functions to detect the Cython compiler and C++ compiler, improving compatibility and user experience.
- Updated the build process to handle Cython extensions alongside CMake extensions, ensuring a seamless integration for users.
- Added caching mechanisms for Cython compilation to optimize build times and reduce unnecessary recompilation.

* [Enhancement] Add Cython dependency and enable CMake extension building

- Added Cython as a required dependency in `pyproject.toml` to support Cython-based extensions.
- Updated `setup.py` to enable building CMake extensions, improving the build process for projects utilizing both Cython and CMake.
- Modified the Cython compiler detection logic to streamline installation instructions for users.

* [Enhancement] Support more flexible layout host pythonic expr (#623)

* [Refactor] Enhance expression handling in utils.py and update wrapper to use pythonic_expr

- Added support for additional TIR expressions (FloorDiv, Min, Max, Add, Sub, FloorMod) in the pythonic_expr function to improve string representation.
- Replaced the deprecated legalize_c function calls in TLCUDASourceWrapper and TLCPUSourceWrapper with pythonic_expr for better expression handling in kernel launch code.

* [Refactor] Simplify expression handling in pythonic_expr function

- Consolidated binary and min/max operation handling in the pythonic_expr function to improve readability and maintainability.
- Replaced individual checks for binary operations with a mapping approach, streamlining the code and enhancing performance in expression representation.

* [Enhancement] Improve expression representation in pythonic_expr function

- Added operator precedence handling to the pythonic_expr function, enhancing the conversion of TVM PrimExpr to Python-style strings.
- Updated the visitor logic to intelligently add parentheses based on operator precedence, improving the accuracy of expression representation.
- Included a docstring for better clarity on the function's purpose and usage.
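
To make the behavior concrete, a hedged sketch of what the precedence-aware printer is expected to produce (the snippet assumes `pythonic_expr` is importable from the wrapper utilities):

```python
from tvm import tir

n = tir.Var("n", "int32")
expr = tir.FloorDiv(n + 127, 128) * 128
# pythonic_expr(expr) should render a Python-style string such as
# "(n + 127) // 128 * 128", adding parentheses only where precedence requires them.
```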

* test fix

* [Enhancement] support composable expression for shape with symbolic vars (#624)

* [Refactor] Enhance expression handling in utils.py and update wrapper to use pythonic_expr

- Added support for additional TIR expressions (FloorDiv, Min, Max, Add, Sub, FloorMod) in the pythonic_expr function to improve string representation.
- Replaced the deprecated legalize_c function calls in TLCUDASourceWrapper and TLCPUSourceWrapper with pythonic_expr for better expression handling in kernel launch code.

* [Refactor] Simplify expression handling in pythonic_expr function

- Consolidated binary and min/max operation handling in the pythonic_expr function to improve readability and maintainability.
- Replaced individual checks for binary operations with a mapping approach, streamlining the code and enhancing performance in expression representation.

* [Enhancement] Improve expression representation in pythonic_expr function

- Added operator precedence handling to the pythonic_expr function, enhancing the conversion of TVM PrimExpr to Python-style strings.
- Updated the visitor logic to intelligently add parentheses based on operator precedence, improving the accuracy of expression representation.
- Included a docstring for better clarity on the function's purpose and usage.

* test fix

* minor update

* 🐍

Fix the file name "test_exmaple_tilelang_nsa" (#629)

* [Enhancement] Add CPU utilization and count settings for Auto-Tuning (#630)

* [Enhancement] Add CPU utilization and count settings for Auto-Tuning

- Introduced environment variables for CPU utilization, counts, and maximum CPU count for auto-tuning.
- Updated the AutoTuner class to utilize these new settings, improving flexibility and performance in multi-threaded environments.
- Enhanced logging to provide better insights into the auto-tuning process based on the configured CPU settings.

* typo fix

* [AutoTune] Support `with set_autotune_inputs` to set auto tuning input tensors (#632)

* [Refactor] Simplify and modularize autotuner implementation

- Removed unused imports and extensive code sections from the autotuner module to enhance readability and maintainability.
- Modularized the code by introducing new imports for autotuning and capturing functionalities, streamlining the overall structure.
- Improved logging setup and removed redundant timeout handling functions, focusing on core autotuning logic.
- Updated the AutoTuner class to better utilize the new modular structure, ensuring efficient performance during auto-tuning processes.
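
A hedged sketch of the new context manager named in the commit title (the import path, argument form, and kernel call are assumptions):

```python
import torch
from tilelang.autotuner import set_autotune_inputs  # import path assumed

a = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
b = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)

# Tensors supplied here are used as benchmark inputs while candidate configs are tuned.
with set_autotune_inputs([a, b]):
    kernel = matmul(1024, 1024, 1024)  # an @autotune-decorated kernel defined elsewhere
```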

* [Refactor] Clean up and enhance capture and tuner modules

- Improved code readability by removing unnecessary blank lines and organizing imports in `capture.py` and `tuner.py`.
- Enhanced logging in the `AutoTuner` class to provide clearer warnings regarding the usage of `supply_prog` in the context of auto-tuning.
- Streamlined the `CaptureStack` class for better thread-local context management.

* lint fix

* [Refactor] Simplify configuration and autotuning logic in blocksparse GEMM example

- Updated `get_configs` function to reduce the number of configurations, enhancing performance and clarity.
- Removed the `get_best_config` function, integrating its logic directly into the `blocksparse_matmul` function with the `@autotune` decorator for streamlined autotuning.
- Adjusted the main function to directly utilize the autotuned kernel, simplifying the overall structure and improving readability.
- Deleted obsolete test file for autotuning decorator, cleaning up the codebase.

* [Refactor] Improve code formatting and readability in autotune test file

- Reformatted the `matmul` function and `get_configs` function for better readability by adjusting line breaks and indentation.
- Fixed a typo in the `enable_rasteration` parameter name to ensure consistency.
- Cleaned up unnecessary blank lines to enhance overall code clarity.

* Update example_blocksparse_gemm.py

* Update capture.py

* [Pass] Introduce flag to disable cp async lowering (#633)

* [Enhancement] Update PipelinePlanner to support async copy configuration

- Modified the `Substitute` method in `PipelinePlanner` to accept a `use_async_copy` parameter, allowing for more flexible pipeline planning based on async copy requirements.
- Updated the constructor of `PipelinePlanner` to initialize the `use_async_copy_` member variable.
- Adjusted the logic in the pipeline planning process to conditionally apply async copy annotations based on the new parameter.
- Commented out the `LoopVectorizeDynamic` call in `LowerAndLegalize` to prevent unintended modifications during the legalizing phase.

* Refactor PipelinePlanning function for improved readability

- Adjusted the formatting of the `use_async_copy` variable assignment in the `PipelinePlanning` function to enhance code clarity and maintainability.

* fix typo (#635)

* [Pass][Simplify] Introduce symbolic level simplify for condition expression (#634)

* [Enhancement] Add argument simplification option to StmtSimplifier

- Introduced a new `simplify_arguments` flag in the `StmtSimplifier::Apply` method to control argument simplification behavior.
- Updated the `Simplify` function to accept the new flag, allowing for enhanced flexibility in the simplification process.
- Adjusted the `LowerAndLegalize` and `_Simplify` functions to utilize the new argument, ensuring consistent behavior across the codebase.
- Added comments to clarify the purpose of the new flag and its impact on simplification logic.

* lint fix

* [Enhancement] Improve layout inference and reduce operation handling

- Updated `ParallelOp::InferLayout` to check for pure buffer stores, enhancing layout inference logic.
- Modified `ReduceOp::Lower` to include all threads in the AllReduce operation, improving performance on specific architectures.
- Added a TODO comment in `AllReduce` to consider merging synchronization barriers for optimization.

* lint fix

* [Enhancement] Add input validation for GEMM parameters

- Introduced checks to ensure that the dimensions M and N are divisible by their respective warp sizes (kMPerWarp and kNPerWarp) in the Gemm::ComputeWarpPartition method.
- Added informative error messages to assist in debugging when the input parameters do not meet the required conditions.

* bug fix

* Enhance test coverage by adding LLVM requirement decorator to multiple function call tests. This ensures that tests for argument count, type code, null data pointer, and dimensionality checks are only executed when LLVM is available, improving test reliability and clarity.

* lint fix

* Fix software pipeline stage annotation and update optional config handling in StmtSimplifier

* Add Python executable detection in CMake configuration and update TVM submodule reference. Remove unused vectorization tests for improved clarity.

* Update TVM submodule reference and refactor FFI registration to use static initialization blocks for improved organization and clarity.

* Refactor attribute handling in layout and IR nodes to use reflection registration. This change replaces the VisitAttrs method with a RegisterReflection method for improved clarity and organization across multiple classes, including KernelLaunchFrameNode, WarpSpecializeFrameNode, LayoutNode, FragmentNode, and SwizzledLayoutNode.

* finish rebase

* tvm update

* Refactor FFI registration across tilelang modules to use the updated `tvm.ffi` namespace. This includes changes in various files to replace `tvm._ffi` with `tvm.ffi`, enhancing consistency and clarity in the codebase.

* lint fix

* Update TVM submodule reference and modify CUDA runtime argument handling to use the new runtime constants for improved clarity and consistency.

* lint fix

* Refactor tensor data type references from "e4m3_float8" and "e5m2_float8" to "float8_e4m3" and "float8_e5m2" across multiple files for consistency and clarity.

* lint fix

* Refactor forward_index initialization in Fragment class to default to an empty array instead of None, ensuring consistent handling of optional outputs.

* test fix

* lint fix

* bugfix

* lint fix

* reduce fix

* lint fix

* carver fix

* cast fix

* Update submodule and enhance kernel launch functionality with optional block size parameter; add device kernel launch transformation.

* lint fix

* bugfix

* Refactor test execution in test_tilelang_cpu_gemm.py and enhance device call checks in lower.py to exclude C packed functions from kernel launch conditions.

* lint fix

* Update runtime.cc

* phase out license

* Update subproject commit for TVM to 555cc71

* Update subproject commit for TVM to d39953fa

* Update subproject commit for TVM to 9574805f

* Update subproject commit for TVM to a08b7c3

* fix ci

* ci fix

---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: LeiWang1999 <leiwang1999@outlook.com>
Co-authored-by: Lei Wang <34334180+LeiWang1999@users.noreply.github.com>
Co-authored-by: Cunxiao Ni <85601223+Cunxiao2002@users.noreply.github.com>
Co-authored-by: Yuxi Chi <cherichy@outlook.com>
Co-authored-by: Nathan Chen <120630832+Nathancgy@users.noreply.github.com>
Co-authored-by: botbw <wang1570@e.ntu.edu.sg>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: xs-keju <93414213+xs-keju@users.noreply.github.com>
Co-authored-by: Tong WU <109033598+Rachmanino@users.noreply.github.com>
Co-authored-by: Kadir Nar <kadir.nar@hotmail.com>
Co-authored-by: Yuqing Xia <35415939+xiayuqing0622@users.noreply.github.com>
Co-authored-by: xwhzz <wh.xie@outlook.com>

parent 8edd6941
@@ -6,7 +6,7 @@
 #include "tvm/tir/expr.h"
 #include "tvm/tir/stmt.h"
 #include <tvm/arith/analyzer.h>
-#include <tvm/runtime/registry.h>
+#include <tvm/ffi/reflection/registry.h>
 #include <tvm/tir/analysis.h>
 #include <tvm/tir/op.h>
 #include <tvm/tir/stmt_functor.h>
@@ -209,8 +209,10 @@ tvm::transform::Pass LowerSharedBarrier() {
   return CreatePrimFuncPass(pass_func, 0, "tl.LowerSharedBarrier", {});
 }
-TVM_REGISTER_GLOBAL("tl.transform.LowerSharedBarrier")
-    .set_body_typed(LowerSharedBarrier);
+TVM_FFI_STATIC_INIT_BLOCK({
+  namespace refl = tvm::ffi::reflection;
+  refl::GlobalDef().def("tl.transform.LowerSharedBarrier", LowerSharedBarrier);
+});
 } // namespace transform
 } // namespace tl
...
(diff of one file collapsed)
@@ -3,6 +3,7 @@
  * \brief Lower the tile op for further codegen.
  */
+#include <tvm/ffi/reflection/registry.h>
 #include <tvm/tir/builtin.h>
 #include <tvm/tir/stmt_functor.h>
 #include <tvm/tir/transform.h>
@@ -108,12 +109,14 @@ private:
    * \return The rewritten block.
    */
   Stmt RewritePaddingMap(const BlockNode *op) {
-    auto padding_map =
-        op->annotations.Get(attr::kPaddingMap).as<Map<Var, PrimExpr>>().value();
+    auto padding_map = op->annotations.Get(attr::kPaddingMap);
+    if (!padding_map) {
+      LOG(FATAL) << "Padding map annotation is missing";
+    }
     Map<Var, Var> var_remap = CreateVarRemap();
-    Map<Var, PrimExpr> new_padding_map =
-        RemapPaddingMap(padding_map, var_remap);
+    Map<Var, PrimExpr> new_padding_map = RemapPaddingMap(
+        Downcast<Map<Var, PrimExpr>>(padding_map.value()), var_remap);
     auto block = Downcast<Block>(IRMutatorWithAnalyzer::VisitStmt_(op));
     auto block_ptr = block.CopyOnWrite();
@@ -235,7 +238,7 @@ private:
   }
   PrimExpr HandleAccessPtrAndOffset(PrimExpr access_ptr,
-                                    Optional<PrimExpr> offset = NullOpt,
+                                    Optional<PrimExpr> offset = std::nullopt,
                                     DataType dtype = DataType::Int(32)) {
     // The 2th arg of T.tvm_access_ptr call is offset, we set it to 0 and
     // accumulate it to smem_offset
@@ -318,7 +321,7 @@ private:
         op->op.same_as(tl::tma_store()))) {
       has_tma_ = true;
     }
-    Array<RelayExpr> ptx_instructions = {builtin::ptx_ldmatrix(),
+    Array<RelaxExpr> ptx_instructions = {builtin::ptx_ldmatrix(),
                                          builtin::mma_store()};
     if (std::find(ptx_instructions.begin(), ptx_instructions.end(), op->op) ==
@@ -354,7 +357,7 @@ private:
       // mma_store now
       auto access_ptr = call->args[2];
       auto new_access_ptr =
-          HandleAccessPtrAndOffset(access_ptr, NullOpt, call->dtype);
+          HandleAccessPtrAndOffset(access_ptr, std::nullopt, call->dtype);
       auto new_call = call.CopyOnWrite();
       new_call->args.Set(2, new_access_ptr);
     } else {
@@ -496,7 +499,10 @@ tvm::transform::Pass LowerTileOp() {
   return CreatePrimFuncPass(pass_func, 0, "tl.LowerTileOp", {});
 }
-TVM_REGISTER_GLOBAL("tl.transform.LowerTileOp").set_body_typed(LowerTileOp);
+TVM_FFI_STATIC_INIT_BLOCK({
+  namespace refl = tvm::ffi::reflection;
+  refl::GlobalDef().def("tl.transform.LowerTileOp", LowerTileOp);
+});
 } // namespace transform
 } // namespace tl
...
@@ -20,8 +20,10 @@
 /*!
  * \file make_packed_api.cc Lower PrimFunc to use the packed function API.
  */
+#include <tvm/ffi/function.h>
+#include <tvm/ffi/reflection/registry.h>
 #include <tvm/runtime/device_api.h>
-#include <tvm/runtime/registry.h>
+#include <tvm/runtime/module.h>
 #include <tvm/target/target.h>
 #include <tvm/tir/analysis.h>
 #include <tvm/tir/buffer.h>
@@ -30,7 +32,6 @@
 #include <tvm/tir/stmt_functor.h>
 #include <tvm/tir/transform.h>
-#include <unordered_set>
 #include <utility>
 #include <vector>
@@ -75,7 +76,7 @@ public:
 private:
   struct ConvertedInfo {
-    int tcode{-1};
+    int type_index{-1};
     PrimExpr expr;
     Buffer dummy_val_buffer;
     Buffer dummy_tcode_buffer;
@@ -87,13 +88,13 @@ private:
     // convert val's data type to FFI data type, return type code
     DataType dtype = val.dtype();
     if (dtype.is_int() || dtype.is_uint()) {
-      info.tcode = kTVMArgInt;
+      info.type_index = ffi::TypeIndex::kTVMFFIInt;
      info.expr = Cast(DataType::Int(64), val);
     } else if (dtype.is_float()) {
-      info.tcode = kTVMArgFloat;
+      info.type_index = ffi::TypeIndex::kTVMFFIFloat;
      info.expr = Cast(DataType::Float(64), val);
     } else if (dtype.is_void()) {
-      info.tcode = kTVMNullptr;
+      info.type_index = ffi::TypeIndex::kTVMFFINone;
      info.expr = val;
     } else {
       LOG(FATAL) << "data type " << dtype << " not supported yet";
@@ -101,18 +102,18 @@ private:
     // If multiple return locations have the same data type, use the
     // same dummy buffer declaration.
-    auto it = dummy_val_buffer_map_.find(info.tcode);
+    auto it = dummy_val_buffer_map_.find(info.type_index);
     if (it != dummy_val_buffer_map_.end()) {
       info.dummy_val_buffer = it->second;
     } else {
       info.dummy_val_buffer =
           Buffer(ret_var_, info.expr.dtype(), {1}, {1}, ConstInt32(0),
                  ret_var_->name_hint, 0, 0, kDefault);
-      dummy_val_buffer_map_[info.tcode] = info.dummy_val_buffer;
+      dummy_val_buffer_map_[info.type_index] = info.dummy_val_buffer;
     }
-    // The tcode is always a 32-bit int, so we don't need to have a separate
-    // map.
+    // The type_index is always a 32-bit int, so we don't need to have a
+    // separate map.
     if (!dummy_tcode_buffer_.defined()) {
       dummy_tcode_buffer_ =
           Buffer(ret_tcode_, DataType::Int(32), {1}, {1}, ConstInt32(0),
@@ -126,7 +127,8 @@ private:
   Stmt WriteToOut(PrimExpr val) {
     auto info = ConvertForFFI(val);
     Stmt store_val = BufferStore(info.dummy_val_buffer, info.expr, {0});
-    Stmt store_tcode = BufferStore(info.dummy_tcode_buffer, info.tcode, {0});
+    Stmt store_tcode =
+        BufferStore(info.dummy_tcode_buffer, info.type_index, {0});
     Stmt ret_zero = Evaluate(tvm::ret(0));
     return SeqStmt({store_val, store_tcode, ret_zero});
   }
@@ -153,7 +155,7 @@ public:
     if (rewriter.made_change_) {
       return stmt;
     } else {
-      return NullOpt;
+      return std::nullopt;
     }
   }
@@ -204,21 +206,21 @@ inline Stmt MakeAssertNotNull(PrimExpr ptr, std::string msg) {
  * \param func The function to be inspected
  *
  * \returns The global_symbol to be used for the function at call
- * sites, or NullOpt if the function is to remain unchanged.
+ * sites, or std::nullopt if the function is to remain unchanged.
  */
 Optional<String> RequiresPackedAPI(const PrimFunc &func) {
   // A function with an explicit calling convention has already been
   // lowered, and should not be modified.
   if (auto opt = func->GetAttr<Integer>(tvm::attr::kCallingConv)) {
     if (CallingConv(opt.value()->value) != CallingConv::kDefault) {
-      return NullOpt;
+      return std::nullopt;
     }
   }
   // Internal function calls do not need the PackedFunc API
   auto global_symbol = func->GetAttr<String>(tvm::attr::kGlobalSymbol);
   if (!global_symbol.defined()) {
-    return NullOpt;
+    return std::nullopt;
   }
   return global_symbol;
@@ -344,9 +346,9 @@ PrimFunc MakePackedAPI(PrimFunc func) {
   }
   // type code checks
-  Var tcode(param->name_hint + ".code", DataType::Int(32));
+  Var type_index(param->name_hint + ".code", DataType::Int(32));
   seq_init.emplace_back(LetStmt(
-      tcode,
+      type_index,
       BufferLoad(buf_packed_arg_type_ids, {IntImm(DataType::Int(32), i)}),
       nop));
   DataType t = param.dtype();
@@ -354,20 +356,22 @@ PrimFunc MakePackedAPI(PrimFunc func) {
     std::ostringstream msg;
     msg << name_hint << ": Expect arg[" << i << "] to be pointer";
     seq_init.emplace_back(
-        AssertStmt(tcode == kTVMOpaqueHandle || tcode == kTVMNDArrayHandle ||
-                       tcode == kTVMDLTensorHandle || tcode == kTVMNullptr,
+        AssertStmt(type_index == ffi::TypeIndex::kTVMFFINone ||
+                       type_index == ffi::TypeIndex::kTVMFFIOpaquePtr ||
+                       type_index == ffi::TypeIndex::kTVMFFIDLTensorPtr ||
+                       type_index >= ffi::TypeIndex::kTVMFFIStaticObjectBegin,
                   tvm::tir::StringImm(msg.str()), nop));
   } else if (t.is_int() || t.is_uint()) {
     std::ostringstream msg;
     msg << name_hint << ": Expect arg[" << i << "] to be int";
-    seq_init.emplace_back(
-        AssertStmt(tcode == kDLInt, tvm::tir::StringImm(msg.str()), nop));
+    seq_init.emplace_back(AssertStmt(type_index == kDLInt,
+                                     tvm::tir::StringImm(msg.str()), nop));
   } else {
     ICHECK(t.is_float());
     std::ostringstream msg;
     msg << name_hint << ": Expect arg[" << i << "] to be float";
-    seq_init.emplace_back(
-        AssertStmt(tcode == kDLFloat, tvm::tir::StringImm(msg.str()), nop));
+    seq_init.emplace_back(AssertStmt(type_index == kDLFloat,
+                                     tvm::tir::StringImm(msg.str()), nop));
   }
 }
@@ -406,13 +410,7 @@ PrimFunc MakePackedAPI(PrimFunc func) {
   seq_check.push_back(
       AttrStmt(node, tir::attr::device_type, device_type, nop));
-  bool need_set_device =
-      (target_device_type != kDLMicroDev &&
-       (
-           // or is c source target
-           target_device_type != kDLCPU || target->kind->name != "llvm"));
-  if (need_set_device) {
+  if (runtime::DeviceAPI::NeedSetDevice(target_device_type)) {
     Stmt set_device =
         Evaluate(Call(DataType::Int(32), builtin::tvm_call_packed(),
                       {StringImm(runtime::symbol::tvm_set_device),
@@ -468,7 +466,6 @@ PrimFunc MakePackedAPI(PrimFunc func) {
       << " are used, but are not passed in as API arguments";
   func_ptr->buffer_map = Map<Var, Buffer>();
-  func_ptr->checked_type_ = func_ptr->func_type_annotation();
   func_ptr->ret_type = PrimType(DataType::Int(32)); // return the function.
   return func;
 }
@@ -516,8 +513,10 @@ tvm::transform::Pass MakePackedAPI() {
   return tvm::transform::CreateModulePass(pass_func, 0, "tl.MakePackedAPI", {});
 }
-TVM_REGISTER_GLOBAL("tl.transform.MakePackedAPI").set_body_typed([]() {
-  return MakePackedAPI();
-});
+TVM_FFI_STATIC_INIT_BLOCK({
+  namespace refl = tvm::ffi::reflection;
+  refl::GlobalDef().def("tl.transform.MakePackedAPI",
+                        []() { return MakePackedAPI(); });
+});
 } // namespace tl
...
@@ -3,6 +3,7 @@
  * \brief Merge the If Stmt in SeqStmt
  */
+#include <tvm/ffi/reflection/registry.h>
 #include <tvm/tir/analysis.h>
 #include <tvm/tir/builtin.h>
 #include <tvm/tir/op.h>
@@ -91,7 +92,10 @@ tvm::transform::Pass MergeIfStmt() {
   return CreatePrimFuncPass(pass_func, 0, "tl.MergeIfStmt", {});
 }
-TVM_REGISTER_GLOBAL("tl.transform.MergeIfStmt").set_body_typed(MergeIfStmt);
+TVM_FFI_STATIC_INIT_BLOCK({
+  namespace refl = tvm::ffi::reflection;
+  refl::GlobalDef().def("tl.transform.MergeIfStmt", MergeIfStmt);
+});
 } // namespace tl
 } // namespace tvm
@@ -23,8 +23,9 @@
  * memory allocation. This pass merges multiple TIR-level dynamic or static
  * shared memory allocations into one allocation.
  */
+#include <tvm/ffi/function.h>
+#include <tvm/ffi/reflection/registry.h>
 #include <tvm/runtime/logging.h>
-#include <tvm/runtime/registry.h>
 #include <tvm/tir/expr.h>
 #include <tvm/tir/op.h>
 #include <tvm/tir/stmt_functor.h>
@@ -1048,8 +1049,11 @@ Pass MergeSharedMemoryAllocations(bool enable_aggressive_merge = false,
                                   {});
 }
-TVM_REGISTER_GLOBAL("tl.transform.MergeSharedMemoryAllocations")
-    .set_body_typed(MergeSharedMemoryAllocations);
+TVM_FFI_STATIC_INIT_BLOCK({
+  namespace refl = tvm::ffi::reflection;
+  refl::GlobalDef().def("tl.transform.MergeSharedMemoryAllocations",
+                        MergeSharedMemoryAllocations);
+});
 } // namespace transform
 } // namespace tl
...
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied. See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
 /*!
  * \file warp_specialized_pipeline.cc
  * \brief Warp specialized Pipeline for cuda GPU (sm90+)
  */
+#include <tvm/ffi/reflection/registry.h>
 #include <tvm/tir/analysis.h>
 #include <tvm/tir/builtin.h>
 #include <tvm/tir/op.h>
@@ -220,14 +202,14 @@ private:
   Stmt VisitStmt_(const ForNode *op) final {
     loop_stack_.emplace_back(op->loop_var, op->extent);
     auto num_stages_anno = op->annotations.Get("num_stages");
-    if (!num_stages_anno.defined()) {
+    if (!num_stages_anno) {
       auto for_node = StmtExprMutator::VisitStmt_(op);
       loop_stack_.pop_back();
       return for_node;
     }
-    ICHECK(num_stages_anno.as<IntImmNode>());
-    int num_stages = static_cast<int>(num_stages_anno.as<IntImmNode>()->value);
+    ICHECK(num_stages_anno->as<IntImmNode>());
+    int num_stages = static_cast<int>(num_stages_anno->as<IntImmNode>()->value);
     const SeqStmtNode *pipeline_body_seq = op->body.as<SeqStmtNode>();
     CHECK(pipeline_body_seq) << "ValueError: The body of the software pipeline "
@@ -340,8 +322,10 @@ tvm::transform::Pass MultiVersionBuffer() {
   return CreatePrimFuncPass(pass_func, 0, "tl.MultiVersionBuffer", {});
 }
-TVM_REGISTER_GLOBAL("tl.transform.MultiVersionBuffer")
-    .set_body_typed(MultiVersionBuffer);
+TVM_FFI_STATIC_INIT_BLOCK({
+  namespace refl = tvm::ffi::reflection;
+  refl::GlobalDef().def("tl.transform.MultiVersionBuffer", MultiVersionBuffer);
+});
 } // namespace tl
 } // namespace tvm
@@ -3,6 +3,7 @@
  * \brief Lower L2 persistent annotation
  */
+#include <tvm/ffi/reflection/registry.h>
 #include <tvm/tir/analysis.h>
 #include <tvm/tir/builtin.h>
 #include <tvm/tir/stmt_functor.h>
@@ -59,8 +60,10 @@ tvm::transform::Pass PersistThreadblock() {
   return CreatePrimFuncPass(pass_func, 0, "tl.PersistThreadblock", {});
 }
-TVM_REGISTER_GLOBAL("tl.transform.PersistThreadblock")
-    .set_body_typed(PersistThreadblock);
+TVM_FFI_STATIC_INIT_BLOCK({
+  namespace refl = tvm::ffi::reflection;
+  refl::GlobalDef().def("tl.transform.PersistThreadblock", PersistThreadblock);
+});
 } // namespace tl
 } // namespace tvm
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
/*!
* \file pipeline_planning.cc
* \brief Plan the software pipeline
*/
#include <tvm/arith/analyzer.h> #include <tvm/arith/analyzer.h>
#include <tvm/ffi/reflection/registry.h>
#include <tvm/tir/analysis.h> #include <tvm/tir/analysis.h>
#include <tvm/tir/builtin.h> #include <tvm/tir/builtin.h>
#include <tvm/tir/stmt_functor.h> #include <tvm/tir/stmt_functor.h>
@@ -224,12 +201,12 @@ private:
    auto order_anno = loop->annotations.Get("tl_pipeline_order");
    auto stage_anno = loop->annotations.Get("tl_pipeline_stage");
    auto num_stages_anno = loop->annotations.Get("num_stages");
-    if (order_anno.defined() && stage_anno.defined()) {
+    if (order_anno && stage_anno) {
      // Check if order_anno or stage_anno contains -1, which means TMA+WS is
      // enabled
      bool ws_tma_enabled = false;
-      auto order_array = Downcast<Array<Integer>>(order_anno);
-      auto stage_array = Downcast<Array<Integer>>(stage_anno);
+      auto order_array = Downcast<Array<Integer>>(order_anno.value());
+      auto stage_array = Downcast<Array<Integer>>(stage_anno.value());
      for (const auto &val : order_array) {
        if (val->value == -1) {
          ws_tma_enabled = true;

@@ -249,20 +226,20 @@ private:
        return StmtExprMutator::VisitStmt_(loop);
      }
-      Map<String, ObjectRef> annotations;
+      Map<String, Any> annotations;
      for (const auto &[key, value] : loop->annotations) {
        if (key != "tl_pipeline_order") {
          annotations.Set(key, value);
        }
      }
-      annotations.Set(tir::attr::software_pipeline_order, order_anno);
+      annotations.Set(tir::attr::software_pipeline_order, order_anno.value());
      for (const auto &[key, value] : loop->annotations) {
        if (key != "tl_pipeline_stage") {
          annotations.Set(key, value);
        }
      }
-      annotations.Set(tir::attr::software_pipeline_stage, stage_anno);
+      annotations.Set(tir::attr::software_pipeline_stage, stage_anno.value());
      if (TargetHasAsyncCopy(target_) && use_async_copy_)
        annotations.Set(tir::attr::software_pipeline_async_stages,
                        Array<Integer>{0});

@@ -271,9 +248,9 @@ private:
      return for_node;
    }
-    if (!num_stages_anno.defined())
+    if (!num_stages_anno)
      return StmtExprMutator::VisitStmt_(loop);
-    int num_stages = num_stages_anno.as<IntImmNode>()->value;
+    int num_stages = num_stages_anno->as<IntImmNode>()->value;
    Stmt pipeline_body{nullptr};
    if (const auto *realize = loop->body.as<BlockRealizeNode>()) {
      const auto &block = realize->block;

@@ -443,7 +420,7 @@ private:
    }
    // Finally, make the pipeline annotation
-    Map<String, ObjectRef> annotations;
+    Map<String, Any> annotations;
    for (const auto &[key, value] : loop->annotations) {
      if (key != "num_stages") {
        annotations.Set(key, value);

@@ -496,8 +473,10 @@ tvm::transform::Pass PipelinePlanning() {
  return CreatePrimFuncPass(pass_func, 0, "tl.PipelinePlanning", {});
}

-TVM_REGISTER_GLOBAL("tl.transform.PipelinePlanning")
-    .set_body_typed(PipelinePlanning);
+TVM_FFI_STATIC_INIT_BLOCK({
+  namespace refl = tvm::ffi::reflection;
+  refl::GlobalDef().def("tl.transform.PipelinePlanning", PipelinePlanning);
+});

} // namespace tl
} // namespace tvm
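Beyond the registration change, this file shows the other recurring upstream shift: `annotations.Get(...)` now returns an `Optional` that is tested directly and unwrapped with `.value()`, and annotation maps carry `Any` values instead of `ObjectRef`. A condensed sketch of that pattern, for illustration only and reusing the names from the hunks above:

```cpp
// Sketch of the Optional/Any handling used throughout this refactor.
auto order_anno = loop->annotations.Get("tl_pipeline_order");
if (order_anno) {  // previously: order_anno.defined()
  // Unwrap the Optional before downcasting to the concrete array type.
  auto order_array = Downcast<Array<Integer>>(order_anno.value());

  Map<String, Any> annotations;  // previously: Map<String, ObjectRef>
  annotations.Set(tir::attr::software_pipeline_order, order_anno.value());
}
```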
/*!
 * \file simplify.cc
- * \brief Remove useless parameters of TL PrimFunc.
+ * \brief Statement simplifier based on analyzer and remove useless parameters
+ * of TL PrimFunc.
 */

+#include <tvm/ffi/reflection/registry.h>
#include <tvm/tir/buffer.h>
#include <tvm/tir/builtin.h>
#include <tvm/tir/stmt_functor.h>
@@ -19,39 +21,45 @@ namespace tl {
using namespace tir;
using namespace arith;

-struct SimplifyConfigNode : public tvm::AttrsNode<SimplifyConfigNode> {
+struct SimplifyConfigNode : public AttrsNodeReflAdapter<SimplifyConfigNode> {
  bool transitively_prove_inequalities;
  bool propagate_knowns_to_prove_conditional;
  bool propagate_knowns_to_simplify_expressions;
  bool convert_boolean_to_and_of_ors;
  bool apply_constraints_to_boolean_branches;

-  TVM_DECLARE_ATTRS(SimplifyConfigNode, "tl.transform.SimplifyConfig") {
-    TVM_ATTR_FIELD(transitively_prove_inequalities)
-        .describe("If true, simplify conditionals with transitive combinations "
-                  "of scoped constraints")
-        .set_default(false);
-
-    TVM_ATTR_FIELD(propagate_knowns_to_prove_conditional)
-        .describe("If true, known buffer values are propagated and used to "
-                  "statically prove conditionals")
-        .set_default(false);
-
-    TVM_ATTR_FIELD(propagate_knowns_to_simplify_expressions)
-        .describe("If true, known buffer values are propagated and used to "
-                  "replace BufferLoad wherever "
-                  "possible")
-        .set_default(false);
-
-    TVM_ATTR_FIELD(convert_boolean_to_and_of_ors)
-        .describe("If true, simplify conditionals into an AND of ORs")
-        .set_default(false);
-
-    TVM_ATTR_FIELD(apply_constraints_to_boolean_branches)
-        .describe("If true, simplify each branch of AND/OR "
-                  "under a constraints provided by the other branch")
-        .set_default(false);
+  static void RegisterReflection() {
+    namespace refl = tvm::ffi::reflection;
+    refl::ObjectDef<SimplifyConfigNode>()
+        .def_ro("transitively_prove_inequalities",
+                &SimplifyConfigNode::transitively_prove_inequalities,
+                "If true, simplify conditionals with transitive combinations "
+                "of scoped constraints",
+                refl::DefaultValue(false))
+        .def_ro("propagate_knowns_to_prove_conditional",
+                &SimplifyConfigNode::propagate_knowns_to_prove_conditional,
+                "If true, known buffer values are propagated and used to "
+                "statically prove conditionals",
+                refl::DefaultValue(false))
+        .def_ro("propagate_knowns_to_simplify_expressions",
+                &SimplifyConfigNode::propagate_knowns_to_simplify_expressions,
+                "If true, known buffer values are propagated and used to "
+                "replace BufferLoad wherever "
+                "possible",
+                refl::DefaultValue(false))
+        .def_ro("convert_boolean_to_and_of_ors",
+                &SimplifyConfigNode::convert_boolean_to_and_of_ors,
+                "If true, simplify conditionals into an AND of ORs",
+                refl::DefaultValue(false))
+        .def_ro("apply_constraints_to_boolean_branches",
+                &SimplifyConfigNode::apply_constraints_to_boolean_branches,
+                "If true, simplify each branch of AND/OR under a constraints "
+                "provided by the other "
+                "branch",
+                refl::DefaultValue(false));
  }

+  static constexpr const char *_type_key = "tl.transform.SimplifyConfig";
+  TVM_FFI_DECLARE_FINAL_OBJECT_INFO(SimplifyConfigNode, BaseAttrsNode);
+
  RewriteSimplifier::Extension GetEnabledExtensions() const {
    RewriteSimplifier::Extension flags = RewriteSimplifier::kNone;
@@ -200,6 +208,7 @@ public:
  TVM_DEFINE_NOTNULLABLE_OBJECT_REF_METHODS(SimplifyConfig, Attrs,
                                            SimplifyConfigNode);
};

+TVM_FFI_STATIC_INIT_BLOCK({ SimplifyConfigNode::RegisterReflection(); });
TVM_REGISTER_NODE_TYPE(SimplifyConfigNode);
TVM_REGISTER_PASS_CONFIG_OPTION("tl.Simplify", SimplifyConfig);
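The `SimplifyConfigNode` hunk above is the template for migrating attrs nodes off `TVM_DECLARE_ATTRS`: fields are described through `refl::ObjectDef(...).def_ro(...)`, the type key and object info are declared explicitly, and `RegisterReflection()` runs from a static-init block. A stripped-down sketch with a hypothetical single-field config:

```cpp
// Illustrative sketch; MyPassConfigNode is hypothetical and mirrors the
// SimplifyConfigNode migration shown above.
struct MyPassConfigNode : public AttrsNodeReflAdapter<MyPassConfigNode> {
  bool enable_fast_path;

  static void RegisterReflection() {
    namespace refl = tvm::ffi::reflection;
    refl::ObjectDef<MyPassConfigNode>().def_ro(
        "enable_fast_path", &MyPassConfigNode::enable_fast_path,
        "If true, take the fast path", refl::DefaultValue(false));
  }

  static constexpr const char *_type_key = "tl.transform.MyPassConfig";
  TVM_FFI_DECLARE_FINAL_OBJECT_INFO(MyPassConfigNode, BaseAttrsNode);
};

// Reflection is registered once at load time, alongside the node type.
TVM_FFI_STATIC_INIT_BLOCK({ MyPassConfigNode::RegisterReflection(); });
TVM_REGISTER_NODE_TYPE(MyPassConfigNode);
```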
@@ -207,7 +216,7 @@ TVM_REGISTER_PASS_CONFIG_OPTION("tl.Simplify", SimplifyConfig);
class StmtSimplifier : public IRMutatorWithAnalyzer {
public:
  static PrimFunc Apply(PrimFunc func, Analyzer *analyzer,
-                        Optional<SimplifyConfig> config_opt = NullOpt,
+                        Optional<SimplifyConfig> config_opt = std::nullopt,
                        bool simplify_arguments = false) {
    auto config = config_opt.value_or(AttrsWithDefaultValues<SimplifyConfig>());
    analyzer->rewrite_simplify.SetEnabledExtensions(

@@ -229,6 +238,7 @@ public:
    // Begin to remove useless var and buffer
    // First get used buffers
    simplifier.used_buffers_ = CollectUsedBuffers(func);
+
    bool param_updated = false;
    Array<Var> new_params;
    Map<Var, Buffer> new_buffer_map;
@@ -239,13 +249,18 @@ public:
                   simplifier.used_buffers_.end()) {
          new_params.push_back(var);
          new_buffer_map.Set(var, func->buffer_map[var]);
+        } else if (simplifier.used_in_buffer_def_.find(
+                       func->buffer_map[var]->data.get()) !=
+                   simplifier.used_in_buffer_def_.end()) {
+          new_params.push_back(var);
+          new_buffer_map.Set(var, func->buffer_map[var]);
        } else {
          param_updated = true;
        }
      }
    }
-    if (simplify_arguments && param_updated) {
+    if (param_updated) {
      return PrimFunc(new_params, func.CopyOnWrite()->body, func->ret_type,
                      new_buffer_map, func->attrs, func->span);
    } else {
@@ -444,7 +459,7 @@ private:
                            arith::ProofStrength::kSymbolicBound)) {
      return Bool(true);
    }
-    return NullOpt;
+    return std::nullopt;
  }
}

@@ -452,7 +467,7 @@ private:
  std::optional<ControlFlowGraph> touch_pattern_;
  Map<Var, PrimExpr> non_inlined_bindings_;
-  Optional<Stmt> current_stmt_{NullOpt};
+  Optional<Stmt> current_stmt_{std::nullopt};
  std::unordered_set<const VarNode *> used_in_buffer_def_;
  std::unordered_set<const VarNode *> used_vars_;
  std::unordered_set<const BufferNode *> used_buffers_;

@@ -469,7 +484,10 @@ tvm::transform::Pass Simplify(bool simplify_arguments = true) {
  return CreatePrimFuncPass(pass_func, 0, "tl.Simplify", {});
}

-TVM_REGISTER_GLOBAL("tl.transform.Simplify").set_body_typed(Simplify);
+TVM_FFI_STATIC_INIT_BLOCK({
+  namespace refl = tvm::ffi::reflection;
+  refl::GlobalDef().def("tl.transform.Simplify", Simplify);
+});

} // namespace tl
} // namespace tvm
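The remaining edits in this file are the mechanical `NullOpt` to `std::nullopt` swap wherever an `Optional<T>` is defaulted or returned empty. For example (sketch only, with a hypothetical helper mirroring the proof pattern in `StmtSimplifier` above):

```cpp
// Optional<T> now interoperates with std::nullopt directly.
Optional<Stmt> current_stmt{std::nullopt};  // previously: {NullOpt}

// Hypothetical helper: prove a condition or report "no answer".
Optional<PrimExpr> TryProveTrue(const PrimExpr &cond, arith::Analyzer *analyzer) {
  if (analyzer->CanProve(cond)) return Bool(true);
  return std::nullopt;  // previously: return NullOpt;
}
```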
/*!
 * \file thread_storage_sync.cc
 */
-#include <tvm/runtime/registry.h>
+#include <tvm/ffi/function.h>
+#include <tvm/ffi/reflection/registry.h>
#include <tvm/tir/analysis.h>
#include <tvm/tir/builtin.h>
#include <tvm/tir/expr.h>

@@ -269,7 +270,7 @@ private:
      scope_.pop_back();
      s.access.insert(s.access.end(), v.begin(), v.end());
-      num_partial_threads_ = NullOpt;
+      num_partial_threads_ = std::nullopt;
    } else {
      TileLangStorageAccessVisitor::VisitStmt_(op);
    }

@@ -371,8 +372,11 @@ Pass TileLangThreadPartialSync(String storage_scope) {
  return CreatePrimFuncPass(pass_func, 0, "tl.ThreadPartialSync", {});
}

-TVM_REGISTER_GLOBAL("tl.transform.ThreadPartialSync")
-    .set_body_typed(TileLangThreadPartialSync);
+TVM_FFI_STATIC_INIT_BLOCK({
+  namespace refl = tvm::ffi::reflection;
+  refl::GlobalDef().def("tl.transform.ThreadPartialSync",
+                        TileLangThreadPartialSync);
+});

} // namespace transform
} // namespace tl
@@ -20,7 +20,8 @@
/*!
 * \file thread_storage_sync.cc
 */
-#include <tvm/runtime/registry.h>
+#include <tvm/ffi/function.h>
+#include <tvm/ffi/reflection/registry.h>
#include <tvm/tir/analysis.h>
#include <tvm/tir/builtin.h>
#include <tvm/tir/expr.h>

@@ -367,7 +368,7 @@ private:
      scope_.pop_back();
      s.access.insert(s.access.end(), v.begin(), v.end());
-      num_partial_threads_ = NullOpt;
+      num_partial_threads_ = std::nullopt;
    } else {
      TileLangStorageAccessVisitor::VisitStmt_(op);
    }

@@ -786,7 +787,10 @@ tvm::transform::Pass ThreadSync(String storage_scope) {
  return CreatePrimFuncPass(pass_func, 0, "tl.ThreadSync", {});
}

-TVM_REGISTER_GLOBAL("tl.transform.ThreadSync").set_body_typed(ThreadSync);
+TVM_FFI_STATIC_INIT_BLOCK({
+  namespace refl = tvm::ffi::reflection;
+  refl::GlobalDef().def("tl.transform.ThreadSync", ThreadSync);
+});

} // namespace transform
} // namespace tl
@@ -22,7 +22,8 @@
 */
// Loop vectorizer as in Halide pipeline.
#include <tvm/arith/analyzer.h>
-#include <tvm/runtime/registry.h>
+#include <tvm/ffi/function.h>
+#include <tvm/ffi/reflection/registry.h>
#include <tvm/tir/analysis.h>
#include <tvm/tir/builtin.h>
#include <tvm/tir/expr.h>

@@ -631,7 +632,7 @@ public:
      return Scalarize(GetRef<Stmt>(op));
    }
    Stmt then_case = this->VisitStmt(op->then_case);
-    Optional<Stmt> else_case = NullOpt;
+    Optional<Stmt> else_case = std::nullopt;
    if (op->else_case) {
      else_case = this->VisitStmt(op->else_case.value());
    }

@@ -688,10 +689,6 @@ public:
    stmt = Substitute(stmt, {{var_, idx}});
    return For(idx, IntImm(var_->dtype, 0), var_lanes_, ForKind::kSerial, stmt);
  }
-  // ProducerStore
-  Stmt VisitStmt_(const ProducerStoreNode *op) final {
-    LOG(FATAL) << "ProducerProvide cannot appear in a TIR PrimFunc";
-  }

private:
  // analyzer

@@ -787,6 +784,10 @@ private:
  }
};

+inline bool TargetHasSVE() {
+  return Target::Current()->GetFeature<Bool>("has_sve").value_or(false);
+}
+
class LoopVectorizer : public StmtMutator {
public:
  Stmt VisitStmt_(const ForNode *op) final {

@@ -796,7 +797,7 @@ public:
    if (!extent_as_int || extent_as_int->value < 1) {
      bool is_scalable_expr =
          CheckContains::ExprContains(op->extent, arith::IsVScaleCall);
-      ICHECK(is_scalable_expr && arith::TargetHasSVE())
+      ICHECK(is_scalable_expr && TargetHasSVE())
          << "Failed to vectorize loop with extent " << op->extent
          << " for target " << Target::Current();
    }

@@ -837,7 +838,10 @@ tvm::transform::Pass VectorizeLoop(bool enable_vectorize = true) {
  return CreatePrimFuncPass(pass_func, 0, "tl.VectorizeLoop", {});
}

-TVM_REGISTER_GLOBAL("tl.transform.VectorizeLoop").set_body_typed(VectorizeLoop);
+TVM_FFI_STATIC_INIT_BLOCK({
+  namespace refl = tvm::ffi::reflection;
+  refl::GlobalDef().def("tl.transform.VectorizeLoop", VectorizeLoop);
+});

} // namespace tl
} // namespace tvm
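Since `arith::TargetHasSVE()` is no longer available upstream, the vectorizer now defines its own helper on top of the target-feature API, and the same shape works for any boolean feature flag. A sketch, where the generic wrapper name is hypothetical and `"has_sve"` is the flag actually queried above:

```cpp
// Query a boolean feature on the current target, defaulting to false.
inline bool CurrentTargetHasFeature(const char *feature) {
  return Target::Current()->GetFeature<Bool>(feature).value_or(false);
}

// Usage mirroring the diff: gate scalable vectorization on SVE support.
bool can_use_scalable_vectors = CurrentTargetHasFeature("has_sve");
```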
@@ -5,6 +5,7 @@
#include "arith/ir_visitor_with_analyzer.h"
#include "tir/analysis/var_use_def_analysis.h"

+#include <tvm/ffi/reflection/registry.h>
#include <tvm/tir/analysis.h>
#include <tvm/tir/builtin.h>
#include <tvm/tir/op.h>

@@ -447,7 +448,7 @@ private:
      order_anno.push_back(Integer(op_info.order));
      stage_anno.push_back(Integer(op_info.stage));
    }
-    Map<String, ObjectRef> for_annotations = op->annotations;
+    Map<String, Any> for_annotations = op->annotations;
    for_annotations.erase("tl_pipeline_group");
    for_annotations.Set("software_pipeline_order", order_anno);
    for_annotations.Set("software_pipeline_stage", stage_anno);

@@ -636,9 +637,9 @@ private:
  Stmt VisitStmt_(const ForNode *op) final {
    int num_stages = 1;
    auto num_stages_anno = op->annotations.Get("num_stages");
-    if (num_stages_anno.defined()) {
-      ICHECK(num_stages_anno.as<IntImmNode>());
-      num_stages = static_cast<int>(num_stages_anno.as<IntImmNode>()->value);
+    if (num_stages_anno) {
+      ICHECK(num_stages_anno->as<IntImmNode>());
+      num_stages = static_cast<int>(num_stages_anno->as<IntImmNode>()->value);
      ICHECK(num_stages_ == 1) << "Nested pipeline not supported.";
    }
    loop_stack_.emplace_back(op->loop_var, op->extent);

@@ -648,16 +649,16 @@ private:
    Array<Integer> stage_info_array;
    auto group_anno = op->annotations.Get("tl_pipeline_group");
-    if (group_anno.defined()) {
-      group_info_array = Downcast<Array<Array<Integer>>>(group_anno);
+    if (group_anno) {
+      group_info_array = Downcast<Array<Array<Integer>>>(group_anno.value());
    }
    auto order_anno = op->annotations.Get("tl_pipeline_order");
-    if (order_anno.defined()) {
-      order_info_array = Downcast<Array<Integer>>(order_anno);
+    if (order_anno) {
+      order_info_array = Downcast<Array<Integer>>(order_anno.value());
    }
    auto stage_anno = op->annotations.Get("tl_pipeline_stage");
-    if (stage_anno.defined()) {
-      stage_info_array = Downcast<Array<Integer>>(stage_anno);
+    if (stage_anno) {
+      stage_info_array = Downcast<Array<Integer>>(stage_anno.value());
    }
    PipelineInfo pipeline_info(group_info_array, order_info_array,

@@ -686,8 +687,8 @@ private:
    auto result = FilterByRole(op);
    Stmt grouped_for_node;
-    if (result.as<ForNode>() && group_anno.defined() &&
-        group_info_array.size() > 0 && !is_emitting_producer_) {
+    if (result.as<ForNode>() && group_anno && group_info_array.size() > 0 &&
+        !is_emitting_producer_) {
      GroupOpRewriter group_op_rewriter(pipeline_info_);
      auto for_node = Downcast<For>(result);
      grouped_for_node = group_op_rewriter(for_node);

@@ -707,7 +708,7 @@ private:
      for_node.CopyOnWrite()->annotations.erase("tl_pipeline_order");
      for_node.CopyOnWrite()->annotations.erase("tl_pipeline_stage");
    }
-    if (is_emitting_producer_ || !group_anno.defined() ||
+    if (is_emitting_producer_ || !group_anno ||
        group_info_array.size() == 0) {
      loop_stack_.pop_back();
      return for_node;

@@ -1230,8 +1231,10 @@ tvm::transform::Pass WarpSpecialized() {
  return CreatePrimFuncPass(pass_func, 0, "tl.WarpSpecialized", {});
}

-TVM_REGISTER_GLOBAL("tl.transform.WarpSpecialized")
-    .set_body_typed(WarpSpecialized);
+TVM_FFI_STATIC_INIT_BLOCK({
+  namespace refl = tvm::ffi::reflection;
+  refl::GlobalDef().def("tl.transform.WarpSpecialized", WarpSpecialized);
+});

} // namespace tl
} // namespace tvm
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied. See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
/*!
 * \file warp_specialized_pipeline.cc
 * \brief Warp specialized Pipeline for cuda GPU (sm90+)
 */

+#include <tvm/ffi/reflection/registry.h>
#include <tvm/tir/analysis.h>
#include <tvm/tir/builtin.h>
#include <tvm/tir/op.h>

@@ -131,7 +113,7 @@ private:
  Stmt VisitStmt_(const ForNode *op) final {
    auto order_anno = op->annotations.Get("tl_pipeline_order");
-    if (!order_anno.defined()) {
+    if (!order_anno) {
      return StmtExprMutator::VisitStmt_(op);
    }

@@ -281,8 +263,10 @@ tvm::transform::Pass RewriteWgmmaSync() {
  return CreatePrimFuncPass(pass_func, 0, "tl.RewriteWgmmaSync", {});
}

-TVM_REGISTER_GLOBAL("tl.transform.RewriteWgmmaSync")
-    .set_body_typed(RewriteWgmmaSync);
+TVM_FFI_STATIC_INIT_BLOCK({
+  namespace refl = tvm::ffi::reflection;
+  refl::GlobalDef().def("tl.transform.RewriteWgmmaSync", RewriteWgmmaSync);
+});

} // namespace tl
} // namespace tvm
@@ -4,6 +4,8 @@ from tilelang import tvm as tvm
import tilelang.language as T
import torch

+tilelang.disable_cache()
+
def matmul(M, N, K, block_M, block_N, block_K, dtype="float16", accum_dtype="float"):
    num_stages = 0
@@ -40,8 +40,8 @@ def tl_matmul(
    assert in_dtype in [
        "float16",
        "bfloat16",
-        "e4m3_float8",
-        "e5m2_float8",
+        "float8_e4m3",
+        "float8_e5m2",
        "int8",
    ], "Currently only float16 and int8 are supported"
    assert out_dtype in [

@@ -52,7 +52,7 @@ def tl_matmul(
    micro_size_x = micro_size_y = micro_size_k = 16

-    is_float8 = in_dtype in ["e4m3_float8", "e5m2_float8"]
+    is_float8 = in_dtype in ["float8_e4m3", "float8_e5m2"]
    if out_dtype == "int32" or is_float8:
        micro_size_k = 32

@@ -220,4 +220,5 @@ def test_assert_tl_matmul_bfloat16():
if __name__ == "__main__":
-    tilelang.testing.main()
+    # tilelang.testing.main()
+    test_assert_tl_matmul_bfloat16()
# ruff: noqa
from tilelang import tvm as tvm
import tilelang.testing
import tilelang.language as T
import torch
from typing import Optional, Union
from einops import rearrange, repeat
tilelang.testing.set_random_seed(42)
def naive_nsa_ref(q: torch.Tensor,
k: torch.Tensor,
v: torch.Tensor,
g_slc: torch.Tensor,
g_swa: torch.Tensor,
block_indices: torch.LongTensor,
block_counts: Optional[Union[torch.LongTensor, int]] = None,
block_size: int = 64,
window_size: int = 0,
scale: Optional[float] = None,
cu_seqlens: Optional[torch.LongTensor] = None,
head_first: bool = False) -> torch.Tensor:
if scale is None:
scale = k.shape[-1]**-0.5
if cu_seqlens is not None:
assert q.shape[0] == 1, "batch size must be 1 when cu_seqlens are provided"
if head_first:
raise RuntimeError(
"Sequences with variable lengths are not supported for head-first mode")
if head_first:
q, k, v, block_indices = map(lambda x: rearrange(x, 'b h t d -> b t h d'),
(q, k, v, block_indices))
g_slc, g_swa = map(lambda x: rearrange(x, 'b h t -> b t h'), (g_slc, g_swa))
if isinstance(block_counts, torch.Tensor):
block_counts = rearrange(block_counts, 'b h t -> b t h')
dtype = q.dtype
G = q.shape[2] // k.shape[2]
BS = block_size
S = block_indices.shape[-1]
k, v, block_indices = (repeat(x, 'b t h d -> b t (h g) d', g=G) for x in (k, v, block_indices))
if isinstance(block_counts, torch.Tensor):
block_counts = repeat(block_counts, 'b t h -> b t (h g)', g=G)
c = torch.arange(S).repeat_interleave(BS).unsqueeze(1).expand(-1, q.shape[2]).to(q.device)
q, k, v = map(lambda x: x.float(), (q, k, v))
o_slc = torch.zeros_like(v)
o_swa = torch.zeros_like(v) if window_size > 0 else None
varlen = True
if cu_seqlens is None:
varlen = False
B, T = q.shape[:2]
cu_seqlens = torch.cat(
[block_indices.new_tensor(range(0, B * T, T)),
block_indices.new_tensor([B * T])])
for i in range(len(cu_seqlens) - 1):
if not varlen:
q_b, k_b, v_b, g_slc_b, g_swa_b, i_b = q[i], k[i], v[i], g_slc[i], g_swa[
i], block_indices[i]
if isinstance(block_counts, torch.Tensor):
s_b = block_counts[i]
else:
s_b = block_counts
else:
T = cu_seqlens[i + 1] - cu_seqlens[i]
q_b, k_b, v_b, g_slc_b, g_swa_b, i_b = map(
lambda x: x[0][cu_seqlens[i]:cu_seqlens[i + 1]],
(q, k, v, g_slc, g_swa, block_indices))
if isinstance(block_counts, torch.Tensor):
s_b = block_counts[0][cu_seqlens[i]:cu_seqlens[i + 1]]
else:
s_b = block_counts
i_b = i_b.unsqueeze(-1) * BS + i_b.new_tensor(range(BS))
# [T, S*BS, HQ]
i_b = i_b.view(T, block_indices.shape[2], -1).transpose(1, 2)
for i_q in range(T):
# [HQ, D]
q_i = q_b[i_q] * scale
# [HQ]
g_slc_i = g_slc_b[i_q]
# [HQ]
g_swa_i = g_swa_b[i_q]
# [S*BS, HQ]
i_i = i_b[i_q]
# [HQ]
if isinstance(block_counts, torch.Tensor):
s_i = s_b[i_q]
else:
s_i = s_b
# [S*BS, HQ, -1]
k_i_slc, v_i_slc = map(
lambda x: x.gather(
0,
i_i.clamp(0, T - 1).unsqueeze(-1).expand(*i_i.shape, x.shape[-1])), (k_b, v_b))
# [S*BS, HQ]
attn_slc = torch.einsum('h d, n h d -> n h', q_i, k_i_slc).masked_fill(
torch.logical_or(i_i < 0, i_i > i_q) |
(c >= s_i if block_counts is not None else False), float('-inf')).softmax(0)
if not varlen:
o_slc[i, i_q] = torch.einsum('n h, n h v -> h v', attn_slc,
v_i_slc) * g_slc_i.unsqueeze(-1)
else:
o_slc[0][cu_seqlens[i] + i_q] = torch.einsum('n h, n h v -> h v', attn_slc,
v_i_slc) * g_slc_i.unsqueeze(-1)
if window_size > 0:
k_i_swa, v_i_swa = map(lambda x: x[max(0, i_q - window_size + 1):i_q + 1],
(k_b, v_b))
attn_swa = torch.einsum('h d, n h d -> n h', q_i, k_i_swa).softmax(0)
if not varlen:
o_swa[i, i_q] = torch.einsum('n h, n h v -> h v', attn_swa,
v_i_swa) * g_swa_i.unsqueeze(-1)
else:
o_swa[0][cu_seqlens[i] + i_q] = torch.einsum('n h, n h v -> h v', attn_swa,
v_i_swa) * g_swa_i.unsqueeze(-1)
    if head_first:
        o_slc = rearrange(o_slc, 'b t h d -> b h t d')
        if o_swa is not None:
            o_swa = rearrange(o_swa, 'b t h d -> b h t d')
return o_slc.to(dtype) + o_swa.to(dtype) if o_swa is not None else o_slc.to(dtype)
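In words, for each query position $t$ the reference gathers the keys and values of the selected blocks, applies a causal mask over the gathered positions, and adds an optional sliding-window branch; roughly,

$$
o_{\mathrm{slc}}(t) = g_{\mathrm{slc}}(t)\,\mathrm{softmax}\!\big(\mathrm{mask}\big(s\, q_t K_{\mathcal{I}(t)}^{\top}\big)\big)\, V_{\mathcal{I}(t)},
\qquad
o(t) = o_{\mathrm{slc}}(t) + o_{\mathrm{swa}}(t),
$$

where $\mathcal{I}(t)$ is the set of token positions covered by the selected (and causally valid) blocks and $s$ is the softmax scale, defaulting to $d^{-1/2}$.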
def native_sparse_attention(batch,
heads,
seq_len,
dim,
is_causal,
scale=None,
block_size=64,
groups=16,
selected_blocks=16,
num_stages=0,
threads=32):
if scale is None:
scale = (1.0 / dim)**0.5 * 1.44269504 # log2(e)
else:
scale = scale * 1.44269504 # log2(e)
head_kv = heads // groups
q_shape = [batch, seq_len, heads, dim]
kv_shape = [batch, seq_len, head_kv, dim]
block_indices_shape = [batch, seq_len, head_kv, selected_blocks]
block_indices_dtype = "int32"
dtype = "float16"
accum_dtype = "float"
block_S = block_size
block_T = min(128, tilelang.math.next_power_of_2(dim))
NK = tilelang.cdiv(dim, block_T)
NV = tilelang.cdiv(dim, block_T)
assert NK == 1, "The key dimension can not be larger than 256"
S = selected_blocks
G = groups
BS = block_S
BK = BV = block_T
@T.prim_func
def native_sparse_attention(
Q: T.Tensor(q_shape, dtype),
K: T.Tensor(kv_shape, dtype),
V: T.Tensor(kv_shape, dtype),
BlockIndices: T.Tensor(block_indices_shape, block_indices_dtype),
Output: T.Tensor(q_shape, dtype),
):
with T.Kernel(seq_len, NV, batch * head_kv, threads=threads) as (bx, by, bz):
Q_shared = T.alloc_shared([G, BK], dtype)
K_shared = T.alloc_shared([BS, BK], dtype)
V_shared = T.alloc_shared([BS, BV], dtype)
O_shared = T.alloc_shared([G, BV], dtype)
acc_s = T.alloc_fragment([G, BS], accum_dtype)
acc_s_cast = T.alloc_fragment([G, BS], dtype)
acc_o = T.alloc_fragment([G, BV], accum_dtype)
scores_max = T.alloc_fragment([G], accum_dtype)
scores_max_prev = T.alloc_fragment([G], accum_dtype)
scores_scale = T.alloc_fragment([G], accum_dtype)
scores_sum = T.alloc_fragment([G], accum_dtype)
logsum = T.alloc_fragment([G], accum_dtype)
i_t, i_v, i_bh = bx, by, bz
i_b, i_h = i_bh // head_kv, i_bh % head_kv
NS = S
T.copy(Q[i_b, i_t, i_h * G:(i_h + 1) * G, :], Q_shared)
T.fill(acc_o, 0)
T.fill(logsum, 0)
T.fill(scores_max, -T.infinity(accum_dtype))
for i in T.Pipelined(NS, num_stages=num_stages):
i_s = BlockIndices[i_b, i_t, i_h, i] * BS
if i_s <= i_t and i_s >= 0:
# [BS, BK]
T.copy(K[i_b, i_s:i_s + BS, i_h, :], K_shared)
if is_causal:
for i, j in T.Parallel(G, BS):
acc_s[i, j] = T.if_then_else(i_t >= (i_s + j), 0,
-T.infinity(acc_s.dtype))
else:
T.clear(acc_s)
T.gemm(
Q_shared,
K_shared,
acc_s,
transpose_B=True,
policy=T.GemmWarpPolicy.FullRow)
# Softmax
T.copy(scores_max, scores_max_prev)
T.fill(scores_max, -T.infinity(accum_dtype))
T.reduce_max(acc_s, scores_max, dim=1, clear=True)
for i in T.Parallel(G):
scores_scale[i] = T.exp2(scores_max_prev[i] * scale - scores_max[i] * scale)
for i, j in T.Parallel(G, BS):
acc_s[i, j] = T.exp2(acc_s[i, j] * scale - scores_max[i] * scale)
T.reduce_sum(acc_s, scores_sum, dim=1)
for i in T.Parallel(G):
logsum[i] = logsum[i] * scores_scale[i] + scores_sum[i]
T.copy(acc_s, acc_s_cast)
# Rescale
for i, j in T.Parallel(G, BV):
acc_o[i, j] *= scores_scale[i]
# V * softmax(Q * K)
T.copy(V[i_b, i_s:i_s + BS, i_h, i_v * BV:(i_v + 1) * BV], V_shared)
T.gemm(acc_s_cast, V_shared, acc_o, policy=T.GemmWarpPolicy.FullRow)
for i, j in T.Parallel(G, BV):
acc_o[i, j] /= logsum[i]
T.copy(acc_o, O_shared)
T.copy(O_shared, Output[i_b, i_t, i_h * G:(i_h + 1) * G, i_v * BV:(i_v + 1) * BV])
return native_sparse_attention
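The kernel folds $\log_2 e$ into `scale` (the `1.44269504` factor above) so the softmax can be evaluated with `T.exp2`, which typically lowers to a cheaper hardware instruction on NVIDIA GPUs. The rewrite relies on the identity

$$
e^{x \cdot s} = 2^{\,x \cdot s \cdot \log_2 e},
$$

so after pre-multiplying the scale by $\log_2 e \approx 1.44269504$, the expression `T.exp2(acc_s[i, j] * scale - scores_max[i] * scale)` reproduces the usual $\exp$-based, max-subtracted softmax numerator.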
def run_native_sparse_attention(batch,
heads,
seq_len,
dim,
is_causal,
scale=None,
block_size=64,
groups=16,
selected_blocks=16,
num_stages=0,
threads=32):
dtype = torch.float16
head_kv = heads // groups
program = native_sparse_attention(batch, heads, seq_len, dim, is_causal, scale, block_size,
groups, selected_blocks, num_stages, threads)
kernel = tilelang.compile(program, out_idx=-1)
Q = torch.randn((batch, seq_len, heads, dim), dtype=dtype).cuda()
K = torch.randn((batch, seq_len, head_kv, dim), dtype=dtype).cuda()
V = torch.randn((batch, seq_len, head_kv, dim), dtype=dtype).cuda()
g_slc = torch.ones((batch, seq_len, heads), dtype=dtype).cuda()
g_swa = torch.ones((batch, seq_len, heads), dtype=dtype).cuda()
block_indices = torch.full((batch, seq_len, head_kv, selected_blocks),
seq_len,
dtype=torch.long,
device='cuda')
for b in range(batch):
for t in range(seq_len):
for h in range(head_kv):
i_i = torch.randperm(max(1, (t // block_size)))[:selected_blocks]
block_indices[b, t, h, :len(i_i)] = i_i
block_indices = block_indices.sort(-1)[0]
block_counts = torch.randint(1, selected_blocks + 1, (batch, seq_len, head_kv), device='cuda')
out = kernel(Q, K, V, block_indices.to(torch.int32))
ref = naive_nsa_ref(
q=Q,
k=K,
v=V,
g_slc=g_slc,
g_swa=g_swa,
block_indices=block_indices,
block_counts=block_counts,
block_size=block_size,
scale=scale,
)
torch.testing.assert_close(ref, out, atol=1e-2, rtol=1e-2)
def test_tilelang_kernel_deepseek_nsa():
# disable pipeline
run_native_sparse_attention(
batch=2,
heads=64,
seq_len=1,
dim=16,
is_causal=True,
scale=None,
block_size=32,
groups=16,
selected_blocks=16,
num_stages=0,
threads=32)
# enable pipeline
run_native_sparse_attention(
batch=2,
heads=64,
seq_len=1,
dim=16,
is_causal=True,
scale=None,
block_size=32,
groups=16,
selected_blocks=16,
num_stages=2,
threads=32)
if __name__ == "__main__":
tilelang.testing.main()
@@ -97,7 +97,7 @@ def test_fp4_fp16_convert_close():
        block_K,
        "float16",
    )
-    print(program.script())
+
    kernel = tilelang.compile(program, out_idx=[1])

    B = torch.randint(0, 16, (N, K // 2), dtype=torch.uint8, device="cuda").to(torch.uint8)

@@ -642,4 +642,5 @@ def test_assert_tl_matmul_with_ladder_weight_only_transform_block_reduce_int4():
if __name__ == "__main__":
-    tilelang.testing.main()
+    # tilelang.testing.main()
+    test_fp4_fp16_convert_close()