- 13 Sep, 2023 1 commit
Paul Fultz II authored
- 18 Aug, 2023 1 commit
Paul Fultz II authored
- 10 Aug, 2023 1 commit
Krzysztof Drewniak authored
This PR constitutes the MIGraphX-side changes needed to avoid breaking the build in the presence of ROCmSoftwarePlatform/rocMLIR#1136, and it updates what data is sent to MLIR during the kernel generation and tuning process.
- 06 Aug, 2023 1 commit
Paul Fultz II authored
- 30 Jul, 2023 1 commit
Paul Fultz II authored
* Add initial tuning support
* Format
* Add extra param
* Format
* Use exhaustive flag
* Format
* Set expected shapes
* Format
* Format
* Fix missing symbol
* Format
* Add missing license header
* Format
* Update src/targets/gpu/include/migraphx/gpu/mlir.hpp
- 16 Jul, 2023 1 commit
Umang Yadav authored
- 05 Jul, 2023 1 commit
kahmed10 authored
Fixes the failing test case in #1815. Added a test that would otherwise fail without the change.
- 29 Jun, 2023 1 commit
Krzysztof Drewniak authored
Bump the MLIR commit to include the latest supported pointwise ops. Expand the MLIR approve list. Ensure that operations such as tanh() that don't have integer implementations (at least in MLIR) aren't used within MLIR modules. Add additional tests.
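As a rough illustration of that kind of guard (the helper name and op set below are hypothetical, not MIGraphX's actual approve-list code), a float-only op such as tanh is rejected whenever its inputs are integer types:

```cpp
// Hypothetical illustration of rejecting float-only ops for integer inputs;
// the op set and helper below are examples, not MIGraphX's actual approve list.
#include <iostream>
#include <string>
#include <unordered_set>

// Ops assumed (for this sketch) to lack integer implementations in MLIR.
static const std::unordered_set<std::string> float_only_ops = {"tanh", "exp", "log", "sigmoid"};

bool allowed_in_mlir_module(const std::string& op_name, bool integer_inputs)
{
    // e.g. tanh on int32 inputs would have no MLIR lowering, so keep it out of the module
    return !(integer_inputs && float_only_ops.count(op_name) > 0);
}

int main()
{
    std::cout << std::boolalpha;
    std::cout << allowed_in_mlir_module("tanh", true) << "\n";  // false
    std::cout << allowed_in_mlir_module("tanh", false) << "\n"; // true
    std::cout << allowed_in_mlir_module("add", true) << "\n";   // true
}
```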
- 22 Jun, 2023 1 commit
Zhuoran Yin authored
Add mlir quant_dot operator support
- 19 May, 2023 1 commit
Zhuoran Yin authored
Co-authored-by: Paul Fultz II <pfultz2@yahoo.com>
- 17 May, 2023 1 commit
Chris Austen authored
Move CI to support the ROCm 5.5 release
- 04 May, 2023 1 commit
Zhuoran Yin authored
Exposed the mlir_enabled() call to decide whether the lowering pipeline is enabled. Disabled the rewrite quantization pipeline in MLIR compilation. Added quant convolution as an anchor op. Fixed the return type expectations. Added the fallback HIP implementation for quantizelinear and dequantizelinear. Advice on improving the quantizelinear implementation is welcome.
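For context, the core quantizelinear/dequantizelinear math follows the usual ONNX formulas; the sketch below is a standalone illustration of those formulas, not the HIP fallback kernels added in this change:

```cpp
// Standalone sketch of the ONNX-style quantizelinear/dequantizelinear math.
// This is only an illustration of the formulas, not the HIP fallback implementation.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>

// y = saturate(round(x / scale) + zero_point), saturated to the int8 range here
std::int8_t quantizelinear(float x, float scale, std::int8_t zero_point)
{
    int q = static_cast<int>(std::nearbyint(x / scale)) + zero_point;
    q     = std::max(-128, std::min(127, q));
    return static_cast<std::int8_t>(q);
}

// x = (q - zero_point) * scale
float dequantizelinear(std::int8_t q, float scale, std::int8_t zero_point)
{
    return (static_cast<int>(q) - zero_point) * scale;
}

int main()
{
    float scale    = 0.5f;
    std::int8_t zp = 10;
    std::int8_t q  = quantizelinear(3.2f, scale, zp); // round(3.2/0.5) + 10 = 16
    std::cout << static_cast<int>(q) << " -> " << dequantizelinear(q, scale, zp) << "\n"; // 16 -> 3
}
```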
- 13 Apr, 2023 1 commit
Zhuoran Yin authored
- 06 Apr, 2023 1 commit
Charlie Lin authored
Examples:
bin/driver verify /codes/onnx_models/resnet50-v1-7/resnet50-v1-7.onnx --split-single-dyn-dim --batch 3 --dyn-input-dim @data "[{min:1, max:4}, 3, 224, 224]"
bin/driver compile /codes/onnx_models/resnet50-v1-7/resnet50-v1-7.onnx --split-single-dyn-dim --default-dyn-dim "{min:1, max:10}" --output resnet50_batch1-10.mxr
bin/driver perf resnet50_batch1-10.mxr --batch 4
- 27 Mar, 2023 1 commit
Manupa Karunaratne authored
[MLIR] Add dot offloads with manual tuning support. This commit adds dot + pointwise fusion support along with manual tuning using rocMLIR.
- 18 Mar, 2023 1 commit
Umang Yadav authored
Fixes #1595
- 31 Jan, 2023 1 commit
Umang Yadav authored
Added a CMake flag for hipRTC: MIGRAPHX_USE_HIPRTC. Added Jenkins stages for hipRTC. Fixes for some of the pending hipRTC issues.
- 06 Dec, 2022 2 commits
Ted Themistokleous authored
Needed when debugging with MIGRAPHX_TRACE_EVAL() to show tuples. Without this, we break when reading our buffer due to the use of visit(). This came up as part of the #1283 debugging.
jungpark-mlir authored
Update the dialect registration interface. Update the second build pipeline call and use the full arch name.
- 27 Oct, 2022 1 commit
Chris Austen authored
Upgraded Dockerfiles and fixed tidy issues to make Ubuntu 20.04 and ROCm 5.3.0 the default
- 18 Oct, 2022 1 commit
Paul Fultz II authored
* Enable non-standard shape
* Use perfdb for non xdlops
* Fix transpose+broadcast strides
Co-authored-by: jungpark-mlir <jungwook.park@amd.com>
- 13 Oct, 2022 1 commit
Charlie Lin authored
Removes use_dynamic_same_auto_pad. Changes padding_mode to be used for dynamic padding. Moves compute_padded_shape to pad_calc.cpp, as it will be used in other dynamic padding cases. Fixes a same_lower compute_padded_shape bug and adds a test.
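For reference, the SAME_UPPER/SAME_LOWER padding math for one spatial dimension looks like the sketch below; this is a generic illustration of the ONNX convention, not MIGraphX's compute_padded_shape itself:

```cpp
// Generic illustration of SAME_UPPER / SAME_LOWER padding for one spatial
// dimension (ONNX convention); not the actual compute_padded_shape code.
#include <cstddef>
#include <iostream>
#include <utility>

// Returns {pad_begin, pad_end} so that output_len == ceil(input_len / stride).
std::pair<std::size_t, std::size_t> same_padding(
    std::size_t input_len, std::size_t kernel, std::size_t stride, std::size_t dilation, bool same_lower)
{
    std::size_t out_len   = (input_len + stride - 1) / stride; // ceil division
    std::size_t eff_k     = (kernel - 1) * dilation + 1;
    std::ptrdiff_t total  = static_cast<std::ptrdiff_t>((out_len - 1) * stride + eff_k) -
                           static_cast<std::ptrdiff_t>(input_len);
    std::size_t pad_total = total > 0 ? static_cast<std::size_t>(total) : 0;
    std::size_t lesser    = pad_total / 2;
    std::size_t greater   = pad_total - lesser;
    // SAME_LOWER puts the extra padding (when pad_total is odd) at the beginning.
    return same_lower ? std::make_pair(greater, lesser) : std::make_pair(lesser, greater);
}

int main()
{
    auto [before, after] = same_padding(224, 7, 2, 1, /*same_lower=*/true);
    std::cout << before << ", " << after << "\n"; // 3, 2
}
```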
- 04 Oct, 2022 1 commit
Ted Themistokleous authored
Stream sync changes and associated API level changes
- 29 Sep, 2022 1 commit
Umang Yadav authored
Improvements/additions to be made: changes for quant_convolution, changes for deconvolution, and macros for MIOpen status checks.
- 28 Sep, 2022 1 commit
Umang Yadav authored
test_gpu_pack_int8_args fails on gfx908 machines because it doesn't set the compute_fp32 flag correctly. This PR fixes the test so that it checks the device name and rocBLAS version and sets the flag accordingly.
- 27 Sep, 2022 1 commit
Ted Themistokleous authored
Implement operator for CPU and GPU implementations
- 23 Sep, 2022 1 commit
Paul Fultz II authored
* Remove device functions
* Update tests
- 16 Sep, 2022 1 commit
Umang Yadav authored
* Fix typo for add_sigmoid
- 15 Sep, 2022 1 commit
Lixun Zhang authored
* Replaced `find_library` with `find_package` to locate the MLIR static library
* Unified the include dir for headers and removed backward compatibility
* Embedded the external/include dir into the exported library
- 04 Aug, 2022 1 commit
Charlie Lin authored
* Dynamic shape handling in shape object (a sketch of the dynamic-dimension idea follows this list)
* rewrite empty lens multibroadcast test
* Shape class changes to handle dynamic
* More throw errors for functions that don't make sense for dynamic shape
* Print output changes
* Serialization changes
* Fixing serialization errors
* Remove const on dyn_dim copy getters
* Dynamic shape tests
* Fix serialize errors
* Add dyn_data struct to avoid ambiguous constructor
* Tidy fix: emplace_back() over for loop
* Tidy fix: use move
* Use std::initializer_list in constructor. Reverts the dyn_data struct change. Should get around the ambiguous braced initialization list error
* avoid typedef
* element_space, min,max,opt _lens change
* formatting
* Comments fix
* dynamic bytes() test
* Serialize and reflect changes
* formatting
* Test the dynamic lens functions
* progress
* Formatting
* Dynamic conv draft progress
* Add operator<< tests for coverage
* Coverage update
* Add to conv dynamic batch test
* Dynamic image size test
* Dynamic weight handling
* Dyn image shape test change, fix dyn weight cond
* Comment update
* Dynamic weights shape test and fix
* Use ternary operator
* Tidy fixes
* Handle dynamic graph input shapes in ONNX parser
* Formatting
* Handle dynamic shape for convolution
* formatting
* cppcheck fixes
* Add onnx test files
* Fix typo
* Disable auto_pad for dynamic input shape
* check_shapes object checks for allowing dynamic shapes
* Fix any_of
* Change to maintain const objectness
* Formatting
* Check shapes allow dynamic
* Refactor compute_shape() call into op.compute(). Allows for per-operator differences with handling dynamic shape. Fix operation.hpp change to use the generator
* Comment fix
* Refactor normalize_attributes() calls to use max_lens()
* Comment addition
* Update other normalize_attributes() calls
* Change to using constructor and add tests
* Use const member function
* Add more dynamic shape support
* Add tests for error code coverage
* Fix opt shape bug and add shape tests
* capture all by ref
* Fix typo with img shape calculation
* Add more tests
* dynamic auto pad attempt. Linker error with pad_calc.cpp
* Fix parse dyn auto_pad. Should only need to use dynamic auto pad when the image shape or kernel shape are dynamic. For a dynamic batch size, the auto pad calculation is the same.
* Fix linking error
* Fix auto_pad bug. Fixed input tensor with auto_pad setting on
* auto_pad onnx tests
* Fix auto_pad calculation, evaluate in ref_conv, add ref_ops tests
* Add shape tests, fix bugs
* Refactor first two output dynamic len calculation
* Conv MLIR test update
* i64 MLIR test fix
* Fix MLIR test typo
Co-authored-by: Chris Austen <causten@users.noreply.github.com>
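As referenced above, here is a minimal standalone sketch of the dynamic-dimension idea (illustrative names only, not the actual MIGraphX shape API): each dimension carries a {min, max, opt} range, and max_lens() returns the static upper-bound lengths.

```cpp
// Illustrative sketch of a dynamic dimension with {min, max, opt} bounds and a
// max_lens()-style query; the names here are examples, not MIGraphX's shape class.
#include <cstddef>
#include <iostream>
#include <vector>

struct dyn_dim
{
    std::size_t min;
    std::size_t max;
    std::size_t opt; // optimal/most-likely length
    bool is_fixed() const { return min == max; }
};

// Upper-bound lengths, e.g. for computing a worst-case buffer size.
std::vector<std::size_t> max_lens(const std::vector<dyn_dim>& dims)
{
    std::vector<std::size_t> lens;
    lens.reserve(dims.size());
    for(const auto& d : dims)
        lens.push_back(d.max);
    return lens;
}

int main()
{
    // A dynamic batch between 1 and 10, with fixed 3x224x224 images.
    std::vector<dyn_dim> dims = {{1, 10, 1}, {3, 3, 3}, {224, 224, 224}, {224, 224, 224}};
    for(auto len : max_lens(dims))
        std::cout << len << " "; // 10 3 224 224
    std::cout << "\n";
}
```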
- 29 Jul, 2022 1 commit
Umang Yadav authored
Currently, while copying a host buffer to the device, it first registers/maps the host buffer pointer into the device's address space. If the host buffer was allocated by hipHostMalloc, it is already implicitly registered to the device's address space and there is no need to register it again. This PR adds a check for that case.
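One way to picture such a check is sketched below, under the assumption that hipHostGetDevicePointer succeeds only for host memory that is already pinned or registered (for example via hipHostMalloc); this is an illustration, not the exact check added in this PR:

```cpp
// Sketch of checking whether a host buffer already has a device mapping before
// registering it; assumes hipHostGetDevicePointer fails for unregistered memory.
#include <hip/hip_runtime.h>
#include <iostream>

bool needs_host_register(void* host_ptr)
{
    void* dev_ptr = nullptr;
    // Succeeds for memory allocated with hipHostMalloc (or already registered),
    // in which case there is no need to call hipHostRegister again.
    hipError_t status = hipHostGetDevicePointer(&dev_ptr, host_ptr, 0);
    return status != hipSuccess;
}

int main()
{
    void* pinned = nullptr;
    if(hipHostMalloc(&pinned, 1024, hipHostMallocDefault) != hipSuccess)
        return 1;
    std::cout << "pinned buffer needs register: " << needs_host_register(pinned) << "\n"; // 0
    hipHostFree(pinned);
}
```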
- 03 Jul, 2022 1 commit
Paul Fultz II authored
* Add mlir c api
* Formatting
* Create a type attribute
* Formatting
* Parse module
* Formatting
* Add mlir dump function
* Add test case
* Formatting
* Fix tidy issues
* Update mlir version
* Update to newer mlir
* Format
* Move mlir to the gpu and update the test
* Formatting
* Fix bug when appending module
* Format
* Remove old cmake flag
* Update message
* Add return
* Format
* Add mlir_compile
* Format
* Register dialect
* Handle unsigned integers
* Don't provide output for return instruction
* Format
* Add code to insert memrefs
* Format
* Add mlir verification
* Formatting
* Enable pointwise_fusion
* Disable eliminate_data_type
* Set kernel name
* Format
* Fix device name
* Formatting
* Fix output arg
* Format
* Updates
* Update hash
* Add fuse_mlir pass
* Format
* Add fuse mlir
* Format
* Update mlir
* Sort parameter names
* Format
* Reenable disabled passes
* Remove old mlir conv
* Remove asym default padding
* Add more verbose tracing
* Format
* Fix compilation errors
* Format
* Whitelist operators
* Format
* Add namespace
* Format
* Update triple
* Format
* Use func dialect
* Format
* Use func.return
* Format
* Upgrade mlir version
* Add comment
* Handle symmetrical padding
* Format
* Cleanup debug output
* Format
* List failed tests
* Move mlir compile to jit pipeline
* Format
* Update version
* Add source locations
* Format
* Correctly add module
* Format
* Update failed tests
* Fix failures when mlir is disabled
* Format
* Update mlir version
* Check type for fp32
* Format
* Remove failed test
* Update mlir in driver
* Tidy fixes
* Format
* Tidy fixes
* Format
* Fix const
* Remove from requirements
* Fix cmake version
* Fix tidy warning
* Use another ifdef
* Fix tidy
* Other tidy fix
* Format
* Update hash
* Add missing license files
* Format
* Format
* Fix function name
- 22 Jun, 2022 1 commit
Ted Themistokleous authored
Updated each source file in the repo with the existing license.
- 17 Jun, 2022 1 commit
kahmed10 authored
* add allocate op header
* formatting
* add replace_allocate pass
* formatting
* move output param to remove_allocate pass
* formatting
* fix bugs in replace_allocate pass
* formatting
* fix verify if tests
* formatting
* move if op logic
* formatting
* cleanup lowering
* cleanup lowering
* formatting
* fix tidy
* formatting
* fix tidy
* add cpu allocate check
* formatting
* change cpu allocate in pass
* formatting
* add some tests for replace_allocate pass
* formatting
* pass by ref
* fix run_pass
* formatting
* update variable name for module
* update dce to use contains() and fix tidy
* formatting
* update cppcheck
* add if test
* formatting
* add if test
* rename var to mod_output_names
* formatting
* remove conditional
* update allocate op and tests
* formatting
* update replace_allocate tests
* update create_output_names() and conditional in replace_allocate
* formatting
* remove extra variable in replace_allocate
* update tools script for allocation_model
Co-authored-by: Umang Yadav <29876643+umangyadav@users.noreply.github.com>
Co-authored-by: Chris Austen <causten@users.noreply.github.com>
Co-authored-by: Paul Fultz II <pfultz2@yahoo.com>
- 06 May, 2022 1 commit
Paul Fultz II authored
Add compile tests for gpu math functions
- 29 Mar, 2022 1 commit
Paul Fultz II authored
This adds the infrastructure so we can compile everything in parallel, whereas before only pointwise kernels were compiled in parallel. This will also directly integrate with lowering and the gpu-driver. The pointwise and roialign kernels use this infrastructure. Scatternd does not, since it requires a standard shape. This also makes it easier to add new runtime-compiled kernels in the future.
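The general shape of that infrastructure can be sketched with std::async over a list of independent compile jobs; the compile_one/compiled_kernel names below are placeholders, not MIGraphX's compile pipeline:

```cpp
// Generic sketch of compiling a batch of independent kernel sources in parallel;
// the compiled_kernel/compile_one names are placeholders, not MIGraphX's API.
#include <future>
#include <iostream>
#include <string>
#include <vector>

struct compiled_kernel
{
    std::string name;
};

// Placeholder for an expensive per-kernel compilation step.
compiled_kernel compile_one(const std::string& src) { return compiled_kernel{"binary-for-" + src}; }

std::vector<compiled_kernel> compile_all(const std::vector<std::string>& sources)
{
    std::vector<std::future<compiled_kernel>> jobs;
    jobs.reserve(sources.size());
    for(const auto& src : sources)
        jobs.push_back(std::async(std::launch::async, compile_one, src));
    std::vector<compiled_kernel> results;
    results.reserve(jobs.size());
    for(auto& job : jobs)
        results.push_back(job.get());
    return results;
}

int main()
{
    for(const auto& k : compile_all({"pointwise0", "roialign0"}))
        std::cout << k.name << "\n";
}
```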
- 25 Feb, 2022 1 commit
Paul Fultz II authored
Wrapped in an any_ptr class so that the type can be checked at runtime for a mismatch.
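A minimal sketch of a runtime-checked erased pointer (illustrative only; the real any_ptr differs in detail):

```cpp
// Minimal illustration of a type-erased pointer whose type is checked at
// runtime on retrieval; not the actual MIGraphX any_ptr implementation.
#include <iostream>
#include <stdexcept>
#include <typeindex>

class any_ptr
{
    void* ptr            = nullptr;
    std::type_index type = typeid(void);

    public:
    any_ptr() = default;
    template <class T>
    any_ptr(T* p) : ptr(p), type(typeid(T))
    {
    }

    // Throws if the requested type does not match the stored type.
    template <class T>
    T* get() const
    {
        if(std::type_index{typeid(T)} != type)
            throw std::runtime_error("any_ptr: type mismatch");
        return static_cast<T*>(ptr);
    }
};

int main()
{
    int x = 42;
    any_ptr p(&x);
    std::cout << *p.get<int>() << "\n"; // ok: prints 42
    try
    {
        p.get<float>(); // wrong type: throws
    }
    catch(const std::runtime_error& e)
    {
        std::cout << e.what() << "\n";
    }
}
```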
- 09 Feb, 2022 1 commit
Paul Fultz II authored
There is now a MIGRAPHX_DISABLE_POINTWISE_FUSION flag to disable it.
- 17 Nov, 2021 1 commit
Paul Fultz II authored
Currently, eliminate_contiguous will never remove contiguous for operators that use module inputs, because it doesn't pass the module inputs to compute_shape.
- Update to pass the module inputs correctly to compute_shape
- Fix the overloads of compute_shape so that when passed an empty vector of module inputs it calls the overload without module inputs (see the sketch after this list)
- Add tests with contiguous and a pointwise module function
- Move the add_pointwise function to a separate header to reuse across different tests
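The overload dispatch mentioned above can be pictured with this simplified sketch (names and signatures are illustrative, not the actual MIGraphX ones):

```cpp
// Simplified sketch of dispatching between compute_shape overloads based on
// whether any module inputs were passed; not the actual MIGraphX signatures.
#include <iostream>
#include <string>
#include <vector>

struct shape
{
    std::string desc;
};
struct module_ref
{
    std::string name;
};

shape compute_shape(const std::vector<shape>& inputs)
{
    return shape{"shape-from-" + std::to_string(inputs.size()) + "-inputs"};
}

shape compute_shape(const std::vector<shape>& inputs, const std::vector<module_ref>& mods)
{
    // An empty module-input list falls back to the plain overload, so operators
    // without modules (e.g. contiguous) keep working through the same call site.
    if(mods.empty())
        return compute_shape(inputs);
    return shape{"shape-from-module-" + mods.front().name};
}

int main()
{
    std::vector<shape> inputs = {shape{"a"}, shape{"b"}};
    std::cout << compute_shape(inputs, {}).desc << "\n";                         // plain overload
    std::cout << compute_shape(inputs, {module_ref{"pointwise0"}}).desc << "\n"; // module overload
}
```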
- 08 Oct, 2021 1 commit
Umang Yadav authored
Previously the dot operator was defined as C = alpha * A . B + beta * C, where * is scalar multiplication and . is the dot product or matrix multiplication, depending on the dimensions of the inputs. The aim is to define the dot operator as C = A . B, without alpha or beta. To achieve the same effect as alpha and beta: (1) multiply one of the inputs to the dot operator by the alpha value; (2) if beta is present, multiply C by beta and add it to the output of step 1.
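A small standalone numeric check of that rewrite (plain C++, not the MIGraphX change itself): folding alpha into one dot input and adding beta*C afterward reproduces the old alpha/beta result.

```cpp
// Numeric illustration of replacing C = alpha*(A . B) + beta*C with a plain
// dot plus explicit scaling: (alpha*A) . B + beta*C.
#include <cstddef>
#include <iostream>
#include <vector>

using matrix = std::vector<std::vector<double>>;

matrix dot(const matrix& a, const matrix& b)
{
    matrix c(a.size(), std::vector<double>(b[0].size(), 0.0));
    for(std::size_t i = 0; i < a.size(); i++)
        for(std::size_t k = 0; k < b.size(); k++)
            for(std::size_t j = 0; j < b[0].size(); j++)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

matrix scale(double s, const matrix& a)
{
    matrix r = a;
    for(auto& row : r)
        for(auto& x : row)
            x *= s;
    return r;
}

matrix add(const matrix& a, const matrix& b)
{
    matrix r = a;
    for(std::size_t i = 0; i < r.size(); i++)
        for(std::size_t j = 0; j < r[i].size(); j++)
            r[i][j] += b[i][j];
    return r;
}

int main()
{
    double alpha = 2.0, beta = 0.5;
    matrix a = {{1, 2}, {3, 4}};
    matrix b = {{5, 6}, {7, 8}};
    matrix c = {{1, 1}, {1, 1}};

    // Old definition: alpha*(A . B) + beta*C
    matrix old_form = add(scale(alpha, dot(a, b)), scale(beta, c));
    // New definition: dot is just A . B; alpha is folded into A and beta*C is added explicitly
    matrix new_form = add(dot(scale(alpha, a), b), scale(beta, c));

    std::cout << (old_form == new_form ? "equal" : "different") << "\n"; // equal
}
```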