- 25 Feb, 2022 3 commits
-
-
Paul Fultz II authored
Add with_type to shape class
-
Paul Fultz II authored
Needed for custom_op so we can generically convert the C type back to the C++ type in the function pointer.
-
Paul Fultz II authored
Wrapped in an any_ptr class so the type can be checked at runtime for a mismatch.
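For illustration, a minimal sketch of the pattern (hypothetical and simplified; the actual any_ptr class may differ):

    #include <stdexcept>
    #include <typeinfo>

    // Type-erased pointer that remembers the pointee type it was built
    // from, so a mismatched get<T>() fails at runtime instead of
    // silently reinterpreting memory.
    class any_ptr
    {
        void* ptr                = nullptr;
        const std::type_info* ti = nullptr;

        public:
        any_ptr() = default;
        template <class T>
        any_ptr(T* p) : ptr(p), ti(&typeid(T))
        {
        }

        template <class T>
        T* get() const
        {
            // Runtime check that T matches the type used at construction
            if(ti == nullptr or *ti != typeid(T))
                throw std::runtime_error("any_ptr: type mismatch");
            return static_cast<T*>(ptr);
        }
    };

A custom_op can then pass a type-erased handle through the C API and recover the C++ object with get<T>(), turning a type mismatch into a runtime error rather than undefined behavior.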
-
- 24 Feb, 2022 1 commit
-
-
Paul Fultz II authored
* Make doc/CMakeLists.txt standalone
* Switch to using rocm-cmake modules for document generation
* Add CONFIGURE_DEPENDS to file(GLOB) so it updates without an explicit cmake run
* Add a STRINGS property for the build type to make it easier to switch build types with ccmake
* Various fixes and improvements
-
- 23 Feb, 2022 1 commit
-
-
Shucai Xiao authored
This PR resolves two problems from issue #999: non-standard shape inputs to reshape and reduce_mean. Three fixes:
* Any operator that requires a standard input shape now gets a contiguous added for each of its inputs. This is done by adding an attribute to such operators; the auto_contiguous pass then adds a contiguous to every input of any operator carrying that attribute (see the sketch below).
* In eliminate_contiguous, when computing whether a contiguous can be removed, we use all the updated args, not just the one being checked.
* In two optimizations in simplify_reshape, we remove contiguous from the reshaper name list, since eliminate_contiguous will remove a contiguous whenever it can be removed.
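A rough sketch of that attribute-driven insertion in MIGraphX-style C++ (the attribute name and API details here are assumptions for illustration, not the actual pass code):

    // For each operator tagged as requiring a standard input shape,
    // insert a contiguous in front of every one of its inputs.
    for(auto ins : iterator_for(m)) // m is the module being transformed
    {
        auto attrs = ins->get_operator().attributes();
        if(not attrs.get("require_std_shape", false))
            continue;
        auto args = ins->inputs();
        for(auto& arg : args)
            arg = m.insert_instruction(ins, make_op("contiguous"), arg);
        m.replace_instruction(ins, ins->get_operator(), args);
    }

Inserting the contiguous right before the consuming instruction keeps the change local; eliminate_contiguous can later drop any insertion that turns out to be unnecessary.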
-
- 16 Feb, 2022 2 commits
-
-
Umang Yadav authored
Support nonstandard shapes like slice, broadcast and transpose for the unsqueeze op
-
kahmed10 authored
-
- 11 Feb, 2022 1 commit
-
-
kahmed10 authored
* Add submodule test
* Remove for loop
* Simplify reshape test
-
- 09 Feb, 2022 2 commits
-
-
Paul Fultz II authored
There is now a MIGRAPHX_DISABLE_POINTWISE_FUSION environment variable to disable pointwise fusion.
-
Umang Yadav authored
Support slice, broadcast and transpose shapes for the squeeze op.
-
- 08 Feb, 2022 2 commits
-
-
Paul Fultz II authored
This causes incorrect memory coloring, which was causing the accuracy failures in the vision model when enabling the pointwise fusions. Resnet50, inceptionv3 and inceptionv4 now verify in the driver.
-
Paul Fultz II authored
Enforce types to avoid compilation errors in pointwise fusions. This fixes a compile failure with gpt-2, fp16 on Navi.
-
- 02 Feb, 2022 1 commit
-
-
Paul Fultz II authored
Currently, MIGRAPHX_TRACE_EVAL=2 prints out the entire output buffer, which can produce a lot of output. To make it easier to inspect and debug, MIGRAPHX_TRACE_EVAL=2 now prints only 10 elements from the buffer (the first 5 and the last 5) and shows any floating-point classifications found in the buffer (i.e. NaNs, infinities, etc.). The previous behavior can still be enabled with MIGRAPHX_TRACE_EVAL=3.
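As a self-contained illustration of the trimming (an assumed helper, not the actual MIGraphX printing code):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Print the first 5 and last 5 elements of a buffer, then report
    // any non-finite values (NaNs and infinities) found in it.
    void print_trimmed(const std::vector<float>& buf)
    {
        std::size_t n = buf.size();
        for(std::size_t i = 0; i < n; i++)
        {
            if(i < 5 or i + 5 >= n)
                std::printf("%g, ", buf[i]);
            else if(i == 5)
                std::printf("..., ");
        }
        std::printf("\n");
        std::size_t nans = 0, infs = 0;
        for(float x : buf)
        {
            if(std::isnan(x))
                nans++;
            if(std::isinf(x))
                infs++;
        }
        if(nans > 0 or infs > 0)
            std::printf("found %zu nan(s), %zu inf(s)\n", nans, infs);
    }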
-
- 31 Jan, 2022 1 commit
-
-
Shucai Xiao authored
* Use parse_resize to parse the upsample operator
-
- 28 Jan, 2022 2 commits
-
-
Paul Fultz II authored
* Enable auto vectorization
* Handle vector types with convert function
* Don't vectorize when it will cause problems with preload
-
turneram authored
* Add mean op onnx parser and unit tests
* Refactor parse_mean to use add_broadcastable_binary_op
-
- 27 Jan, 2022 1 commit
-
-
Umang Yadav authored
Allow non-standard shapes for the arg ops; non-standard shapes include broadcast, slice and transpose.
-
- 26 Jan, 2022 1 commit
-
-
turneram authored
Add HardSwish to the HardSigmoid parser. The HardSwish formula is y = x * HardSigmoid<alpha=1/6, beta=0.5>(x). The HardSigmoid parser sets alpha to 1/6 and adds the mul instruction if the op name is HardSwish. Resolves #1062
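A scalar reference of the math (ONNX's HardSigmoid defaults of alpha=0.2, beta=0.5 are assumed here):

    #include <algorithm>

    // HardSigmoid: y = max(0, min(1, alpha * x + beta))
    float hard_sigmoid(float x, float alpha = 0.2f, float beta = 0.5f)
    {
        return std::max(0.0f, std::min(1.0f, alpha * x + beta));
    }

    // HardSwish: y = x * HardSigmoid<alpha=1/6, beta=0.5>(x)
    float hard_swish(float x)
    {
        return x * hard_sigmoid(x, 1.0f / 6.0f, 0.5f);
    }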
-
- 21 Jan, 2022 4 commits
-
-
turneram authored
Add onnx parser for operator GreaterOrEqual
-
turneram authored
Add onnx parser and unit tests for Softsign
-
turneram authored
* Add onnx parser and unit test
-
Paul Fultz II authored
* Improve handling of generator expressions when getting the flags for hip
-
- 17 Jan, 2022 1 commit
-
-
Paul Fultz II authored
Make clip a pointwise op
-
- 11 Jan, 2022 1 commit
-
-
turneram authored
Add HardSigmoid onnx parser and unit tests. Produces a mathematical equivalent of the ONNX operator through a combination of existing pointwise ops. Resolves #1028
-
- 10 Jan, 2022 1 commit
-
-
Paul Fultz II authored
* Add matcher for conv_bias pointwise
* Add fusion op
-
- 05 Jan, 2022 1 commit
-
-
turneram authored
Fix bug caused by casting time seed to float
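The log doesn't record the exact bug, but the general failure mode is easy to demonstrate: float's 24-bit mantissa cannot distinguish nearby epoch timestamps, so distinct seeds collapse to the same value. A small self-contained example (values assumed):

    #include <cstdio>

    int main()
    {
        long t0 = 1641340800; // an epoch timestamp in seconds
        long t1 = t0 + 60;    // one minute later
        // Near 1.6e9 the spacing between adjacent floats is 128, so
        // both timestamps round to the same float value.
        float f0 = static_cast<float>(t0);
        float f1 = static_cast<float>(t1);
        std::printf("%s\n", f0 == f1 ? "same seed" : "different seeds");
    }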
-
- 09 Dec, 2021 2 commits
-
-
Shucai Xiao authored
* Changed the number of threads in a block from 256 to 128
* Increased the max number of blocks in the kernel from 256 to 1M
* For the case where the axis is the last dimension, removed the index computation since it is not required
With these changes, we get about a 2x speedup over the develop branch for the softmax op used in the BertSquad model.
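A hedged sketch of the launch sizing described above (illustrative only, not the actual kernel code):

    #include <cstddef>

    constexpr std::size_t block_size = 128;       // threads per block (was 256)
    constexpr std::size_t max_blocks = 1u << 20;  // 1M block cap (was 256)

    // One thread per element, capped at the maximum grid size
    std::size_t compute_nblocks(std::size_t nelements)
    {
        std::size_t blocks = (nelements + block_size - 1) / block_size;
        return blocks < max_blocks ? blocks : max_blocks;
    }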
-
Paul Fultz II authored
Fuse the last instruction in fuse_pointwise. This also fixes a bug with using an invalid iterator.
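As a generic illustration of the invalid-iterator pitfall (not the MIGraphX code itself):

    #include <list>

    int main()
    {
        std::list<int> instructions{1, 2, 3};
        for(auto it = instructions.begin(); it != instructions.end();)
        {
            if(*it == 2)
                it = instructions.erase(it); // erase returns the next valid iterator
            else
                ++it; // advancing a just-erased iterator would be undefined behavior
        }
    }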
-
- 08 Dec, 2021 1 commit
-
-
Paul Fultz II authored
-
- 07 Dec, 2021 1 commit
-
-
Paul Fultz II authored
Simple variable rename
-
- 02 Dec, 2021 1 commit
-
-
Paul Fultz II authored
Fix pointwise compile error with half sqrt
-
- 30 Nov, 2021 2 commits
-
-
turneram authored
Fix whitespace bug in fusable_conv matcher and add unit test
-
Paul Fultz II authored
-
- 25 Nov, 2021 1 commit
-
-
Shucai Xiao authored
Resolves a problem in parsing the ssd-10 model. The problem is that, after inserting contiguous in the auto_contiguous pass, the standard output shape of some operators becomes non-standard; then, if the next operator requires a standard input shape, an exception is thrown. For example, take the following model: input (standard shape) -> transpose (transposed) -> softmax (transposed) -> transpose (standard) -> gather. It works fine, and no contiguous is required. In the auto_contiguous pass, a contiguous is inserted after the first transpose. Then we need to replace the first transpose with the contiguous and recompute all shapes. When it comes to the gather operator, its input now has a transposed shape, and an exception is thrown.
The solution is in the recompute_shape() function: if it is called from the auto_contiguous pass, the shape of an instruction has changed, and the new shape is non-standard, we do not recompute the shapes of its outputs. The reason is that, since the output shape is non-standard, a contiguous op will be added after the instruction, and that will recompute the shapes for later operators.
-
- 24 Nov, 2021 1 commit
-
-
Paul Fultz II authored
* Check jit kernel files with clang-tidy
-
- 22 Nov, 2021 1 commit
-
-
kahmed10 authored
Allows --fp16 to be used in the driver to compare the target fp16 result and the ref fp32 result.
-
- 18 Nov, 2021 1 commit
-
-
Paul Fultz II authored
Do compilation in parallel
-
- 17 Nov, 2021 1 commit
-
-
Paul Fultz II authored
Currently, eliminate_contiguous will never remove a contiguous for operators that use module inputs, because it doesn't pass the module inputs to compute_shape.
- Update to pass the module inputs correctly to compute_shape
- Fix the overloads of compute_shape so that when passed an empty vector of module inputs it calls the overload without module inputs (see the sketch below)
- Add tests with contiguous and a pointwise module function
- Move the add_pointwise function to a separate header to reuse across different tests
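A hedged sketch of the overload fix using toy types (the real MIGraphX signatures differ):

    #include <vector>

    struct shape { };
    struct module { };

    // Plain overload: no module inputs involved
    shape compute_shape(const std::vector<shape>& inputs) { return shape{}; }

    // Module-aware overload: falls back to the plain overload when the
    // vector of module inputs is empty, so callers can always pass it along.
    shape compute_shape(const std::vector<shape>& inputs,
                        const std::vector<module*>& mod_args)
    {
        if(mod_args.empty())
            return compute_shape(inputs);
        // ... module-aware shape computation would go here ...
        return shape{};
    }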
-
- 15 Nov, 2021 1 commit
-
-
kahmed10 authored
Currently we have the option of passing in --batch to the driver to change the batch size when the model has a dynamic dim value. We can use this flag to adjust the perf report's rate.
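Presumably the reported rate is throughput scaled by the batch size; a minimal sketch of that arithmetic (the formula is an assumption, not taken from the code):

    // If the report measures time per iteration, then each iteration
    // processes `batch` items, so throughput is batch / time.
    double rate_per_second(double batch, double time_ms)
    {
        return batch * 1000.0 / time_ms; // items per second
    }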
-
- 11 Nov, 2021 1 commit
-
-
Paul Fultz II authored
This enables the pointwise fusions using the MIGRAPHX_ENABLE_POINTWISE_FUSION env variable. It's disabled by default since the MIOpen fusions need to be refactored. This also adds a compile_ops pass to compile the pointwise modules. All tests except test_gpu_fast_math pass with MIGRAPHX_ENABLE_POINTWISE_FUSION=1 set.
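A plain-C++ illustration of such an environment-variable gate (MIGraphX has its own env helpers; this is only a sketch):

    #include <cstdlib>
    #include <cstring>

    // True when MIGRAPHX_ENABLE_POINTWISE_FUSION=1 is set in the environment
    bool pointwise_fusion_enabled()
    {
        const char* v = std::getenv("MIGRAPHX_ENABLE_POINTWISE_FUSION");
        return v != nullptr and std::strcmp(v, "1") == 0;
    }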
-