- 22 Mar, 2022 1 commit
-
-
Paul Fultz II authored
For operators using the arg.reshape() method, the lifetime of the argument will be extended.
-
- 18 Mar, 2022 2 commits
-
-
turneram authored
Add exclusive and reverse modes to gpu implementation of prefix_scan_sum, which completes support for ONNX op CumSum
-
Paul Fultz II authored
The get_context may change in the future (when we support multi-targets), so make it experimental for now.
-
- 14 Mar, 2022 1 commit
-
-
Paul Fultz II authored
* Show the operator fields in the driver
-
- 11 Mar, 2022 1 commit
-
-
Shucai Xiao authored
The module::debug_print(ins) is very slow, which makes trace_eval=1/2 very slow. The reason is that printing an instruction involves searching the whole module to locate the instruction and then printing it. This change fixes that by calling module::print() once to get the names of all instructions in a program, then printing an instruction by looking up its name in a hash map.
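A minimal sketch of the hash-map idea above, using illustrative stand-in types rather than the actual MIGraphX module API:
```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

struct instruction
{
    std::string op;
};

int main()
{
    std::vector<instruction> module_instructions = {{"add"}, {"mul"}, {"relu"}};

    // One pass over the "module" assigns every instruction a printable name,
    // mirroring what a single module::print() call provides.
    std::unordered_map<const instruction*, std::string> names;
    for(std::size_t i = 0; i < module_instructions.size(); ++i)
        names[&module_instructions[i]] = "@" + std::to_string(i) + " = " + module_instructions[i].op;

    // Printing any instruction later is an O(1) lookup instead of a module-wide search.
    const instruction* ins = &module_instructions[1];
    std::cout << names.at(ins) << std::endl; // prints: @1 = mul
}
```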
-
- 04 Mar, 2022 1 commit
-
-
bpickrel authored
Changed the pooling values for two structures from strings to specialized enum classes. Many test and operator parsing changes to support this. Introduces one new source file, op_enums.cpp.
-
- 03 Mar, 2022 1 commit
-
-
turneram authored
Add onnx parser and ref and gpu implementations of ONNX op ScatterND
-
- 02 Mar, 2022 2 commits
-
-
Charlie Lin authored
Implements the IsNaN operator, ref, gpu, and onnx parser.
-
bpickrel authored
Update the base version of clang-format from 5.0 to 10.0
-
- 25 Feb, 2022 2 commits
-
-
Paul Fultz II authored
Add with_type to shape class
-
Paul Fultz II authored
Wrapped in an any_ptr class so the type can be checked at runtime for a mismatch.
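A minimal sketch of what a runtime-checked pointer wrapper can look like (assumed semantics; not the actual MIGraphX any_ptr implementation):
```cpp
#include <cassert>
#include <stdexcept>
#include <typeindex>
#include <typeinfo>

class any_ptr
{
    void* ptr            = nullptr;
    std::type_index type = typeid(void);

    public:
    any_ptr() = default;
    template <class T>
    any_ptr(T* p) : ptr(p), type(typeid(T))
    {
    }

    // Retrieve the pointer only if T matches the stored type; otherwise fail loudly.
    template <class T>
    T* get() const
    {
        if(type != std::type_index(typeid(T)))
            throw std::runtime_error("any_ptr: type mismatch");
        return static_cast<T*>(ptr);
    }
};

int main()
{
    int x = 42;
    any_ptr p(&x);
    assert(*p.get<int>() == 42); // correct type: OK
    // p.get<float>();           // would throw: runtime type mismatch
}
```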
-
- 23 Feb, 2022 1 commit
-
-
Shucai Xiao authored
This PR resolves two problems from issue #999, i.e., non-standard shape inputs to reshape and reduce_mean. Three fixes: (1) any operator that has a standard-shape requirement gets a contiguous added for its input; (2) in eliminate_contiguous, when computing whether a contiguous can be removed, we use all the updated args, not just the one being checked; (3) in two optimizations in simplify_reshape, we remove contiguous from the reshaper name list, since eliminate_contiguous will remove a contiguous if it can be removed. The overall solution is to add an attribute to operators that require a standard input shape, and then, in the auto_contiguous pass, add a contiguous to every input of such operators.
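An illustrative sketch of the attribute-driven auto_contiguous idea, with stand-in types rather than the real MIGraphX pass:
```cpp
#include <iostream>
#include <string>
#include <vector>

struct node
{
    std::string op;
    bool requires_std_shape; // the new attribute described above
    std::vector<std::string> inputs;
};

// Returns the rewritten input list: every input of a shape-sensitive op is routed
// through a contiguous first; a later eliminate_contiguous pass removes it when it
// turns out to be a no-op.
std::vector<std::string> auto_contiguous_inputs(const node& n)
{
    if(!n.requires_std_shape)
        return n.inputs;
    std::vector<std::string> rewritten;
    for(const auto& in : n.inputs)
        rewritten.push_back("contiguous(" + in + ")");
    return rewritten;
}

int main()
{
    node reshape{"reshape", true, {"transpose_out"}};
    for(const auto& in : auto_contiguous_inputs(reshape))
        std::cout << reshape.op << " input: " << in << "\n"; // reshape input: contiguous(transpose_out)
}
```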
-
- 16 Feb, 2022 1 commit
-
-
Umang Yadav authored
Support nonstandard shapes like slice, broadcast and transpose for the unsqueeze op
-
- 09 Feb, 2022 1 commit
-
-
Umang Yadav authored
Support slice, broadcast and transpose shapes for the squeeze op.
-
- 08 Feb, 2022 1 commit
-
-
Paul Fultz II authored
Enforce types to avoid compilation errors in pointwise fusions. This fixes a compile failure with gpt-2 fp16 on Navi.
-
- 02 Feb, 2022 1 commit
-
-
Paul Fultz II authored
Currently, MIGRAPHX_TRACE_EVAL=2 prints out the entire output buffer, but this can produce a lot of output. To make it easier to inspect and debug, MIGRAPHX_TRACE_EVAL=2 now only prints 10 elements from the buffer (the first 5 and the last 5) and shows any fp classifications found in the buffer (i.e. nans, infinity, etc). The previous behavior can still be enabled with MIGRAPHX_TRACE_EVAL=3.
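A minimal sketch of the abbreviated trace output described above (illustrative only, not the actual MIGRAPHX_TRACE_EVAL code):
```cpp
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Print only the first and last five elements of a buffer and report any
// non-finite values found in it.
void print_buffer_summary(const std::vector<float>& buf)
{
    const std::size_t edge = 5;
    std::cout << "{ ";
    for(std::size_t i = 0; i < buf.size(); i++)
    {
        if(i == edge && buf.size() > 2 * edge)
        {
            std::cout << "..., ";
            i = buf.size() - edge; // skip the middle of the buffer
        }
        std::cout << buf[i] << (i + 1 < buf.size() ? ", " : " ");
    }
    std::cout << "}\n";

    std::size_t nans = 0, infs = 0;
    for(float x : buf)
    {
        if(std::isnan(x))
            nans++;
        if(std::isinf(x))
            infs++;
    }
    if(nans > 0 || infs > 0)
        std::cout << "warning: " << nans << " nan(s), " << infs << " inf(s) in buffer\n";
}

int main()
{
    std::vector<float> buf(20, 1.0f);
    buf[7] = NAN;
    print_buffer_summary(buf);
}
```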
-
- 28 Jan, 2022 1 commit
-
-
Paul Fultz II authored
* Enable auto vectorization
* Handle vector types with convert function
* Don't vectorize when it will cause problems with preload
-
- 27 Jan, 2022 1 commit
-
-
Umang Yadav authored
Allow non-standard shapes for the arg ops; non-standard shapes include broadcast, slice, and transpose.
-
- 17 Jan, 2022 1 commit
-
-
Paul Fultz II authored
Make clip a pointwise op
-
- 08 Dec, 2021 1 commit
-
-
Paul Fultz II authored
-
- 25 Nov, 2021 1 commit
-
-
Shucai Xiao authored
Resolves a problem in parsing the ssd-10 model. The problem is that, after inserting contiguous in the auto_contiguous pass, the standard output shape of some operators becomes non-standard. Then, if the next operator requires a standard input shape, an exception is thrown. For example, consider the following model: Input (standard shape) -> transpose (transposed) -> softmax (transposed) -> transpose (standard) -> gather. It works fine, and no contiguous is required. In the auto_contiguous pass, a contiguous is inserted after the first transpose. Then we need to replace the first transpose with the contiguous and recompute all shapes. When it comes to the gather operator, its input is a transposed shape, and an exception is thrown. The solution is in the recompute_shape() function: if it is called by the auto_contiguous pass and the shape of an instruction has changed and is non-standard, we do not recompute the shapes of its outputs. The reason is that, since its output shape is non-standard, a contiguous op will be added after the instruction, which will recompute shapes for later operators.
-
- 22 Nov, 2021 1 commit
-
-
kahmed10 authored
Allows --fp16 to be used in the driver to compare the target fp16 result and the ref fp32 result.
-
- 17 Nov, 2021 1 commit
-
-
Paul Fultz II authored
Currently, eliminate_contiguous will never remove contiguous for operators that use module inputs, because it doesn't pass the module inputs to compute_shape.
- Update to pass the module inputs correctly to compute_shape
- Fix the overloads of compute_shape so that when passed an empty vector of module inputs it will call the overload without module inputs
- Add tests with contiguous and a pointwise module function
- Move the add_pointwise function to a separate header to reuse across different tests
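A sketch of the overload fix above, using stand-in types and names rather than the actual MIGraphX signatures:
```cpp
#include <cassert>
#include <string>
#include <vector>

struct shape { std::string desc; };
struct module_ref { std::string name; };

// Overload without module inputs.
shape compute_shape(const std::vector<shape>& inputs)
{
    return shape{"from " + std::to_string(inputs.size()) + " inputs"};
}

// Overload with module inputs: when the vector is empty, forward to the
// module-free overload instead of treating it as a different case.
shape compute_shape(const std::vector<shape>& inputs, const std::vector<module_ref>& mods)
{
    if(mods.empty())
        return compute_shape(inputs);
    return shape{"from " + std::to_string(inputs.size()) + " inputs and " +
                 std::to_string(mods.size()) + " modules"};
}

int main()
{
    std::vector<shape> ins = {shape{"a"}, shape{"b"}};
    assert(compute_shape(ins, {}).desc == "from 2 inputs");
}
```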
-
- 15 Nov, 2021 1 commit
-
-
kahmed10 authored
Currently we have the option of passing in --batch to the driver to change the batch size when the model has a dynamic dim value. We can use this flag to adjust the perf report's rate.
-
- 11 Nov, 2021 1 commit
-
-
Paul Fultz II authored
This enables the pointwise fusions using the MIGRAPHX_ENABLE_POINTWISE_FUSION env variable. It's disabled by default since MIOpen fusions need to be refactored. This also adds a compile_ops pass to compile the pointwise modules. All tests except test_gpu_fast_math pass with MIGRAPHX_ENABLE_POINTWISE_FUSION=1 set.
-
- 05 Nov, 2021 1 commit
-
-
kahmed10 authored
Move our Docker file from ROCm 4.3 to 4.5. Add Navi-based GPUs into the CI infrastructure.
-
- 28 Oct, 2021 1 commit
-
-
Shucai Xiao authored
This PR is the ref implementation of the nonmaxsuppression operator. It always returns the max possible output shape, which is the problem tracked in issue #948.
-
- 20 Oct, 2021 1 commit
-
-
Shucai Xiao authored
Implementation of the roialign operator. For now, we have only the ref implementation. When we run a model on the GPU, we fall back to the ref implementation for this operator.
-
- 19 Oct, 2021 1 commit
-
-
Paul Fultz II authored
Adds a pass to fuse pointwise operators into one "pointwise" op that has a submodule which does the calculation.
-
- 18 Oct, 2021 1 commit
-
-
Paul Fultz II authored
Enable a cppcheck rule to catch these redundant casts in the future
-
- 15 Oct, 2021 1 commit
-
-
Cagri authored
Added features: this enables wrapping each migraphx operator with rocTX markers. It adds a new trace knob to the migraphx-driver binary. Limitation: rocTX standalone does not output a file; it needs to be used with rocprof. Example command line: /opt/rocm/bin/rocprof -i ./in.txt --hip-trace --roctx-trace --flush-rate 10ms --timestamp on -d cagri_out --obj-tracking on /opt/rocm/bin/migraphx-driver trace ./resnet50-v2-7.onnx --onnx --gpu
Co-authored-by: Shucai Xiao <shucai@gmail.com>
-
- 08 Oct, 2021 2 commits
-
-
Shucai Xiao authored
This PR is for the nonzero operator with static output shape.
Co-authored-by: Paul Fultz II <pfultz2@yahoo.com>
Co-authored-by: mvermeulen <5479696+mvermeulen@users.noreply.github.com>
-
Umang Yadav authored
Previously the dot operator was defined as C = alpha * A . B + beta * C, where * is scalar multiplication and . is dot product or matrix multiplication depending on the dimensions of the inputs. The aim is to define the dot operator as C = A . B, without alpha or beta. To achieve the same effect as alpha and beta, (1) it multiplies one of the inputs to the dot operator by the alpha value, and (2) if beta is present, it multiplies C by beta and then adds that to the output of step 1.
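A worked sketch of the alpha/beta removal, with plain scalars standing in for matrices (not MIGraphX code):
```cpp
// Old semantics:  C = alpha * (A . B) + beta * C
// New semantics:  C' = (alpha * A) . B,  then  C = C' + beta * C
#include <cassert>

int main()
{
    double A = 3.0, B = 4.0, C = 2.0;
    double alpha = 0.5, beta = 1.5;

    double old_result = alpha * (A * B) + beta * C; // old dot with alpha/beta baked in

    double scaled_A   = alpha * A;            // (1) multiply one input by alpha
    double new_dot    = scaled_A * B;         // plain dot: A . B
    double new_result = new_dot + beta * C;   // (2) add beta * C afterwards

    assert(old_result == new_result); // both evaluate to 9.0 here
}
```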
-
- 01 Oct, 2021 1 commit
-
-
turneram authored
Add multinomial op to onnx parser with ref and GPU implementations. The onnx parser inserts a literal of shape {batch_size, sample_size} with random values in the range [0, 1) and inserts existing ops to compute the cumulative density function. The multinomial operator multiplies the random values by the sum of the CDF and returns the index of the first element of the CDF that is greater than the result, representing samples randomly drawn from [0, class_size) that follow the log-probability distribution. Resolves #821
Co-authored-by: Shucai Xiao <shucai@gmail.com>
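A minimal sketch of the sampling scheme described above (illustrative only; plain per-class weights stand in for the log-probability input):
```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

int main()
{
    std::vector<double> weights = {0.1, 0.6, 0.3}; // per-class weights

    // Cumulative sum of the weights (the CDF).
    std::vector<double> cdf(weights.size());
    std::partial_sum(weights.begin(), weights.end(), cdf.begin());

    std::mt19937 gen(0);
    std::uniform_real_distribution<double> dist(0.0, 1.0); // random values in [0, 1)

    for(int s = 0; s < 5; s++)
    {
        double r = dist(gen) * cdf.back(); // scale the random value by the CDF total
        // The sampled class is the index of the first CDF element greater than r.
        auto it = std::upper_bound(cdf.begin(), cdf.end(), r);
        std::cout << "sample " << s << ": class " << (it - cdf.begin()) << "\n";
    }
}
```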
-
- 21 Sep, 2021 1 commit
-
-
Paul Fultz II authored
Needed to bypass passes when fusing pointwise operators into a module.
-
- 17 Sep, 2021 2 commits
-
-
Paul Fultz II authored
This reverts commit 9e43cb8b.
-
Umang Yadav authored
This PR aims to remove the alpha and beta attributes from the dot operator completely. Previously the dot operator was defined as C = alpha * A . B + beta * C, where * is scalar multiplication and . is dot product or matrix multiplication depending on the dimensions of the inputs. The aim is to define the dot operator as C = A . B, without alpha or beta. To achieve the same effect as alpha and beta, (1) it multiplies one of the inputs to the dot operator by the alpha value, and (2) if beta is present, it multiplies C by beta and then adds that to the output of step 1.
-
- 16 Sep, 2021 1 commit
-
-
Shucai Xiao authored
Add Loop operator for opset version 13. Notes: 1) The default max iteration number is 10 if no max iteration number is provided. 2) To change the max iteration number, a user can set max_loop_iterations in the onnx_options struct when parsing a model. 3) The returned shape of the scan output is based on max_loop_iterations even if the actual loop count is less than that. This issue also applies to other operators like NonZero and NonMaxSuppression. Issue #948 was created to track this and resolve it later.
Co-authored-by: Paul <pfultz2@yahoo.com>
Co-authored-by: mvermeulen <5479696+mvermeulen@users.noreply.github.com>
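A hedged usage sketch based on note 2) above; the header, struct, and field names are taken from the commit message and assumed rather than verified:
```cpp
#include <migraphx/onnx.hpp>

int main()
{
    migraphx::onnx_options options;
    options.max_loop_iterations = 20; // default is 10 when not set, per note 1)
    auto prog = migraphx::parse_onnx("model_with_loop.onnx", options);
    (void)prog;
    return 0;
}
```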
-
- 07 Sep, 2021 2 commits
-
-
Shucai Xiao authored
Add operators, refactor parsers, add rewrite passes, add tests
Add ref implementations
Move broadcasting of scales and zero points to onnx parser
Allow for x and zero_point to have different types in quantizelinear; fix zero_point default type
fp16 and fp8 quantization to include subgraph and parameters
fix unit test to use qdq operators for int8 quantization
Co-authored-by: turneram <alturner@amd.com>
-
Paul Fultz II authored
* Add module pass manager
-