- 13 Nov, 2022 1 commit
-
-
Charlie Lin authored
Updated the Multibroadcast op to have a two-input version for dynamic shapes.
Current dynamic shape broadcasting logic: the dynamic_dimensions must be the same, or one of them is {1, 1, 0} or {1, 1, 1}. Works for dyn-dyn, dyn-static, and static-static shape combinations.
Changed common.cpp multibroadcasting for binary ops with dynamic shapes.
Extended binary.hpp for dynamic shapes to test the new common.cpp logic.
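A minimal sketch of that compatibility rule, assuming a {min, max, optimal} layout for dynamic dimensions; the dyn_dim type and helper names below are illustrative, not MIGraphX's actual definitions:

```cpp
#include <cstddef>

// Illustrative stand-in for a dynamic dimension: {min, max, optimal}.
// A fixed size-1 dimension is {1, 1, 0} or {1, 1, 1}.
struct dyn_dim
{
    std::size_t min;
    std::size_t max;
    std::size_t opt;

    bool operator==(const dyn_dim& other) const
    {
        return min == other.min and max == other.max and opt == other.opt;
    }

    bool is_fixed_one() const { return min == 1 and max == 1 and opt <= 1; }
};

// Two dynamic dimensions can be broadcast together when they are the same
// or when one of them is a fixed size-1 dimension.
inline bool broadcastable(const dyn_dim& a, const dyn_dim& b)
{
    return a == b or a.is_fixed_one() or b.is_fixed_one();
}
```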
-
- 07 Nov, 2022 2 commits
-
-
arvindcheru authored
-
Umang Yadav authored
* free up more github runner space
* upgrade versions
-
- 06 Nov, 2022 1 commit
-
-
Umang Yadav authored
-
- 02 Nov, 2022 3 commits
-
-
Paul Fultz II authored
Can be enabled via environment variable MIGRAPHX_ENABLE_NHWC
-
Paul Fultz II authored
-
Ted Themistokleous authored
Allows a model to be converted to the same opset while turning on infer_shapes through ONNX. This lets us get an idea of what the output of each node in a network should be.
Use case: python3 tools/convert_onnx_version.py --model <model_name> --opset=<same_as_model> --infer_shapes --output <new_model_name>
-
- 01 Nov, 2022 2 commits
-
-
Ted Themistokleous authored
Newer versions of Split move the split attribute to an input. In that case we check the number of input arguments instead.
-
Torsten Keßler authored
-
- 31 Oct, 2022 1 commit
-
-
kahmed10 authored
-
- 28 Oct, 2022 1 commit
-
-
Umang Yadav authored
Local threads in multiples of 32 were introduced in #1348, but local thread counts that are not a multiple of 64 are causing correctness issues.
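A hedged sketch of the kind of adjustment this implies; the helper below is illustrative, not the actual MIGraphX code:

```cpp
#include <cstddef>

// Round a requested local-thread count up to the nearest multiple of 64
// (the wavefront size on these GPUs), so a launch never uses a local size
// that is only a multiple of 32.
constexpr std::size_t round_up_to_multiple_of_64(std::size_t local_threads)
{
    constexpr std::size_t wavefront = 64;
    return ((local_threads + wavefront - 1) / wavefront) * wavefront;
}

static_assert(round_up_to_multiple_of_64(96) == 128, "96 rounds up to 128");
static_assert(round_up_to_multiple_of_64(64) == 64, "already a multiple of 64");
```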
-
- 27 Oct, 2022 2 commits
-
-
Chris Austen authored
Upgraded Dockerfiles and fixed tidy issues to make Ubuntu 20.04 and ROCm 5.3.0 the default
-
kahmed10 authored
Updated GPU pad to use the JIT version. Added range functions for JIT kernels.
-
- 26 Oct, 2022 2 commits
-
-
Brian Pickrell authored
Fixes an observed regression error on certain Frozen Protobuf models due to PR 1280
-
kahmed10 authored
use_dynamic_same_auto_pad was removed from convolution, but the driver models still retain the fields. This PR regenerates the files so that they are compatible again.
-
- 25 Oct, 2022 1 commit
-
-
Chris Austen authored
-
- 24 Oct, 2022 1 commit
-
-
jungpark-mlir authored
Reinstate the assertion on the standard shape, but relax it for multibroadcast ops that are deliberately inserted to make the broadcast explicit.
-
- 21 Oct, 2022 1 commit
-
-
Umang Yadav authored
-
- 19 Oct, 2022 2 commits
-
-
Charlie Lin authored
Refactor dynamic compute
- Add a compute_output_shape object that implicitly converts to a new dyn_output or shape object
- The dyn_output object can handle computing the static output shape of an operator given the input argument shapes
- Change an operator's compute function to argument compute(const dyn_output& dyn_out, std::vector<argument> args) to use the dyn_output object
Dynamic ref unary functions
- Included these changes to have an example of the refactored dynamic compute being used
- Changes to the unary base class to handle dynamic shapes
- Changed elu and leaky_relu to use the unary base class and pointwise JIT
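A minimal sketch of what a compute under the new signature might look like; the stub shape/argument types and the computed_shape member name are assumptions for illustration, not the real MIGraphX definitions:

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <numeric>
#include <vector>

// --- Illustrative stubs; not the real MIGraphX types ----------------------
struct shape
{
    std::vector<std::size_t> lens;
    std::size_t elements() const
    {
        return std::accumulate(lens.begin(), lens.end(), std::size_t{1},
                               std::multiplies<>{});
    }
};

struct argument
{
    shape s;
    std::vector<float> data;
};

// Stand-in for the new dyn_output object: it carries the static output shape
// computed from the actual input argument shapes at run time.
struct dyn_output
{
    shape computed_shape;
};

// --- Sketch of a unary operator's compute under the refactored signature ---
struct relu_op
{
    // Signature taken from the commit message:
    //   argument compute(const dyn_output& dyn_out, std::vector<argument> args)
    argument compute(const dyn_output& dyn_out, std::vector<argument> args) const
    {
        // Use the already-resolved static output shape instead of re-deriving
        // it from a (possibly dynamic) operator shape.
        argument result{dyn_out.computed_shape, {}};
        result.data.resize(dyn_out.computed_shape.elements());
        std::transform(args.front().data.begin(), args.front().data.end(),
                       result.data.begin(),
                       [](float x) { return std::max(x, 0.0f); });
        return result;
    }
};
```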
-
Umang Yadav authored
* use find2.0 for the convolution
Co-authored-by: Vasilii Filippov <DrizztDoUrden@users.noreply.github.com>
Co-authored-by: Chris Austen <causten@users.noreply.github.com>
-
- 18 Oct, 2022 1 commit
-
-
Paul Fultz II authored
* Enable non-standard shape
* Use perfdb for non-xdlops
* Fix transpose+broadcast strides
Co-authored-by: jungpark-mlir <jungwook.park@amd.com>
-
- 17 Oct, 2022 1 commit
-
-
Umang Yadav authored
hipMemset was causing random failures; hipMemsetAsync does the correct synchronization.
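For context, a minimal HIP sketch of the difference (error handling omitted; the surrounding buffer and stream are assumed to exist elsewhere in the runtime):

```cpp
#include <hip/hip_runtime.h>
#include <cstddef>

// Enqueue the memset on the same stream as the rest of the work so it is
// ordered with that work, instead of using the blocking hipMemset on the
// default stream.
void zero_buffer_on_stream(void* device_ptr, std::size_t bytes, hipStream_t stream)
{
    // Before: hipMemset(device_ptr, 0, bytes);  // not ordered with `stream`
    // After: the async memset is queued on `stream`, so later kernels on the
    // same stream only see the buffer after it has been zeroed.
    (void)hipMemsetAsync(device_ptr, 0, bytes, stream);
}
```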
-
- 14 Oct, 2022 1 commit
-
-
Charlie Lin authored
Allows rank-2 tensors into batchnorm, specifically when the spatial dimensions are all 1 and have been removed.
-
- 13 Oct, 2022 2 commits
-
-
Charlie Lin authored
Removes use_dynamic_same_auto_pad
- Change padding_mode to be used for dynamic padding
- Move compute_padded_shape to pad_calc.cpp as it will be used in other dynamic padding cases
- Fix same_lower compute_padded_shape bug and add a test
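For reference, a sketch of the standard SAME padding arithmetic that compute_padded_shape deals with; this is the textbook formula (same_lower puts the larger half of an odd total before the data), not the exact MIGraphX implementation:

```cpp
#include <cstddef>
#include <utility>

// SAME padding for one spatial dimension. Returns {pad_before, pad_after}.
std::pair<std::size_t, std::size_t> same_padding_1d(std::size_t input,
                                                    std::size_t kernel,
                                                    std::size_t stride,
                                                    std::size_t dilation,
                                                    bool same_lower)
{
    const std::size_t effective_kernel = dilation * (kernel - 1) + 1;
    const std::size_t output = (input + stride - 1) / stride; // ceil(input / stride)
    const std::size_t needed = (output - 1) * stride + effective_kernel;
    const std::size_t total  = needed > input ? needed - input : 0;

    // same_upper puts the extra element of an odd total after the data;
    // same_lower (the case fixed here) puts it before.
    const std::size_t small = total / 2;
    const std::size_t large = total - small;
    return same_lower ? std::make_pair(large, small) : std::make_pair(small, large);
}
```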
-
Charlie Lin authored
Rewrites the TF batch-norm-like operators into other MIGX operators. Removes the code related to batch_norm_inference.
-
- 07 Oct, 2022 1 commit
-
-
Ted Themistokleous authored
Simplified the algebraic identity operations x*1, x*(-1), x/1, 0+x and x+0, x-0, 0-x, 0*x, x*0, and 0/x.
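A standalone sketch of what these identity rules reduce to; MIGraphX applies them through its matcher/rewrite infrastructure, so the toy operand type and string results below are purely illustrative:

```cpp
#include <optional>
#include <string>

// Toy operand: either a known constant or an opaque variable.
struct operand
{
    std::optional<double> constant; // nullopt means "some variable"
};

// Describe what "a <op> b" simplifies to under the identities listed above.
std::string simplify(char op, const operand& a, const operand& b)
{
    auto is = [](const operand& o, double v) { return o.constant && *o.constant == v; };

    if(op == '*' && (is(a, 0) || is(b, 0))) return "0";  // 0*x, x*0
    if(op == '*' && is(b, 1)) return "a";                // x*1
    if(op == '*' && is(b, -1)) return "neg(a)";          // x*(-1)
    if(op == '/' && is(b, 1)) return "a";                // x/1
    if(op == '/' && is(a, 0)) return "0";                // 0/x
    if(op == '+' && is(a, 0)) return "b";                // 0+x
    if(op == '+' && is(b, 0)) return "a";                // x+0
    if(op == '-' && is(b, 0)) return "a";                // x-0
    if(op == '-' && is(a, 0)) return "neg(b)";           // 0-x
    return "no simplification";
}
```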
-
- 04 Oct, 2022 2 commits
-
-
Ted Themistokleous authored
Stream sync changes and associated API level changes
-
Paul Fultz II authored
optimize the softmax operator
-
- 03 Oct, 2022 1 commit
-
-
Umang Yadav authored
Adds two methods to the custom_ops virtual class:
- bool runs_on_offload_target(): if the custom op runs directly on the GPU, this should be set to true; in that case the custom op expects its parameters to reside in GPU memory and writes its output to GPU memory. If it is set to false, the custom op expects its parameters to reside on the host and puts the result back into host memory.
- output_alias: indicates whether the output of the custom op aliases an input buffer, i.e. interprets the same input buffer with a different shape and strides.
Also updates as_vector() in the C++ API to handle non-standard shapes, which required exposing the element_index to space_index conversion method of the shape class.
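A rough standalone sketch of the semantics of these two hooks; the class name, signatures, and the -1 "no alias" convention here are illustrative assumptions, not the published MIGraphX custom-op API:

```cpp
#include <cstddef>

// Illustrative interface only; the real MIGraphX base class may differ.
struct custom_op_example
{
    // True: the op runs directly on the GPU, expects its parameters in GPU
    // memory, and writes its output to GPU memory.
    // False: the op expects host-resident parameters and writes its result
    // back into host memory.
    virtual bool runs_on_offload_target() const { return true; }

    // If the output aliases one of the input buffers (i.e. reinterprets the
    // same buffer with a different shape and strides), return that input's
    // index; return -1 when the output is a fresh buffer. (Assumed convention.)
    virtual std::ptrdiff_t output_alias(std::size_t /*num_inputs*/) const { return -1; }

    virtual ~custom_op_example() = default;
};
```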
-
- 29 Sep, 2022 2 commits
-
-
Umang Yadav authored
Improvements/additions still to be made:
- changes for quant_convolution
- changes for deconvolution
- macros for MIOpen status checks
-
Paul Fultz II authored
* Fix invalid program from find_splits
-
- 28 Sep, 2022 1 commit
-
-
Umang Yadav authored
test_gpu_pack_int8_args fails on gfx908 machines because it doesn't set the compute_fp32 flag correctly. This PR fixes the test to check the device name and rocBLAS version and set the flag accordingly.
-
- 27 Sep, 2022 1 commit
-
-
Ted Themistokleous authored
Implements the operator with both CPU and GPU implementations.
-
- 26 Sep, 2022 3 commits
-
-
Charlie Lin authored
Rewrites the BatchNormalization ONNX operator into other MIGX operators
- Added handling of the 1D input tensor case (an edge case in the ONNX spec)
- Removes the spatial and per_activation functionality (not in the ONNX spec)
- Did not remove the batch_norm_inference related code, as the TensorFlow parser still uses it; that code can be removed when the TF version is updated
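For reference, the inference-time computation being rewritten decomposes into existing elementwise ops; a plain per-element reference in C++ (broadcasting of the per-channel parameters is omitted here):

```cpp
#include <cmath>

// Inference-time BatchNormalization for a single element: this is what the
// rewrite expresses with existing elementwise MIGX ops (sub, mul, div, add),
// with the per-channel parameters broadcast over the input.
inline float batch_norm_inference(float x, float scale, float bias,
                                  float mean, float variance, float epsilon)
{
    return scale * (x - mean) / std::sqrt(variance + epsilon) + bias;
}
```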
-
Paul Fultz II authored
-
Paul Fultz II authored
Upgrade cppcheck to 2.9
-
- 24 Sep, 2022 2 commits
-
-
Chris Austen authored
The workflow has concurrency reintroduced with a different set of rules. The new expected behavior is to check concurrency at the PR level, with one running and one pending performance test. In the case of multiple commits in the same PR, the latest commit is always queued after the already-initiated performance test execution completes. Any other PRs/commits remain in a pending/queued state.
-
Chris Austen authored
Codecov announced the deprecation of the bash uploader; switched to the updated uploader.
-
- 23 Sep, 2022 1 commit
-
-
Paul Fultz II authored
* Remove device functions
* Update tests
-
- 21 Sep, 2022 1 commit
-
-
kahmed10 authored
This PR allows other values of epsilon to be matched when finding layernorm. The calculation now uses that matched epsilon variable as well.
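For context, epsilon enters the layernorm computation as shown below; this plain reference function is a sketch of the math only, not the fused GPU kernel or the matcher:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Reference layernorm over one row, with epsilon passed through as a
// variable captured from the model rather than a single assumed value.
std::vector<float> layernorm_row(const std::vector<float>& x, float epsilon)
{
    const std::size_t n = x.size();

    float mean = 0.0f;
    for(float v : x) mean += v;
    mean /= static_cast<float>(n);

    float variance = 0.0f;
    for(float v : x) variance += (v - mean) * (v - mean);
    variance /= static_cast<float>(n);

    std::vector<float> y(n);
    for(std::size_t i = 0; i < n; ++i)
        y[i] = (x[i] - mean) / std::sqrt(variance + epsilon);
    return y;
}
```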
-