- 07 Nov, 2022 3 commits
-
umangyadav authored
-
Umang Yadav authored
-
Umang Yadav authored
* Free up more GitHub runner space
* Upgrade versions
-
- 06 Nov, 2022 1 commit
-
Umang Yadav authored
-
- 05 Nov, 2022 5 commits
-
umangyadav authored
-
umangyadav authored
-
Umang Yadav authored
-
Umang Yadav authored
Co-authored-by: kahmed10 <15948690+kahmed10@users.noreply.github.com>
-
Umang Yadav authored
Co-authored-by: kahmed10 <15948690+kahmed10@users.noreply.github.com>
-
- 02 Nov, 2022 3 commits
-
Paul Fultz II authored
Can be enabled via the environment variable MIGRAPHX_ENABLE_NHWC.
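A minimal sketch of reading such a switch; the helper name and the enabled-value convention are assumptions, only the variable name MIGRAPHX_ENABLE_NHWC comes from the commit:

    #include <cstdlib>
    #include <cstring>

    // Hypothetical helper, not MIGraphX's actual env-var machinery:
    // treat the variable as enabled for any non-empty value other than "0".
    bool nhwc_enabled()
    {
        const char* v = std::getenv("MIGRAPHX_ENABLE_NHWC");
        return v != nullptr && *v != '\0' && std::strcmp(v, "0") != 0;
    }

Set it at launch time, e.g. MIGRAPHX_ENABLE_NHWC=1 before the command.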
-
Paul Fultz II authored
-
Ted Themistokleous authored
Allows a model to be converted to the same opset while turning on infer_shapes through ONNX. This gives an idea of what the output of each node in a network should be. Use case: python3 tools/convert_onnx_version.py --model <model_name> --opset=<same_as_model> --infer_shapes --output <new_model_name>
-
- 01 Nov, 2022 2 commits
-
Ted Themistokleous authored
Newer Split opsets move the split attribute to an input; in that case we check the number of input arguments instead.
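For context, ONNX opset 13 moved Split's sizes from the split attribute to an optional second input. A sketch of the resulting parser decision, with hypothetical type and function names (the real MIGraphX parser differs):

    #include <cstddef>

    // Hypothetical stand-in for parser state.
    struct split_node
    {
        bool has_split_attribute; // pre-opset-13: sizes in the "split" attribute
        std::size_t num_inputs;   // opset-13+: sizes as an optional second input
    };

    enum class split_source { attribute, second_input, even_split };

    split_source find_split_sizes(const split_node& node)
    {
        if(node.has_split_attribute)
            return split_source::attribute;
        if(node.num_inputs == 2)
            return split_source::second_input;
        return split_source::even_split; // no sizes given: split evenly on the axis
    }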
-
Torsten Keßler authored
-
- 31 Oct, 2022 12 commits
-
umangyadav authored
-
umangyadav authored
-
umangyadav authored
-
umangyadav authored
-
umangyadav authored
-
umangyadav authored
-
umangyadav authored
-
umangyadav authored
-
umangyadav authored
-
umangyadav authored
-
umangyadav authored
(cherry picked from commit 46898ea4e6fde58b279778a557b44441827512ab)
-
kahmed10 authored
-
- 28 Oct, 2022 1 commit
-
Umang Yadav authored
Local threads in multiples of 32 were introduced in #1348, but local thread counts that are not a multiple of 64 are causing correctness issues.
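A minimal sketch of the likely fix, assuming it amounts to rounding local thread counts up to the 64-wide wavefront of the targeted AMD GPUs:

    #include <cstddef>

    constexpr std::size_t wavefront_size = 64; // AMD GCN/CDNA wavefront width

    // Round a local thread count up to a whole number of wavefronts.
    constexpr std::size_t round_up_local_threads(std::size_t n)
    {
        return ((n + wavefront_size - 1) / wavefront_size) * wavefront_size;
    }

    static_assert(round_up_local_threads(96) == 128, "96 threads -> 2 wavefronts");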
-
- 27 Oct, 2022 2 commits
-
Chris Austen authored
Upgraded Dockerfiles and fixed tidy issues to make Ubuntu 20.04 and ROCm 5.3.0 the default
-
kahmed10 authored
Updated GPU pad to use the JIT version. Added range functions for JIT kernels.
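A sketch of what such a range helper for kernels might look like; the names are illustrative, not MIGraphX's actual kernel utilities:

    #include <cstddef>

    // Iterable half-open index range usable inside device kernels.
    struct index_range
    {
        std::size_t first;
        std::size_t last;

        struct iterator
        {
            std::size_t i;
            std::size_t operator*() const { return i; }
            iterator& operator++() { ++i; return *this; }
            bool operator!=(const iterator& rhs) const { return i != rhs.i; }
        };

        iterator begin() const { return {first}; }
        iterator end() const { return {last}; }
    };

    // range(n) iterates 0, 1, ..., n-1, e.g. for elementwise pad fills.
    inline index_range range(std::size_t n) { return {0, n}; }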
-
- 26 Oct, 2022 2 commits
-
Brian Pickrell authored
Fixes an observed regression error on certain frozen protobuf models introduced by PR 1280.
-
kahmed10 authored
use_dynamic_same_auto_pad was removed from convolution, but the driver models still retain the fields. This PR regenerates the files so that they are compatible again.
-
- 25 Oct, 2022 1 commit
-
Chris Austen authored
-
- 24 Oct, 2022 1 commit
-
jungpark-mlir authored
Reiterates the assertion on the standard shape but relaxes it for the multibroadcast ops deliberately inserted to make the broadcast explicit.
-
- 21 Oct, 2022 1 commit
-
Umang Yadav authored
-
- 19 Oct, 2022 2 commits
-
Charlie Lin authored
Refactor dynamic compute (see the sketch below):
- Add a compute_output_shape object that implicitly converts to a new dyn_output or shape object.
- The dyn_output object can handle computing the static output shape of an operator given the input arguments' shapes.
- Change an operator's compute function to argument compute(const dyn_output& dyn_out, std::vector<argument> args) to use the dyn_output object.
Dynamic ref unary functions:
- Included these changes to have an example of the refactored dynamic compute being used.
- Changes to the unary base class to handle dynamic shapes.
- Changed elu and leaky_relu to use the unary base class and pointwise JIT.
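A sketch of the refactored compute signature described above; the types are stand-ins for the MIGraphX ones and the body is illustrative only:

    #include <vector>

    // Stand-in types; the real MIGraphX shape/argument classes are richer.
    struct shape {};
    struct argument { shape s; };

    // dyn_output carries the static output shape computed for this call
    // from the input arguments' (possibly dynamic) shapes.
    struct dyn_output
    {
        shape computed_shape;
    };

    struct relu_op // illustrative operator
    {
        argument compute(const dyn_output& dyn_out, std::vector<argument> args) const
        {
            // Allocate the result from dyn_out.computed_shape, then apply
            // the unary function elementwise over args[0].
            argument result{dyn_out.computed_shape};
            (void)args;
            return result;
        }
    };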
-
Umang Yadav authored
* Use find 2.0 for the convolution
Co-authored-by: Vasilii Filippov <DrizztDoUrden@users.noreply.github.com>
Co-authored-by: Chris Austen <causten@users.noreply.github.com>
-
- 18 Oct, 2022 1 commit
-
Paul Fultz II authored
* Enable non-standard shape
* Use perfdb for non-xdlops
* Fix transpose+broadcast strides
Co-authored-by: jungpark-mlir <jungwook.park@amd.com>
-
- 17 Oct, 2022 1 commit
-
Umang Yadav authored
hipMemset was causing random failures; hipMemsetAsync performs the correct synchronization.
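A minimal sketch of the change, assuming the failures came from the blocking hipMemset running on the default stream and racing with work enqueued on other streams:

    #include <cstddef>
    #include <hip/hip_runtime.h>

    // Zero a device buffer on the same stream as the surrounding work, so
    // the memset is ordered with that work instead of racing with it.
    // Error handling elided for brevity.
    void zero_buffer(void* ptr, std::size_t bytes, hipStream_t stream)
    {
        hipMemsetAsync(ptr, 0, bytes, stream);
    }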
-
- 14 Oct, 2022 1 commit
-
Charlie Lin authored
Allows rank-2 tensors into batchnorm, specifically when the spatial dimensions are all 1 and have been removed.
-
- 13 Oct, 2022 1 commit
-
Charlie Lin authored
- Removes use_dynamic_same_auto_pad.
- Changes padding_mode to be used for dynamic padding.
- Moves compute_padded_shape to pad_calc.cpp as it will be used in other dynamic padding cases.
- Fixes a same_lower compute_padded_shape bug and adds a test (see the sketch after this list).
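A sketch of the standard ONNX SAME padding arithmetic that a same_lower/same_upper calculation follows; this is illustrative, not the contents of pad_calc.cpp. same_lower puts the odd leftover pixel in front, same_upper puts it at the back:

    #include <algorithm>
    #include <cstdint>

    struct pads { std::int64_t before, after; };

    pads same_padding(std::int64_t in, std::int64_t kernel,
                      std::int64_t stride, std::int64_t dilation,
                      bool same_lower)
    {
        std::int64_t effective = (kernel - 1) * dilation + 1;
        std::int64_t out       = (in + stride - 1) / stride; // ceil(in/stride)
        std::int64_t total =
            std::max<std::int64_t>(0, (out - 1) * stride + effective - in);
        std::int64_t small = total / 2;
        std::int64_t big   = total - small;
        return same_lower ? pads{big, small} : pads{small, big};
    }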
-