- 28 Oct, 2022 2 commits
Ted Themistokleous authored
Umang Yadav authored
- 27 Oct, 2022 3 commits
Chris Austen authored
Upgraded Dockerfiles and fixed tidy issues to make Ubuntu 20.04 and ROCm 5.3.0 the default
Ted Themistokleous authored
Added cases to handle invalid splits for split-11 and split-13; this should solve the issue with code coverage.
kahmed10 authored
Updated the GPU pad operator to use the JIT version and added range functions for JIT kernels.
- 26 Oct, 2022 3 commits
Ted Themistokleous authored
The newer split (split-13) moves the split attribute to an input, so in that case we check the number of input args instead. This change allows us to move the accumulate check outside each conditional branch. Also added more debug output to show more details on this failure mode, and an additional test case using a generated split-13 style operator test.
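For illustration, a sketch with hypothetical names (select_splits and its parameters are not the actual MIGraphX parser code) of how the split sizes can be selected in one place so the accumulate check runs once, outside either branch:

```cpp
// Hypothetical sketch: split-11 carries the split sizes as an attribute while
// split-13 passes them as a second input, so the sizes are selected first and
// the accumulate check is done once, outside either branch.
#include <cstddef>
#include <cstdint>
#include <numeric>
#include <stdexcept>
#include <vector>

std::vector<int64_t> select_splits(const std::vector<int64_t>& attr_splits,
                                   const std::vector<int64_t>& input_splits,
                                   std::size_t num_inputs,
                                   int64_t axis_len)
{
    std::vector<int64_t> splits = num_inputs > 1 ? input_splits : attr_splits;
    int64_t total = std::accumulate(splits.begin(), splits.end(), int64_t{0});
    if(total != axis_len)
        throw std::runtime_error("split: sum of split sizes does not match the axis length");
    return splits;
}
```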
Brian Pickrell authored
Fixes a regression observed on certain frozen protobuf models, caused by PR 1280.
kahmed10 authored
use_dynamic_same_auto_pad was removed from convolution, but the driver models still retain the fields. This PR regenerates the files so that they are compatible again.
- 25 Oct, 2022 1 commit
Chris Austen authored
- 24 Oct, 2022 1 commit
jungpark-mlir authored
Reiterate the assertion on the standard shape, but relax it for the multibroadcast ops deliberately inserted to make the broadcast explicit.
- 21 Oct, 2022 3 commits
Ted Themistokleous authored
Ted Themistokleous authored
Initial fix to handle scalar inputs for empty constant values, using scalar, multibroadcast, and contiguous. Fixed the appropriate unit tests for simple single-output constants and added unit tests for multiple If outputs. TODO: have multibroadcast handle scalars so we don't need to use scalar.
Umang Yadav authored
- 20 Oct, 2022 1 commit
Ted Themistokleous authored
Adding a test to validate what a "valid" multi-input should look like and that we correctly handle trailing 1s and correctly sized outputs. Generated and added the two tests from gen_onnx.py with matching tests in onnx_test.cpp: if_then_else_multi_output_shapes_test.onnx and if_then_else_multi_output_shapes_test2.onnx.
- 19 Oct, 2022 2 commits
Charlie Lin authored
Refactor dynamic compute: add a compute_output_shape object that implicitly converts to a new dyn_output or shape object. The dyn_output object can compute the static output shape of an operator given the shapes of the input arguments. Change an operator's compute function to argument compute(const dyn_output& dyn_out, std::vector<argument> args) so it uses the dyn_output object. Dynamic ref unary functions: included these changes to have an example of the refactored dynamic compute being used; changed the unary base class to handle dynamic shapes; changed elu and leaky_relu to use the unary base class and pointwise JIT.
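A minimal sketch of the new calling convention, using stand-in types rather than the real MIGraphX classes, just to show that compute() receives the already-resolved output shape instead of recomputing it:

```cpp
// Stand-in types only; the real MIGraphX shape, argument, and dyn_output
// classes carry much more information.
#include <cstddef>
#include <vector>

struct shape_t   { std::vector<std::size_t> lens; };
struct dyn_output{ shape_t computed_shape; }; // static shape resolved from the input shapes
struct argument  { shape_t s; std::vector<float> data; };

struct relu_like_op
{
    // compute() no longer recomputes the output shape itself; it receives
    // the already-resolved shape through dyn_out.
    argument compute(const dyn_output& dyn_out, std::vector<argument> args) const
    {
        argument result{dyn_out.computed_shape, std::vector<float>(args[0].data.size())};
        for(std::size_t i = 0; i < args[0].data.size(); ++i)
            result.data[i] = args[0].data[i] > 0.0f ? args[0].data[i] : 0.0f;
        return result;
    }
};
```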
Umang Yadav authored
Use find 2.0 for the convolution.
Co-authored-by: Vasilii Filippov <DrizztDoUrden@users.noreply.github.com>
Co-authored-by: Chris Austen <causten@users.noreply.github.com>
- 18 Oct, 2022 6 commits
Paul Fultz II authored
Enable non-standard shapes, use perfdb for non-xdlops, and fix transpose+broadcast strides.
Co-authored-by: jungpark-mlir <jungwook.park@amd.com>
Ted Themistokleous authored
Ted Themistokleous authored
The outline operator seems to be bugged and is going to be deprecated, so we switch to creating a new literal with the proper shape for the empty output branch.
Ted Themistokleous authored
Make all_but_last_dims_equal a function instead of a lambda; rename dim_delta to rank_delta; make unsqueeze_last_op a function instead of a lambda; handle multi-output cases when changing output instructions; capture the shape of each output at the start of the loop via the .at() operator; replace instances of && with and.
Ted Themistokleous authored
This test should error out, as having different output shapes for each branch, with one non-empty, is invalid. Changes to gen_onnx.py as well as the generated onnx file are provided.
Ted Themistokleous authored
Add the test case and generated onnx file to handle the case of two output shapes that vary in rank by one with a trailing 1, but whose sub-lengths are not equivalent.
- 17 Oct, 2022 5 commits
Ted Themistokleous authored
This should always be flagged as an error for static inputs.
Ted Themistokleous authored
Added changes to gen_onnx.py for different types, shapes, and incompatible lens, and added test cases that should throw exceptions in onnx_test.cpp.
Ted Themistokleous authored
gen_onnx.py changes for onnx output of empty constant input branches (seen in resnext50); updated onnx_test.cpp to validate parsing of the input; new onnx files generated from the onnx tests.
Umang Yadav authored
hipMemset was causing random failures; hipMemsetAsync does the correct synchronization.
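For reference, a minimal sketch of the asynchronous pattern (illustrative function and names, simplified error handling); the key point is that the memset is enqueued on the same stream as the kernels that consume the buffer, so it is ordered with the rest of the work:

```cpp
#include <hip/hip_runtime.h>
#include <cstddef>
#include <stdexcept>

// Zero a device buffer on the given stream. Unlike plain hipMemset, which is
// not ordered against a non-default stream, hipMemsetAsync is ordered within
// 'stream', so kernels enqueued afterwards on that stream see cleared memory.
void zero_buffer(void* device_ptr, std::size_t bytes, hipStream_t stream)
{
    if(hipMemsetAsync(device_ptr, 0, bytes, stream) != hipSuccess)
        throw std::runtime_error("hipMemsetAsync failed");
}
```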
Ted Themistokleous authored
Handle checks for each If output; add const to the inputs of all_but_last_dims_equal; use std::equal instead of equal; use .back() to get the last value of vectors; use input().front() instead of prev(prev()) when replacing the last value.
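A hypothetical stand-in for a helper like all_but_last_dims_equal, showing the std::equal usage and why iterating only over the smaller rank cannot go out of range (the real MIGraphX function may differ in detail):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Treat the shorter shape as the reference and compare it against the
// matching prefix of the longer one, so a trailing dimension such as a
// trailing 1 on the larger shape is ignored and neither vector is overrun.
bool all_but_last_dims_equal(const std::vector<std::size_t>& a,
                             const std::vector<std::size_t>& b)
{
    if(a.empty() or b.empty())
        return false;
    const auto& small = a.size() <= b.size() ? a : b;
    const auto& large = a.size() <= b.size() ? b : a;
    return std::equal(small.begin(), small.end(), large.begin());
}
```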
- 15 Oct, 2022 1 commit
Ted Themistokleous authored
- 14 Oct, 2022 7 commits
Ted Themistokleous authored
Ted Themistokleous authored
Ted Themistokleous authored
Check which shape is larger and adjust the equality check so we can never go out of range when the sizes differ, since we only iterate over the smaller vector.
Ted Themistokleous authored
Rename then/else_shapes to then/else_lens; change the assert on shape size to reuse throw(); return lengths from handle_empty_branch; use the handle_empty_branch return value for the lens update; use std::prev() instead of the pre-decrement operator when modifying nodes.
Charlie Lin authored
Allows rank-2 tensors into batchnorm, specifically when the spatial dimensions are all 1 and have been removed.
Umang Yadav authored
Ted Themistokleous authored
An outline of an empty branch is correct; this is typically seen when we parse an empty constant for the other side of the onnx if/then/else logic. We need to do this to match the shape even though the other branch is just empty.
- 13 Oct, 2022 5 commits
Ted Themistokleous authored
Just clean up the logic on this so we're not 5 levels deep on ifs.
Charlie Lin authored
Removes use_dynamic_same_auto_pad; changes padding_mode to be used for dynamic padding; moves compute_padded_shape to pad_calc.cpp, as it will be used in other dynamic padding cases; fixes a same_lower bug in compute_padded_shape and adds a test.
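For context, the usual SAME auto-pad arithmetic for one spatial dimension is sketched below, with same_lower placing the larger half of the padding at the front; this is illustrative only, not the actual pad_calc.cpp code:

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>

// Returns {pad_front, pad_back} for one spatial dimension. The output length
// for SAME padding is ceil(input / stride); the required total padding is
// split in half, and when it is odd the extra element goes at the back for
// same_upper and at the front for same_lower.
std::pair<int64_t, int64_t> same_padding(
    int64_t input, int64_t kernel, int64_t stride, int64_t dilation, bool same_lower)
{
    int64_t effective_kernel = (kernel - 1) * dilation + 1;
    int64_t output           = (input + stride - 1) / stride; // ceil(input / stride)
    int64_t total = std::max<int64_t>((output - 1) * stride + effective_kernel - input, 0);
    int64_t half  = total / 2;
    return same_lower ? std::make_pair(total - half, half) : std::make_pair(half, total - half);
}
```

For example, input 5, kernel 3, stride 2, dilation 1 needs a total padding of 2, split as (1, 1); with kernel 4 the total is 3, giving (2, 1) for same_lower and (1, 2) for same_upper.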
Ted Themistokleous authored
Use empty() instead of size() == 0 for checking each condition; make each branch a lambda instead of repeating code.
Charlie Lin authored
Rewrites the TF batchnorm-like operators into other MIGraphX operators and removes the code related to batch_norm_inference.
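This works because batchnorm inference is a pure elementwise formula, y = gamma * (x - mean) / sqrt(variance + epsilon) + beta, so it can be expressed with broadcast and elementwise operators instead of a dedicated batch_norm_inference op. A scalar sketch of the formula (illustrative only, not the MIGraphX rewrite itself):

```cpp
#include <cmath>

// Batchnorm inference for a single element x of one channel, using that
// channel's learned scale (gamma), shift (beta), and running statistics.
float batchnorm_inference(float x, float gamma, float beta, float mean, float variance,
                          float epsilon = 1e-5f)
{
    return gamma * (x - mean) / std::sqrt(variance + epsilon) + beta;
}
```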
Ted Themistokleous authored