- 09 Nov, 2022 1 commit
-
Charlie Lin authored
Co-authored-by: kahmed10 <15948690+kahmed10@users.noreply.github.com>
-
- 07 Nov, 2022 1 commit
-
arvindcheru authored
-
- 06 Nov, 2022 1 commit
-
Umang Yadav authored
-
- 03 Nov, 2022 1 commit
-
Charlie Lin authored
Two-input version of the broadcast operator to handle dynamic shapes. Added comments to describe the versions of the broadcast operator. Dynamic broadcast only handles broadcasting a static 1D shape tensor into the other input's shape.
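A minimal standalone sketch of the shape rule described above, assuming the 1D input is broadcast along a chosen axis into the other input's shape; the function name, the axis parameter, and the error handling are illustrative, not MIGraphX's implementation:

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Illustrative only: the 1D input must match the target dimension on the
// broadcast axis (or have length 1); the output simply takes the target dims.
std::vector<std::size_t> broadcast_1d_into(std::size_t len_1d,
                                           const std::vector<std::size_t>& target_dims,
                                           std::size_t axis)
{
    if(axis >= target_dims.size())
        throw std::runtime_error("broadcast axis out of range");
    if(len_1d != target_dims[axis] && len_1d != 1)
        throw std::runtime_error("1D input cannot be broadcast into the target shape");
    return target_dims;
}
```

In the two-input form, target_dims would presumably come from the runtime shape of the other input rather than from a fixed attribute.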
-
- 02 Nov, 2022 2 commits
-
Paul Fultz II authored
Can be enabled via environment variable MIGRAPHX_ENABLE_NHWC
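A minimal sketch of how an opt-in flag such as MIGRAPHX_ENABLE_NHWC is typically read from the environment; enabled_by_env and the surrounding program are hypothetical, not MIGraphX's internal API:

```cpp
#include <cstdlib>
#include <cstring>
#include <iostream>

// Treat an unset variable or "0" as disabled; anything else enables the feature.
bool enabled_by_env(const char* name)
{
    const char* value = std::getenv(name);
    return value != nullptr && std::strcmp(value, "0") != 0;
}

int main()
{
    if(enabled_by_env("MIGRAPHX_ENABLE_NHWC"))
        std::cout << "NHWC layout pass enabled\n";
    else
        std::cout << "NHWC layout pass disabled (default)\n";
}
```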
-
Paul Fultz II authored
-
- 01 Nov, 2022 6 commits
-
charlie authored
Trying to keep the PRs separate
-
charlie authored
-
Ted Themistokleous authored
The newer Split operator moves the split attribute to an input. In that case, we check the number of input arguments instead.
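A sketch of the parsing decision described above, using simplified stand-in types; split_node and its members are hypothetical and do not mirror the actual ONNX parser structures:

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

// Stand-in for a parsed ONNX Split node (hypothetical).
struct split_node
{
    std::optional<std::vector<int64_t>> split_attribute; // older opsets: attribute
    std::size_t num_inputs = 1;                          // data [, split]
    std::vector<int64_t> split_input_values;             // newer opsets: second input
};

std::vector<int64_t> get_split_sizes(const split_node& node)
{
    if(node.split_attribute) // old form: sizes carried in an attribute
        return *node.split_attribute;
    if(node.num_inputs > 1)  // new form: the input count tells us a split input exists
        return node.split_input_values;
    return {};               // no explicit sizes: split evenly
}
```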
-
charlie authored
-
charlie authored
-
Torsten Keßler authored
-
- 31 Oct, 2022 6 commits
-
charlie authored
This functionality does not appear to be used. It was probably an ONNX feature that was deprecated when ONNX adopted NumPy-like broadcasting.
-
charlie authored
-
charlie authored
-
charlie authored
The change to shape::operator== makes this work the same way.
-
charlie authored
-
charlie authored
-
- 28 Oct, 2022 2 commits
-
Umang Yadav authored
Local thread counts in multiples of 32 were introduced in #1348, but local thread counts that are not a multiple of 64 cause correctness issues.
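For illustration, the arithmetic behind that constraint, assuming the usual AMD wavefront size of 64; round_up_to_multiple is a hypothetical helper, not MIGraphX code:

```cpp
#include <cstddef>

// Round a local thread count up to the nearest multiple of m (e.g. 64).
constexpr std::size_t round_up_to_multiple(std::size_t n, std::size_t m)
{
    return ((n + m - 1) / m) * m;
}

// 96 is a multiple of 32 but not of 64, the kind of count that caused issues.
static_assert(round_up_to_multiple(96, 64) == 128, "96 rounds up to 128");
static_assert(round_up_to_multiple(128, 64) == 128, "128 is already aligned");

int main() {}
```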
-
charlie authored
-
- 27 Oct, 2022 9 commits
-
charlie authored
-
Chris Austen authored
Upgraded Dockerfiles and fixed tidy issues to make Ubuntu 20.04 and ROCm 5.3.0 the default
-
charlie authored
-
charlie authored
-
charlie authored
-
kahmed10 authored
Updated the GPU pad operator to use the JIT version. Added range functions for JIT kernels.
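A standalone sketch of the kind of range helper the commit refers to, so index loops can be written as range-based for loops; this simplified host-side version is illustrative and does not reproduce the MIGraphX kernel headers:

```cpp
#include <cstddef>
#include <iostream>

// Half-open index range [first, last) usable in range-based for loops.
struct index_range
{
    std::size_t first;
    std::size_t last;

    struct iterator
    {
        std::size_t i;
        std::size_t operator*() const { return i; }
        iterator& operator++() { ++i; return *this; }
        bool operator!=(const iterator& other) const { return i != other.i; }
    };

    iterator begin() const { return {first}; }
    iterator end() const { return {last}; }
};

index_range range(std::size_t n) { return {0, n}; }
index_range range(std::size_t start, std::size_t stop) { return {start, stop}; }

int main()
{
    for(auto i : range(3))
        std::cout << i << ' '; // prints: 0 1 2
}
```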
-
charlie authored
-
charlie authored
-
charlie authored
-
- 26 Oct, 2022 5 commits
-
charlie authored
-
Brian Pickrell authored
Fixes a regression observed on certain frozen protobuf models that was introduced by PR 1280
-
kahmed10 authored
use_dynamic_same_auto_pad was removed from convolution, but the driver models still retain the fields. This PR regenerates the files so that they are compatible again.
-
charlie authored
-
charlie authored
-
- 24 Oct, 2022 2 commits
-
charlie authored
-
jungpark-mlir authored
Reiterate the assertion on the standard shape, but relax it for the multibroadcast ops deliberately inserted to make the broadcast explicit.
-
- 20 Oct, 2022 2 commits
- 19 Oct, 2022 2 commits
-
charlie authored
-
Charlie Lin authored
Refactor dynamic compute
- Add a compute_output_shape object that implicitly converts to a new dyn_output or shape object
- The dyn_output object can handle computing the static output shape of an operator given the input arguments' shapes
- Change an operator's compute function to argument compute(const dyn_output& dyn_out, std::vector<argument> args) to use the dyn_output object
Dynamic ref unary functions
- Included these changes to have an example of the refactored dynamic compute being used
- Changes to the unary base class to handle dynamic shapes
- Changed elu and leaky_relu to use the unary base class and pointwise JIT
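To make the refactor concrete, a heavily simplified, standalone sketch of an operator compute using a dyn_output-style object; shape, argument, and dyn_output below are minimal stand-ins rather than the real MIGraphX classes, and compute_relu is a hypothetical example operator:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Minimal stand-ins for MIGraphX's shape/argument/dyn_output; illustrative only.
struct shape
{
    std::vector<std::size_t> lens;
    std::size_t elements() const
    {
        std::size_t n = 1;
        for(auto d : lens)
            n *= d;
        return n;
    }
};

struct argument
{
    shape s;
    std::vector<float> data;
};

struct dyn_output
{
    shape computed_shape; // static output shape resolved from the runtime inputs
};

// Mirrors the signature named in the commit:
// argument compute(const dyn_output& dyn_out, std::vector<argument> args)
argument compute_relu(const dyn_output& dyn_out, std::vector<argument> args)
{
    argument result{dyn_out.computed_shape,
                    std::vector<float>(dyn_out.computed_shape.elements())};
    for(std::size_t i = 0; i < result.data.size(); ++i)
        result.data[i] = args[0].data[i] > 0 ? args[0].data[i] : 0.0f;
    return result;
}

int main()
{
    argument in{{{2, 2}}, {-1.0f, 2.0f, -3.0f, 4.0f}};
    dyn_output out{in.s}; // in a real run, resolved from the dynamic dimensions
    auto r = compute_relu(out, {in});
    for(float v : r.data)
        std::cout << v << ' '; // prints: 0 2 0 4
}
```

The point of the refactor, as described, is that compute no longer has to derive the static output shape itself; the dyn_output object hands it over.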
-