"include/vscode:/vscode.git/clone" did not exist on "1dd455d6337cf7ed3ec6cb82a1618701a5c17351"
- 15 Feb, 2023 1 commit
-
-
Brian Pickrell authored
Add dynamic shape support to the slice operator. This first draft does not support slicing along non-fixed dynamic axes; the resulting shape in such cases is not guaranteed. Also, ONNX parsing does not support any arguments other than "axes".
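For context, a minimal sketch of how a slice output length could be computed along one fixed axis (illustrative only, assuming unit steps and already-normalized starts/ends; not the MIGraphX implementation):

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative only: length of a slice along one fixed axis, assuming
// step == 1 and non-negative start/end values. For a non-fixed dynamic
// axis the true extent is unknown at compile time, which is why the
// resulting shape cannot be guaranteed in that case.
std::int64_t sliced_length(std::int64_t dim, std::int64_t start, std::int64_t end)
{
    start = std::clamp<std::int64_t>(start, 0, dim);
    end   = std::clamp<std::int64_t>(end, 0, dim);
    return std::max<std::int64_t>(end - start, 0);
}
```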
-
- 14 Feb, 2023 2 commits
-
-
Charlie Lin authored
Expands on the documentation and corrects a default-option documentation error.
-
Paul Fultz II authored
* Add serialization of tuples and optional types
-
- 11 Feb, 2023 1 commit
-
-
Brian Pickrell authored
* Add dynamic shape support to the concat operator. Includes new op_shape_test and ref_ops_test cases.
-
- 10 Feb, 2023 1 commit
-
-
Brian Pickrell authored
Dynamic shape support for the Where operator. Includes shape test, ref_ops test, and onnx_test cases.
-
- 06 Feb, 2023 1 commit
-
-
Paul Fultz II authored
* Fuse layernorm with different patterns
* Only match when using the last axis
Co-authored-by: kahmed10 <15948690+kahmed10@users.noreply.github.com>
-
- 03 Feb, 2023 2 commits
-
-
Paul Fultz II authored
Refactors memory coloring to only handle allocation instructions. It also handles allocations for tuple shapes.
-
Brian Pickrell authored
* Implement dynamic shapes for scatterND operators.
-
- 02 Feb, 2023 1 commit
-
-
Brian Pickrell authored
Dynamic shape support for gathernd op.
-
- 31 Jan, 2023 2 commits
-
-
Chris Austen authored
Upgrade to ROCm 5.4.2 in CI
-
Paul Fultz II authored
* Add general optimize pass
* Fuse gemm multiplies by scalar
* Handle zero epsilon
-
- 30 Jan, 2023 1 commit
-
-
Brian Pickrell authored
Dynamic shape support for gather op.
-
- 17 Jan, 2023 2 commits
-
-
Charlie Lin authored
Extends reshape to handle the case of a single non-fixed dynamic_dimension
-
Charlie Lin authored
Extends the pad operator to handle dynamic input shapes. Only handles computing the shape for adding constant padding to a dynamic shape: the padding is added to the min, max, and opt values (unless opt is 0, in which case it stays 0). Does not handle reflect padding with dynamic shapes.
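A minimal sketch of the rule described above, using a hypothetical dyn_dim struct with min/max/opt fields (not the actual MIGraphX dynamic_dimension type):

```cpp
#include <cstddef>

// Hypothetical stand-in for a dynamic dimension with min/max/opt values.
struct dyn_dim
{
    std::size_t min;
    std::size_t max;
    std::size_t opt;
};

// Illustrative only: add constant padding to a dynamic dimension by
// padding min and max, and padding opt only when it is non-zero.
dyn_dim pad_dyn_dim(dyn_dim d, std::size_t pads_before, std::size_t pads_after)
{
    const std::size_t total = pads_before + pads_after;
    d.min += total;
    d.max += total;
    if(d.opt != 0)
        d.opt += total;
    return d;
}
```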
-
- 04 Jan, 2023 1 commit
-
-
Brian Pickrell authored
Implements dynamic shapes in reduce_op and all its child operator classes (reduce_max etc.)
-
- 14 Dec, 2022 1 commit
-
-
Paul Fultz II authored
* Print python code
-
- 13 Dec, 2022 2 commits
-
-
kahmed10 authored
-
Charlie Lin authored
Implements the operator==(dynamic_dimension, size_t) functions
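As an illustration of what such a comparison might mean (a hedged sketch, not the actual MIGraphX definition), a dynamic_dimension could compare equal to a size_t when it is fixed at exactly that value:

```cpp
#include <cstddef>

// Hypothetical simplified dynamic_dimension with only a min/max range.
struct dynamic_dimension
{
    std::size_t min;
    std::size_t max;
};

// Sketch: equal to a fixed size only when the range collapses to that value.
inline bool operator==(const dynamic_dimension& d, std::size_t x)
{
    return d.min == x and d.max == x;
}

inline bool operator==(std::size_t x, const dynamic_dimension& d) { return d == x; }
```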
-
- 08 Dec, 2022 4 commits
-
-
Charlie Lin authored
Extends the dot MIGX operator to handle dynamic input shapes. Only allows dot between two dynamic shapes whose outer dimensions match exactly; the inner dimensions must also match correspondingly. Updates dot-related tests. Changes check_shapes to use shape.ndim(). The ONNX parsers for GEMM and MatMul will be updated in a separate PR.
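A rough sketch of the matching rule described above, using a hypothetical dyn_dim type (the real check lives in the MIGraphX dot shape computation):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical dynamic dimension; equality here means identical ranges.
struct dyn_dim
{
    std::size_t min;
    std::size_t max;
    bool operator==(const dyn_dim& o) const { return min == o.min and max == o.max; }
};

// Sketch: dot between {..., m, k} and {..., k, n} requires the leading
// (outer) dimensions to match exactly and the contracted k dims to agree.
bool dot_dims_compatible(const std::vector<dyn_dim>& a, const std::vector<dyn_dim>& b)
{
    if(a.size() != b.size() or a.size() < 2)
        return false;
    for(std::size_t i = 0; i < a.size() - 2; ++i)
        if(not(a[i] == b[i]))
            return false;
    // Inner dims: a's last dim must match b's second-to-last dim.
    return a[a.size() - 1] == b[b.size() - 2];
}
```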
-
Charlie Lin authored
No major changes required: uses dyn_output and passes the dynamic shape when calling compute_shape(). Adds dynamic shape tests.
-
Charlie Lin authored
Changes flatten's compute_shape() to handle dynamic shapes. Calculates the flattened shape from the min, max, and opt values.
-
shivadbhavsar authored
Currently, quantizing a program with RNN layers to fp16 results in segmentation faults due to a "convert" operation being applied to an "undefined" instruction. The following changes fix this issue: added an is_undefined method to the instruction class that returns true if all inputs to the instruction come from an undefined op; updated the rewrite_rnn pass to use the new is_undefined method rather than checking ins->name(); updated the dead_code_elimination pass to also use this new method rather than only checking the instruction name.
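A hedged sketch of the described helper (heavily simplified; the real method lives on the MIGraphX instruction class and uses its actual accessors):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical minimal instruction node, for illustration only.
struct instruction
{
    std::string op_name;
    std::vector<const instruction*> inputs;

    // Sketch: an instruction counts as "undefined" when every one of its
    // inputs comes from an undefined op, so passes can skip it instead of
    // converting or rewriting it.
    bool is_undefined() const
    {
        return not inputs.empty() and
               std::all_of(inputs.begin(), inputs.end(),
                           [](const instruction* i) { return i->op_name == "undefined"; });
    }
};
```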
-
- 07 Dec, 2022 1 commit
-
-
Charlie Lin authored
Extends the Argmax operator to handle dynamic input shapes; only the shape function changes.
-
- 06 Dec, 2022 1 commit
-
-
Charlie Lin authored
Extends unsqueeze and squeeze to work for dynamic input shapes. Does not handle the steps parameter. Adds some additional negative-axes shape tests.
-
- 02 Dec, 2022 2 commits
-
-
Charlie Lin authored
Fix problem with the contiguous operator constructing non-standard shape literals. A non-standard literal will almost never be used, since a literal is known at compile time. Added some comments on the intended behavior:
- The literal{shape, vector} constructor with a non-standard shape is intended to keep the same ordering as the given vector. The data buffer will be populated such that when the non-standard indexing is used, the original order is as given.
- The literal{shape, argument} constructor directly copies the data buffer from the argument.
- Changed non-standard literal fill() to use tensor_view iterators, as it handles non-standard shapes now.
- Changed the contiguous ref_ops_test to be more helpful.
-
Charlie Lin authored
Extends the pooling operators for dynamic shape inputs: AveragePooling, GlobalAveragePooling, MaxPooling, GlobalMaxPooling, LpNormPooling, GlobalLpNormPooling.
-
- 28 Nov, 2022 1 commit
-
-
Charlie Lin authored
Extends the ref transpose operator for dynamic shapes. Makes dynamic test naming more consistent.
-
- 17 Nov, 2022 1 commit
-
-
Charlie Lin authored
Extends the ref contiguous operator to handle dynamic shapes. Updates the eliminate_contiguous pass to use the dyn_output struct.
-
- 13 Nov, 2022 1 commit
-
-
Charlie Lin authored
Updated the Multibroadcast op to have a two-input version for dynamic shapes. Current dynamic shape broadcasting logic: the dynamic_dimensions must be the same, or one of them is {1, 1, 0} or {1, 1, 1}. Works for dyn-dyn, dyn-static, and static-static shape combinations. Changed common.cpp for multibroadcasting for binary ops with dynamic shapes. Extended binary.hpp for dynamic shapes to test the new common.cpp changes.
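A condensed sketch of the stated rule, using a hypothetical dyn_dim with min/max/opt fields (not the actual MIGraphX type):

```cpp
#include <cstddef>

// Hypothetical dynamic dimension {min, max, opt}, for illustration.
struct dyn_dim
{
    std::size_t min;
    std::size_t max;
    std::size_t opt;
    bool operator==(const dyn_dim& o) const
    {
        return min == o.min and max == o.max and opt == o.opt;
    }
};

// Sketch of the rule above: two dynamic dimensions broadcast together when
// they are identical, or when one of them is {1, 1, 0} or {1, 1, 1}.
bool broadcastable(const dyn_dim& a, const dyn_dim& b)
{
    auto is_one = [](const dyn_dim& d) {
        return d == dyn_dim{1, 1, 0} or d == dyn_dim{1, 1, 1};
    };
    return a == b or is_one(a) or is_one(b);
}
```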
-
- 02 Nov, 2022 1 commit
-
-
Paul Fultz II authored
Can be enabled via the environment variable MIGRAPHX_ENABLE_NHWC.
-
- 27 Oct, 2022 1 commit
-
-
Chris Austen authored
Upgraded Dockerfiles and fixed tidy issues to make Ubuntu 20.04 and ROCm 5.3.0 the default
-
- 19 Oct, 2022 2 commits
-
-
Charlie Lin authored
Refactor dynamic compute:
- Add a compute_output_shape object that implicitly converts to a new dyn_output or shape object.
- The dyn_output object can handle computing the static output shape of an operator given the input arguments' shapes.
- Change an operator's compute function to argument compute(const dyn_output& dyn_out, std::vector<argument> args) to use the dyn_output object (see the sketch below).
Dynamic ref unary functions:
- Included these changes to have an example of the refactored dynamic compute being used.
- Changes to the unary base class to handle dynamic shapes.
- Changed elu and leaky_relu to use the unary base class and pointwise JIT.
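A minimal hedged sketch of the described compute interface; shape, argument, dyn_output, and identity_op below are simplified stand-ins, not the actual MIGraphX classes:

```cpp
#include <vector>

// Simplified stand-ins for illustration only.
struct shape { /* dims, type, ... */ };
struct argument { shape s; /* data buffer ... */ };
struct dyn_output
{
    // The static output shape resolved from the actual input argument shapes.
    shape computed_shape;
};

// Sketch of the refactored compute signature: the operator receives the
// already-resolved static output shape via dyn_output instead of recomputing
// it from a possibly-dynamic shape.
struct identity_op
{
    argument compute(const dyn_output& dyn_out, std::vector<argument> args) const
    {
        argument result = args.front();
        result.s        = dyn_out.computed_shape;
        return result;
    }
};
```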
-
Umang Yadav authored
* Use find 2.0 for the convolution
Co-authored-by: Vasilii Filippov <DrizztDoUrden@users.noreply.github.com>
Co-authored-by: Chris Austen <causten@users.noreply.github.com>
-
- 13 Oct, 2022 2 commits
-
-
Charlie Lin authored
Removes use_dynamic_same_auto_pad. Changes padding_mode to be used for dynamic padding. Moves compute_padded_shape to pad_calc.cpp, as it will be used in other dynamic padding cases. Fixes a same_lower compute_padded_shape bug and adds a test.
-
Charlie Lin authored
Rewrites the TF batch-norm-like operators to other MIGX operators. Removes the code related to batch_norm_inference.
-
- 04 Oct, 2022 1 commit
-
-
Ted Themistokleous authored
Stream sync changes and associated API level changes
-
- 29 Sep, 2022 1 commit
-
-
Umang Yadav authored
Improvements/additions to be made: changes for quant_convolution, changes for deconvolution, and macros for MIOpen status checks.
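As an illustration of the kind of status-check macro mentioned (a hypothetical sketch, not the macro MIGraphX actually defines), MIOpen calls return a miopenStatus_t that can be checked in one place:

```cpp
#include <stdexcept>
#include <string>

#include <miopen/miopen.h>

// Hypothetical example macro: throw if a MIOpen call does not succeed.
// Usage: CHECK_MIOPEN_STATUS(miopenCreate(&handle));
#define CHECK_MIOPEN_STATUS(call)                                                \
    do                                                                           \
    {                                                                            \
        const miopenStatus_t status_ = (call);                                   \
        if(status_ != miopenStatusSuccess)                                       \
            throw std::runtime_error("MIOpen call failed: " #call " (status " +  \
                                     std::to_string(static_cast<int>(status_)) + \
                                     ")");                                       \
    } while(false)
```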
-
- 27 Sep, 2022 1 commit
-
-
Ted Themistokleous authored
Implement operator for CPU and GPU implementations
-
- 21 Sep, 2022 1 commit
-
-
kahmed10 authored
This PR allows for other values of epsilon to be matched when finding layernorm. Similarly, the calculation now uses the variable for epsilon.
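For reference, epsilon appears in the standard layernorm normalization (general definition, not a quote of the MIGraphX code):

```latex
y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} \cdot \gamma + \beta
```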
-
- 08 Sep, 2022 1 commit
-
-
Paul Fultz II authored
* Remove unused headers
-