"tools/git@developer.sourcefind.cn:gaoqiong/migraphx.git" did not exist on "866cca5be094f295b6de55e5dc6728a626abf7ba"
- 16 Sep, 2023 1 commit
-
-
Charlie Lin authored
Implements a fill operator that sets the values in an output buffer to a given value. It will be used when parsing ONNX ConstantOfShape, and can also be used when a buffer needs to be filled with a value that is determined at runtime.
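A minimal sketch of how such a fill might be composed with an allocated buffer using the internal C++ builder API. The input order (value, buffer) and the "shape" attribute spelling on allocate are assumptions, not taken from the commit:

```cpp
#include <migraphx/program.hpp>
#include <migraphx/module.hpp>
#include <migraphx/make_op.hpp>
#include <migraphx/shape.hpp>
#include <migraphx/serialize.hpp>

int main()
{
    migraphx::program p;
    auto* mm = p.get_main_module();

    // scalar fill value supplied at runtime as a parameter
    auto val = mm->add_parameter("value", migraphx::shape{migraphx::shape::float_type, {1}});

    // allocate an output buffer, then fill every element with the value
    // (assumed attribute/input layout; the real op may differ)
    migraphx::shape buf_s{migraphx::shape::float_type, {2, 3}};
    auto buf = mm->add_instruction(
        migraphx::make_op("allocate", {{"shape", migraphx::to_value(buf_s)}}));
    auto out = mm->add_instruction(migraphx::make_op("fill"), val, buf);
    mm->add_return({out});
    return 0;
}
```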
-
- 14 Sep, 2023 1 commit
-
-
Brian Pickrell authored
New op that populates a shape with random numbers drawn from a uniform distribution. The rand_uniform op can implement the ONNX RandomUniform instruction, and can also create the random number sequence necessary to implement Multinomial. (At this time, our ONNX Multinomial parsing generates a random sequence of numbers at parse time as a workaround, so the resulting program uses the same "random" set every time.) Arguments: shape, seed. Shape is required and can be static or dynamic. Seed is still optional in this version; if it is not given at inference time, the value in the seed creation attribute is used. Update: the boolean use_auto_seed, which caused any given seed to be ignored, has been deleted.
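As a rough sketch only: the op name comes from the commit, but the exact input layout (a buffer that supplies the output shape plus an optional runtime seed) and the "seed" attribute below are assumptions:

```cpp
#include <migraphx/program.hpp>
#include <migraphx/module.hpp>
#include <migraphx/make_op.hpp>
#include <migraphx/shape.hpp>

int main()
{
    migraphx::program p;
    auto* mm = p.get_main_module();

    // buffer whose shape the random values should fill
    auto buf = mm->add_parameter("buf", migraphx::shape{migraphx::shape::float_type, {3, 4}});
    // optional runtime seed; if omitted, the op would fall back to its creation attribute
    auto seed = mm->add_parameter("seed", migraphx::shape{migraphx::shape::uint64_type, {1}});

    auto rnd = mm->add_instruction(
        migraphx::make_op("rand_uniform", {{"seed", 0}}), buf, seed);
    mm->add_return({rnd});
    return 0;
}
```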
-
- 10 Sep, 2023 1 commit
-
-
Charlie Lin authored
Makes a version of allocate that takes in dimensions and allocates a buffer. Going to create a simplify_dynamic_ops compiler pass that will use the use_shape_attr flag. The ONNX op ConstantOfShape needs the buffer to be filled with a specific value, so a fill operator will follow in another PR.
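A hedged sketch of the dims-input form: the meaning of the input (a 1-D tensor of output dimensions) is taken from the commit description, but everything else here is an assumption, and the element-type attribute (possibly something like "buf_type") is omitted because its exact name is not given:

```cpp
#include <migraphx/program.hpp>
#include <migraphx/module.hpp>
#include <migraphx/make_op.hpp>
#include <migraphx/shape.hpp>

int main()
{
    migraphx::program p;
    auto* mm = p.get_main_module();

    // runtime-provided output dimensions, e.g. {2, 3, 4}
    auto dims = mm->add_parameter("out_dims",
                                  migraphx::shape{migraphx::shape::int64_type, {3}});
    // allocate sized from the runtime dims input (element-type attribute omitted here)
    auto buf = mm->add_instruction(migraphx::make_op("allocate"), dims);
    mm->add_return({buf});
    return 0;
}
```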
-
- 29 Aug, 2023 1 commit
-
-
Brian Pickrell authored
Adds support for dynamic input shapes in the pooling operator, along with auto-padding. With this combination, the padding (and therefore the output shape) cannot be computed until runtime.
-
- 18 Aug, 2023 2 commits
-
-
Paul Fultz II authored
-
Charlie Lin authored
Allows slice to work with variable starts, ends, and axes inputs. Outputs a dynamic shape, even with a static-shape data input, when the starts and ends are variable.
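A sketch contrasting the attribute-driven slice with the new variable-input form. The attribute form is the long-standing API; the input combination shown for the variable form (data, starts, ends with an axes attribute) is an assumption based on the commit text:

```cpp
#include <migraphx/program.hpp>
#include <migraphx/module.hpp>
#include <migraphx/make_op.hpp>
#include <migraphx/shape.hpp>

int main()
{
    migraphx::program p;
    auto* mm = p.get_main_module();
    auto data = mm->add_parameter("data", migraphx::shape{migraphx::shape::float_type, {4, 6}});

    // 1) compile-time starts/ends/axes: static output shape
    auto s1 = mm->add_instruction(
        migraphx::make_op("slice", {{"axes", {1}}, {"starts", {0}}, {"ends", {3}}}), data);

    // 2) runtime starts/ends: output shape is dynamic even though `data` is static
    auto starts = mm->add_parameter("starts", migraphx::shape{migraphx::shape::int64_type, {1}});
    auto ends   = mm->add_parameter("ends", migraphx::shape{migraphx::shape::int64_type, {1}});
    auto s2 = mm->add_instruction(
        migraphx::make_op("slice", {{"axes", {1}}}), data, starts, ends);

    mm->add_return({s1, s2});
    return 0;
}
```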
-
- 31 Jul, 2023 1 commit
-
-
Lakhinder Walia authored
* Use shape of Instruction (instead of a default) in add_return()
* Instruction validation fix: not to use a default shape value for comparison
* Fix instruction::replace() to recompute shape for "@return"
* Handle the case of missing shape in an Instruction-related test
* Use compute_shape() to get op shapes + test case for tuple_type
* Add test case shape_test/return_shape_tuple
* Add test for @return to check for half type
* Move @return unit tests around; address review comments
* Broken comparison fix: comparison to a (default) shape of tuple_type
* Test cases: (add) return_shape_empty & (modify) return_shape_tuple
* Modify the assert() statement
-
- 25 Jul, 2023 1 commit
-
-
Brian Pickrell authored
* Add dynamic input to prefix_scan_op
* Added a shape test. The op should return the same dynamic shape as the input.
* Add 2D shape test for prefix_scan
-
- 23 Jul, 2023 1 commit
-
-
Charlie Lin authored
-
- 13 Jul, 2023 1 commit
-
-
Charlie Lin authored
Renames deconvolution -> convolution_backwards to be more consistent with the literature. Note: this is not the cross-correlation operator (which is the adjoint of convolution); it is technically a standard convolution operator combined with an upsampling operator rather than a downsampling operator. Adds unit tests for the padding, strides, dilations, and other op attributes. Throws on the auto_pad attribute since it has not been implemented; previously the parser read the attribute and set it but then did nothing with it. Extended for dynamic shapes. Does not support combining asymmetric padding (padding_L != padding_R) and output_shape with dynamic shapes.
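A minimal sketch of building a convolution_backwards instruction; the attribute names are assumed to mirror convolution's ("padding", "stride", "dilation") and the weight layout is assumed to follow ONNX ConvTranspose (in-channels first):

```cpp
#include <migraphx/program.hpp>
#include <migraphx/module.hpp>
#include <migraphx/make_op.hpp>
#include <migraphx/shape.hpp>

int main()
{
    migraphx::program p;
    auto* mm = p.get_main_module();

    // NCHW input; weights {in_channels, out_channels, kH, kW}
    auto x = mm->add_parameter("x", migraphx::shape{migraphx::shape::float_type, {1, 3, 4, 4}});
    auto w = mm->add_parameter("w", migraphx::shape{migraphx::shape::float_type, {3, 1, 3, 3}});

    // upsampling-style convolution: output spatial dims grow to 6x6 here
    auto y = mm->add_instruction(
        migraphx::make_op("convolution_backwards",
                          {{"padding", {0, 0}}, {"stride", {1, 1}}, {"dilation", {1, 1}}}),
        x, w);
    mm->add_return({y});
    return 0;
}
```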
-
- 16 Jun, 2023 1 commit
-
-
Charlie Lin authored
* Initial
* Added tests and new functionality
* Update optimals handling
* Simplify conditionals
* Ref test, update docs
* Remove comment, suggestion unclear

Co-authored-by: Umang Yadav <29876643+umangyadav@users.noreply.github.com>
-
- 15 Jun, 2023 1 commit
-
-
Brian Pickrell authored
* Fix parse_instancenorm to create broadcast and multibroadcast instructions with two dynamic shape arguments instead of one. Their make_op() functions don't support dynamic shapes when called with one input. This caused an error when parsing an ONNX 3duunet model.
* Use add_common_op() to create the multibroadcast op.
* Add verification and parsing test for instance_norm with dynamic input. Parse test doesn't pass.
* Fix for test; still doesn't pass.
* Another fix for test; still doesn't pass.
* Work in progress: instance_norm_dyn_batch_test works but instance_norm_test doesn't.
* Fix ONNX instancenorm tests to match parser changes. Passes all check tests.
* Updated comments explaining usage of add_common_op().
* Hand-merged conflicts with develop.
* Fix instance_norm_half_test after merge.
* Add ONNX test instance_norm_dyn_batch_half_test.
* Add shape test cases broadcast_1in_dyn_error and multibroadcast_1in_dyn_error_0.
-
- 12 Jun, 2023 1 commit
-
-
Paul Fultz II authored
-
- 17 May, 2023 1 commit
-
-
shivadbhavsar authored
Adding support for broadcasted scalars to the unsqueeze op. Specifying steps other than 1 is disallowed in this implementation since we want the output to always be a tensor. We can support varying step sizes if we allow a broadcasted scalar output from this op.
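An illustrative sketch of unsqueezing a broadcasted scalar into a real tensor axis; the attribute names ("out_lens", "axes") follow the existing ops, and the overall flow is only an assumption about how this change would be exercised:

```cpp
#include <migraphx/program.hpp>
#include <migraphx/module.hpp>
#include <migraphx/make_op.hpp>
#include <migraphx/shape.hpp>

int main()
{
    migraphx::program p;
    auto* mm = p.get_main_module();

    // single-element parameter, broadcast to 4 elements without copying (strides become 0)
    auto scalar = mm->add_parameter("c", migraphx::shape{migraphx::shape::float_type, {1}});
    auto bcast  = mm->add_instruction(
        migraphx::make_op("multibroadcast", {{"out_lens", {4}}}), scalar);

    // unsqueeze adds a new axis so the result is a {4, 1} tensor
    auto out = mm->add_instruction(migraphx::make_op("unsqueeze", {{"axes", {1}}}), bcast);
    mm->add_return({out});
    return 0;
}
```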
-
- 18 Apr, 2023 1 commit
-
-
Ted Themistokleous authored
Ensure that we don't have empty inputs when computing the shape for a pointwise function.
-
- 07 Apr, 2023 1 commit
-
-
Paul Fultz II authored
Converts can be inserted when the scales and input differ in the ONNX file (we are already doing this implicit conversion in the ref implementation). This will also improve the compile time of quantizelinear.hpp since we can remove the nested visit method.
-
- 04 Apr, 2023 1 commit
-
-
Charlie Lin authored
Makes the optimals into a std::set<std::size_t>. Changes shape object functions to handle the opts change. Convolution, flatten, and pooling no longer calculate the output optimal dimensions and instead return empty opts; this will need to change in the future if we want to support dynamic shapes fully. Many changes to tests and shape calls with respect to the new optimals.
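A minimal sketch of what a dynamic shape looks like after this change, assuming dynamic_dimension carries {min, max, optimals} with optimals as a std::set<std::size_t> so duplicates collapse:

```cpp
#include <migraphx/shape.hpp>
#include <vector>

int main()
{
    using dd = migraphx::shape::dynamic_dimension;

    // batch dimension ranges from 1 to 16 with optimals {1, 8}; the other dims are fixed
    std::vector<dd> dims = {dd{1, 16, {1, 8}}, dd{3, 3}, dd{224, 224}, dd{224, 224}};
    migraphx::shape s{migraphx::shape::float_type, dims};

    return s.dynamic() ? 0 : 1; // dynamic() reports whether any dimension is non-fixed
}
```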
-
- 03 Apr, 2023 1 commit
-
-
shivadbhavsar authored
-
- 28 Feb, 2023 1 commit
-
-
Charlie Lin authored
Creates the select_module operator, which selects one of the submodules passed to it to run based on the submodule parameters. A submodule is selected when the static shapes of the arguments to select_module exactly match the parameters of that submodule.
-
- 15 Feb, 2023 1 commit
-
-
Brian Pickrell authored
Add dynamic shape support to the slice operator. The first draft of this feature doesn't support slicing non-fixed, dynamic axes; the resulting shape in such cases is not guaranteed. Also, ONNX parsing doesn't support any arguments other than "axes".
-
- 11 Feb, 2023 1 commit
-
-
Brian Pickrell authored
* Add dynamic shape support to the concat operator. Includes new op_shape_test and ref_ops_test cases.
-
- 10 Feb, 2023 1 commit
-
-
Brian Pickrell authored
Dynamic shape support for the Where operator. Includes shape test, ref_ops test, and onnx_test cases.
-
- 03 Feb, 2023 1 commit
-
-
Brian Pickrell authored
* Implement dynamic shapes for scatterND operators.
-
- 02 Feb, 2023 1 commit
-
-
Brian Pickrell authored
Dynamic shape support for gathernd op.
-
- 30 Jan, 2023 1 commit
-
-
Brian Pickrell authored
Dynamic shape support for gather op.
-
- 17 Jan, 2023 2 commits
-
-
Charlie Lin authored
Extends reshape to handle the case of a single non-fixed dynamic_dimension
-
Charlie Lin authored
Extends the pad operator to handle dynamic input shapes. Only handles computing the shape for adding constant padding to a dynamic shape: the padding is added to the min, max, and opt values (unless opt is 0, in which case it stays 0). Does not handle reflect padding with dynamic shapes.
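A sketch of constant padding on a dynamically shaped input. The "pads" and "value" attributes follow the existing pad op; the min/max/opt bookkeeping described above happens inside the shape computation and is not spelled out here:

```cpp
#include <migraphx/program.hpp>
#include <migraphx/module.hpp>
#include <migraphx/make_op.hpp>
#include <migraphx/shape.hpp>
#include <vector>

int main()
{
    migraphx::program p;
    auto* mm = p.get_main_module();

    using dd = migraphx::shape::dynamic_dimension;
    std::vector<dd> dims = {dd{1, 4, {2}}, dd{3, 3}}; // {batch 1..4 (opt 2), 3}
    auto x = mm->add_parameter("x", migraphx::shape{migraphx::shape::float_type, dims});

    // pad one element before and after the last axis; dims become {1..4, 5}
    auto y = mm->add_instruction(
        migraphx::make_op("pad", {{"pads", {0, 1, 0, 1}}, {"value", 0.0f}}), x);
    mm->add_return({y});
    return 0;
}
```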
-
- 04 Jan, 2023 1 commit
-
-
Brian Pickrell authored
Implements dynamic shapes in reduce_op and all its child operator classes (reduce_max etc.)
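As a rough illustration (not from the commit), a reduction over a dynamically shaped input might be built like this; only the shape computation differs from the static case, and the "axes" attribute is the usual one:

```cpp
#include <migraphx/program.hpp>
#include <migraphx/module.hpp>
#include <migraphx/make_op.hpp>
#include <migraphx/shape.hpp>
#include <vector>

int main()
{
    migraphx::program p;
    auto* mm = p.get_main_module();

    using dd = migraphx::shape::dynamic_dimension;
    std::vector<dd> dims = {dd{1, 4}, dd{6, 6}}; // dynamic batch, 6 columns
    auto x = mm->add_parameter("x", migraphx::shape{migraphx::shape::float_type, dims});

    // reduce over axis 1; the batch dimension stays dynamic in the output shape
    auto y = mm->add_instruction(migraphx::make_op("reduce_max", {{"axes", {1}}}), x);
    mm->add_return({y});
    return 0;
}
```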
-
- 08 Dec, 2022 3 commits
-
-
Charlie Lin authored
Extends the MIGraphX dot operator to handle dynamic input shapes. Only allows dot between two dynamic shapes that have exactly matching outer dimensions; the inner dimensions must also match correspondingly. Updates dot-related tests. Changes check_shapes to use shape.ndim(). ONNX parsers for GEMM and MatMul will be updated in a separate PR.
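A minimal sketch of a dot between two dynamically shaped arguments whose outer (batch) dimensions match exactly, as the shape rules above require:

```cpp
#include <migraphx/program.hpp>
#include <migraphx/module.hpp>
#include <migraphx/make_op.hpp>
#include <migraphx/shape.hpp>
#include <vector>

int main()
{
    migraphx::program p;
    auto* mm = p.get_main_module();

    using dd = migraphx::shape::dynamic_dimension;
    std::vector<dd> a_dims = {dd{1, 4}, dd{6, 6}, dd{5, 5}}; // {batch 1..4, 6, 5}
    std::vector<dd> b_dims = {dd{1, 4}, dd{5, 5}, dd{7, 7}}; // same batch range, 5x7

    auto a = mm->add_parameter("a", migraphx::shape{migraphx::shape::float_type, a_dims});
    auto b = mm->add_parameter("b", migraphx::shape{migraphx::shape::float_type, b_dims});

    // result shape is {1..4, 6, 7}, computed at compile time as a dynamic shape
    auto c = mm->add_instruction(migraphx::make_op("dot"), a, b);
    mm->add_return({c});
    return 0;
}
```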
-
Charlie Lin authored
No major changes required; use dyn_output and pass the dynamic shape when calling compute_shape(). Adds dynamic shape tests.
-
Charlie Lin authored
Changes flatten's compute_shape() to handle dynamic shapes. Calculates the flattened shape from the min, max, and opt values.
-
- 07 Dec, 2022 1 commit
-
-
Charlie Lin authored
Extends the Argmax operator to handle dynamic input shapes. Only the shape function changes.
-
- 06 Dec, 2022 1 commit
-
-
Charlie Lin authored
Extends unsqueeze and squeeze to work with dynamic input shapes. Does not handle the steps parameter. Adds some additional negative-axes shape tests.
-
- 02 Dec, 2022 1 commit
-
-
Charlie Lin authored
Extends the pooling operators for dynamic shape inputs: AveragePooling, GlobalAveragePooling, MaxPooling, GlobalMaxPooling, LpNormPooling, GlobalLpNormPooling.
-
- 28 Nov, 2022 1 commit
-
-
Charlie Lin authored
Extends the ref transpose operator for dynamic shapes. Makes dynamic test naming more consistent.
-
- 17 Nov, 2022 1 commit
-
-
Charlie Lin authored
Extends the ref contiguous operator to handle dynamic shapes. Updates the eliminate_contiguous pass to use the dyn_output struct.
-
- 13 Nov, 2022 1 commit
-
-
Charlie Lin authored
Updated the multibroadcast op to have a two-input version for dynamic shapes. Current dynamic shape broadcasting logic: the dynamic_dimensions must be the same, or one of them must be {1, 1, 0} or {1, 1, 1}. Works for dyn-dyn, dyn-static, and static-static shape combinations. Changed common.cpp to multibroadcast binary ops with dynamic shapes. Extended binary.hpp for dynamic shapes to test the new common.cpp changes.
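A sketch of the two-input multibroadcast form for dynamic shapes, where the argument is broadcast against another shape instead of a fixed "out_lens" attribute; the exact insertion logic lives in common.cpp and this is only an assumed usage pattern:

```cpp
#include <migraphx/program.hpp>
#include <migraphx/module.hpp>
#include <migraphx/make_op.hpp>
#include <migraphx/shape.hpp>
#include <vector>

int main()
{
    migraphx::program p;
    auto* mm = p.get_main_module();

    using dd = migraphx::shape::dynamic_dimension;
    std::vector<dd> x_dims = {dd{1, 4}, dd{8, 8}}; // dynamic batch, 8 columns
    std::vector<dd> y_dims = {dd{1, 1}, dd{8, 8}}; // broadcastable first dimension

    auto x = mm->add_parameter("x", migraphx::shape{migraphx::shape::float_type, x_dims});
    auto y = mm->add_parameter("y", migraphx::shape{migraphx::shape::float_type, y_dims});

    // broadcast y against x's dynamic shape, then use it in a binary op
    auto yb  = mm->add_instruction(migraphx::make_op("multibroadcast"), y, x);
    auto sum = mm->add_instruction(migraphx::make_op("add"), x, yb);
    mm->add_return({sum});
    return 0;
}
```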
-
- 13 Oct, 2022 2 commits
-
-
Charlie Lin authored
Removes use_dynamic_same_auto_pad. Changes padding_mode to be used for dynamic padding. Moves compute_padded_shape to pad_calc.cpp, as it will be used in other dynamic padding cases. Fixes a same_lower compute_padded_shape bug and adds a test.
-
Charlie Lin authored
Rewrites the TF batch-norm-like operators in terms of other MIGraphX operators. Removes the code related to batch_norm_inference.
-
- 23 Aug, 2022 1 commit
-
-
Charlie Lin authored
Has the NMS op output a dynamic shape (ONNX spec behavior). Allows a dynamic input shape to the NMS op.
-