1. 31 Oct, 2023 1 commit
  2. 30 Oct, 2023 1 commit
  3. 17 Oct, 2023 1 commit
  4. 28 Sep, 2023 1 commit
  5. 27 Sep, 2023 1 commit
    • Modify reshapes (#2099) · 7e5ccd4b
      Ted Themistokleous authored
      Modifies reshape to use reshape_lazy for the aliasing case and reshape for a copying reshape operation, eliminating the need to insert contiguous (a sketch of the distinction follows below).
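      A minimal sketch of the distinction in plain C++, not the MIGraphX operator API (the names below are illustrative): an aliasing reshape only reinterprets the existing buffer, which is valid when the input is packed, while a copying reshape materializes a new buffer and works for any layout.
      ```cpp
      #include <cstddef>
      #include <utility>
      #include <vector>

      // Aliasing "reshape": reinterpret the same storage with new dimensions.
      // Only valid when the input layout is contiguous (standard/packed).
      struct tensor_view
      {
          float* data;                   // borrowed pointer, no copy
          std::vector<std::size_t> dims; // new logical dimensions
      };

      tensor_view reshape_lazy_like(float* data, std::vector<std::size_t> new_dims)
      {
          return {data, std::move(new_dims)};
      }

      // Copying "reshape": materialize the elements into a fresh buffer,
      // which works for any input layout but costs a copy.
      std::vector<float> reshape_copy_like(const float* data, std::size_t elements)
      {
          return std::vector<float>(data, data + elements);
      }
      ```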
  6. 16 Sep, 2023 1 commit
    • `fill` ref operator (#2087) · 0da1037f
      Charlie Lin authored
      Implements a fill operator that sets every value in an output buffer to a given value (semantics sketched below).
      Will be used when parsing ONNX ConstantOfShape.
      Can also be used when a buffer needs to be filled with a value that is only determined at runtime.
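      A rough sketch of the semantics only, not the operator's actual attributes or argument list: every element of a pre-allocated output buffer is overwritten with a scalar that may only be known at runtime.
      ```cpp
      #include <algorithm>
      #include <vector>

      // Fill semantics: overwrite every element of an output buffer with one value.
      // The value may only be known at runtime (e.g. taken from another argument).
      void fill_like(std::vector<float>& output, float value)
      {
          std::fill(output.begin(), output.end(), value);
      }

      int main()
      {
          std::vector<float> buffer(12); // e.g. allocated for a shape of 12 elements
          fill_like(buffer, 3.5f);       // ConstantOfShape would supply its "value" here
      }
      ```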
  7. 14 Sep, 2023 1 commit
    • added rand_uniform operation closes #1958 (#2051) · fbd12bd3
      Brian Pickrell authored
      New op that populates a shape with random numbers drawn from a uniform distribution. The rand_uniform op can implement the ONNX RandomUniform instruction, and can also create the random number sequence necessary to implement Multinomial. (At this time, our ONNX Multinomial parsing generates a random sequence of numbers at parse time as a workaround, so that the resulting program uses the same "random" set every time.)
      
      Arguments: shape, seed. Shape is required and can be static or dynamic. Seed is still optional in this version; if it is not given at inference time, the value of the seed creation attribute is used. Update: the boolean use_auto_seed, which caused any given seed to be ignored, has been deleted. A sketch of the seed handling follows below.
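      A minimal sketch of the seed handling as described above, assuming nothing about the real operator beyond the description: a seed supplied at inference time overrides the seed attribute fixed when the op was created.
      ```cpp
      #include <cstdint>
      #include <optional>
      #include <random>
      #include <vector>

      // Fill `out` with uniform random values in [0, 1).
      // `runtime_seed` (an optional input) takes precedence over `attr_seed`
      // (the seed attribute fixed when the op was created).
      void rand_uniform_like(std::vector<float>& out,
                             std::uint64_t attr_seed,
                             std::optional<std::uint64_t> runtime_seed = std::nullopt)
      {
          std::mt19937_64 gen(runtime_seed.value_or(attr_seed));
          std::uniform_real_distribution<float> dist(0.0f, 1.0f);
          for(auto& v : out)
              v = dist(gen);
      }
      ```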
  8. 10 Sep, 2023 1 commit
    • Dynamic allocate (#2079) · ede8bfa6
      Charlie Lin authored
      Makes a version of allocate that takes in dimensions at runtime and allocates a buffer of that size (sketched below).
      A simplify_dynamic_ops compiler pass that will use the use_shape_attr flag is planned next.
      The ONNX op ConstantOfShape needs the buffer to be filled with a specific value, so a fill operator will be added in another PR.
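      A bare-bones sketch of the idea, not the MIGraphX operator interface: the buffer size is the product of dimensions that are only known when the program runs.
      ```cpp
      #include <cstddef>
      #include <functional>
      #include <numeric>
      #include <vector>

      // Allocate a float buffer whose size is the product of runtime dimensions.
      std::vector<float> allocate_like(const std::vector<std::size_t>& dims)
      {
          std::size_t elements =
              std::accumulate(dims.begin(), dims.end(), std::size_t{1}, std::multiplies<>{});
          return std::vector<float>(elements); // zero-initialized; a fill op can overwrite this
      }
      ```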
  9. 29 Aug, 2023 1 commit
    • Fix dyn pooling (#1768) · 7b8a28f5
      Brian Pickrell authored
      Adds support for dynamic input shapes in the pooling operator along with auto-padding. This combination means that the padding (and therefore the output shape) cannot be computed until runtime; see the sketch below.
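      A small sketch of why the output shape is runtime-only, assuming ONNX-style SAME_UPPER auto-padding (this follows the general ONNX convention, not code from this PR; dilation is omitted for brevity): the total padding depends on the concrete input length, which a dynamic dimension only provides at runtime.
      ```cpp
      #include <cstddef>

      // ONNX SAME_UPPER-style auto-padding for one spatial dimension:
      // the output length is ceil(in / stride), and the padding needed to
      // achieve that depends on the concrete input length.
      struct padded_dim
      {
          std::size_t output;
          std::size_t pad_total;
      };

      padded_dim same_upper(std::size_t in, std::size_t kernel, std::size_t stride)
      {
          std::size_t output    = (in + stride - 1) / stride;
          std::size_t needed    = (output - 1) * stride + kernel;
          std::size_t pad_total = needed > in ? needed - in : 0;
          return {output, pad_total};
      }
      ```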
  10. 18 Aug, 2023 2 commits
  11. 31 Jul, 2023 1 commit
    • Lw/fix half shape (#2000) · e4dc75ea
      Lakhinder Walia authored
      * Use the shape of the instruction (instead of a default) in add_return()
      * Instruction validation fix: do not use a default shape value for comparison
      * Fix instruction::replace() to recompute the shape for "@return"
      * Handle the case of a missing shape in an instruction-related test
      * Use compute_shape() to get op shapes + test case for tuple_type
      * Add test case shape_test/return_shape_tuple
      * Add a test for @return to check for the half type
      * Move the @return unit tests around; address review comments
      * Broken comparison fix: comparison to a (default) shape of tuple_type
      * Test cases: (add) return_shape_empty & (modify) return_shape_tuple
      * Modify the assert() statement
  12. 25 Jul, 2023 1 commit
  13. 23 Jul, 2023 1 commit
  14. 13 Jul, 2023 1 commit
    • Update deconvolution -> convolution_backwards and Dynamic Shape Support (#1801) · 4edf1195
      Charlie Lin authored
      Renames deconvolution -> convolution_backwards to be more consistent with the literature.
      Note: this is not the cross-correlation operator (which is the adjoint of convolution). It is technically a standard convolution operator combined with an upsampling operator rather than a downsampling operator (output-size sketch below).
      Adds unit tests for the padding, strides, dilations, and other op attributes.
      Throws on the auto_pad attribute since it has not been implemented; previously the attribute was read and set but never used.
      Extended for dynamic shapes.
      Does not support asymmetric padding (padding_L != padding_R) or output_shape together with dynamic shapes.
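      For reference, a sketch of the per-dimension output length usually associated with a backwards (transposed) convolution, following the standard ConvTranspose convention with symmetric padding and no output_padding; this is background, not code from the PR.
      ```cpp
      #include <cstddef>

      // Output length of one spatial dimension for a backwards (transposed)
      // convolution, using the usual ConvTranspose convention with
      // symmetric padding and no output_padding.
      std::size_t conv_backwards_dim(std::size_t in,
                                     std::size_t kernel,
                                     std::size_t stride,
                                     std::size_t pad,
                                     std::size_t dilation)
      {
          return (in - 1) * stride - 2 * pad + dilation * (kernel - 1) + 1;
      }
      ```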
  15. 16 Jun, 2023 1 commit
  16. 15 Jun, 2023 1 commit
    • fix parse_instancenorm to create broadcast and multibroadcast instruc… (#1715) · 41ba30d5
      Brian Pickrell authored
      * Fix parse_instancenorm to create broadcast and multibroadcast instructions with two dynamic shape arguments instead of one. Their make_op() functions don't support dynamic shapes when called with one input, which caused an error when parsing an ONNX 3duunet model (the instance-norm math is sketched after this list)
      * Use add_common_op() to create the multibroadcast op
      * Add verification and parsing tests for instance_norm with dynamic input. Parse test doesn't pass
      * Fix for test; still doesn't pass
      * Another fix for test; still doesn't pass
      * Work in progress: instance_norm_dyn_batch_test works but instance_norm_test doesn't
      * Fix ONNX instancenorm tests to match parser changes. Passes all check tests
      * Updated comments explaining usage of add_common_op()
      * Hand-merged conflicts with develop
      * Fix instance_norm_half_test after merge
      * Add ONNX test instance_norm_dyn_batch_half_test
      * Add shape test cases broadcast_1in_dyn_error and multibroadcast_1in_dyn_error_0
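      For context, ONNX InstanceNormalization normalizes each channel of each sample over its spatial elements, with per-channel scale and bias, which is why those 1-D inputs have to be broadcast against the (possibly dynamic) input shape. A plain-C++ sketch of the per-channel math, independent of the parser changes above:
      ```cpp
      #include <cmath>
      #include <vector>

      // Normalize one channel of one sample:
      // y = scale * (x - mean) / sqrt(var + eps) + bias
      void instance_norm_channel(std::vector<float>& x, float scale, float bias, float eps = 1e-5f)
      {
          float mean = 0.0f;
          for(float v : x)
              mean += v;
          mean /= static_cast<float>(x.size());

          float var = 0.0f;
          for(float v : x)
              var += (v - mean) * (v - mean);
          var /= static_cast<float>(x.size());

          for(float& v : x)
              v = scale * (v - mean) / std::sqrt(var + eps) + bias;
      }
      ```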
  17. 12 Jun, 2023 1 commit
  18. 17 May, 2023 1 commit
    • scalar unsqueeze broadcast support (#1753) · 2140fe19
      shivadbhavsar authored
      Adds support for broadcasted scalars to the unsqueeze op.
      
      Specifying steps other than 1 is disallowed in this implementation since we want the output to always be a tensor. We can support varying step sizes if we allow a broadcasted scalar output from this op. The basic axes-insertion rule is sketched below.
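      A small sketch of the basic unsqueeze shape rule, assuming the usual convention of inserting size-1 dimensions at the given axes (the steps attribute mentioned above is not modeled; the helper name is illustrative).
      ```cpp
      #include <algorithm>
      #include <cstddef>
      #include <cstdint>
      #include <vector>

      // Insert size-1 dimensions at the given non-negative axes
      // (axes refer to positions in the output rank).
      std::vector<std::size_t> unsqueeze_dims(std::vector<std::size_t> dims,
                                              std::vector<std::int64_t> axes)
      {
          std::sort(axes.begin(), axes.end());
          for(auto axis : axes)
              dims.insert(dims.begin() + axis, 1);
          return dims;
      }

      // Example: unsqueeze_dims({3, 4}, {0, 3}) -> {1, 3, 4, 1}
      ```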
  19. 18 Apr, 2023 1 commit
  20. 07 Apr, 2023 1 commit
  21. 04 Apr, 2023 1 commit
    • Refactor dynamic_dimension to have multiple optimals (#1625) · e7ec374f
      Charlie Lin authored
      Makes the optimals into a std::set<std::size_t> (see the sketch below).
      Changes shape object functions to handle the opts change.
      Changes convolution, flatten, and pooling so that they no longer calculate the output optimal dimensions and instead return empty opts. This will need to change in the future if we want to support dynamic shapes fully.
      Many changes to tests and shape calls with respect to the new optimals.
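      A simplified standalone sketch of the data layout implied by the description (member names assumed, not the real class definition): a dynamic dimension ranges over [min, max] and may now carry several "optimal" sizes.
      ```cpp
      #include <cstddef>
      #include <set>

      // Simplified sketch: a dynamic dimension ranges over [min, max]
      // and may carry several "optimal" sizes, now stored as a set.
      struct dynamic_dimension
      {
          std::size_t min;
          std::size_t max;
          std::set<std::size_t> optimals; // previously a single optimal value

          bool is_fixed() const { return min == max; }
      };

      // Example: a batch dimension between 1 and 64 with preferred sizes 1, 8 and 32.
      // dynamic_dimension batch{1, 64, {1, 8, 32}};
      ```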
  22. 03 Apr, 2023 1 commit
  23. 28 Feb, 2023 1 commit
    • Select module op (#1569) · a63ee2e0
      Charlie Lin authored
      Creates the select_module operator, which selects one of the submodules passed to it to run based on the submodule parameters. A submodule is selected when the static shapes of the arguments to select_module exactly match the shapes of that submodule's parameters (matching rule sketched below).
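      A rough sketch of the selection rule described above, using plain containers instead of MIGraphX shapes and modules (the helper name is illustrative).
      ```cpp
      #include <cstddef>
      #include <optional>
      #include <vector>

      using dims = std::vector<std::size_t>;

      // Pick the index of the first candidate whose parameter shapes exactly
      // match the static shapes of the arguments, if any.
      std::optional<std::size_t>
      select_submodule(const std::vector<dims>& arg_shapes,
                       const std::vector<std::vector<dims>>& submodule_param_shapes)
      {
          for(std::size_t i = 0; i < submodule_param_shapes.size(); ++i)
          {
              if(submodule_param_shapes[i] == arg_shapes)
                  return i;
          }
          return std::nullopt;
      }
      ```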
  24. 15 Feb, 2023 1 commit
    • Dyn slice (#1503) · 102c6bdb
      Brian Pickrell authored
      Add dynamic shape support to the slice operator.
      
      The first draft of this feature does not support slicing non-fixed dynamic axes; the resulting shape in such cases is not guaranteed. Also, ONNX parsing does not support any arguments other than "axes".
  25. 11 Feb, 2023 1 commit
  26. 10 Feb, 2023 1 commit
  27. 03 Feb, 2023 1 commit
  28. 02 Feb, 2023 1 commit
  29. 30 Jan, 2023 1 commit
  30. 17 Jan, 2023 2 commits
    • Dynamic ref reshape (one non-fixed case) (#1500) · 3f49f8eb
      Charlie Lin authored
      Extends reshape to handle the case of a single non-fixed dynamic_dimension (sketch below).
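      A generic sketch of the kind of inference involved, in the spirit of reshape's familiar "-1" rule rather than this PR's exact code: the single non-fixed dimension is whatever remains after dividing the total element count by the product of the fixed dimensions.
      ```cpp
      #include <cstdint>
      #include <vector>

      // Infer a single unknown output dimension (marked as -1) from the total
      // element count, in the spirit of reshape's usual "-1" rule.
      std::vector<std::int64_t> infer_reshape_dims(std::vector<std::int64_t> out_dims,
                                                   std::int64_t total_elements)
      {
          std::int64_t known    = 1;
          std::int64_t* unknown = nullptr;
          for(auto& d : out_dims)
          {
              if(d == -1)
                  unknown = &d; // the one non-fixed dimension
              else
                  known *= d;
          }
          if(unknown != nullptr)
              *unknown = total_elements / known;
          return out_dims;
      }

      // Example: infer_reshape_dims({2, -1, 4}, 24) -> {2, 3, 4}
      ```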
    • Dynamic ref pad (#1487) · 8202e411
      Charlie Lin authored
      Extends the pad operator to handle dynamic input shapes (sketch below).
      Only handles computing the shape for adding constant padding to a dynamic shape:
      - adds the padding to the min, max, and opt values (unless opt is 0, in which case it stays 0)
      - does not handle reflect padding with dynamic shapes
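      A tiny sketch of the shape rule described above, using an illustrative struct with a single opt value per dynamic dimension (not MIGraphX's real types).
      ```cpp
      #include <cstddef>

      // Simplified view of a dynamic dimension: [min, max] plus one opt value.
      struct dyn_dim
      {
          std::size_t min;
          std::size_t max;
          std::size_t opt; // 0 means "no optimal given"
      };

      // Constant padding adds (pad_before + pad_after) to min, max, and opt,
      // leaving an opt of 0 untouched.
      dyn_dim pad_dyn_dim(dyn_dim d, std::size_t pad_before, std::size_t pad_after)
      {
          std::size_t total = pad_before + pad_after;
          d.min += total;
          d.max += total;
          if(d.opt != 0)
              d.opt += total;
          return d;
      }
      ```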
  31. 04 Jan, 2023 1 commit
  32. 08 Dec, 2022 3 commits
    • Dynamic ref dot operator (#1457) · d411aa69
      Charlie Lin authored
      Extends the MIGraphX dot operator to handle dynamic input shapes (outer-dimension check sketched below).
      Only allows dot between two dynamic shapes that have exactly matching outer dimensions.
      Inner dimensions must also match correspondingly.
      Updates dot-related tests.
      Changes check_shapes to use shape.ndim().
      ONNX parsers for GEMM and MatMult will be updated in a separate PR.
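      A sketch of the compatibility check described above, on plain dimension vectors rather than MIGraphX shapes (the helper name is illustrative): all outer/batch dimensions must be identical and the contracted inner dimensions must agree.
      ```cpp
      #include <cstddef>
      #include <vector>

      // For a dot of A[..., m, k] with B[..., k, n]:
      // every outer (batch) dimension must match exactly, and A's last
      // dimension must equal B's second-to-last dimension.
      bool dot_shapes_compatible(const std::vector<std::size_t>& a,
                                 const std::vector<std::size_t>& b)
      {
          if(a.size() != b.size() or a.size() < 2)
              return false;
          for(std::size_t i = 0; i < a.size() - 2; ++i)
          {
              if(a[i] != b[i])
                  return false;
          }
          return a[a.size() - 1] == b[b.size() - 2];
      }
      ```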
    • Dynamic reference Softmax (#1475) · 8e7d2efe
      Charlie Lin authored
      No major changes required; uses dyn_output and passes the dynamic shape when calling compute_shape().
      Adds dynamic shape tests
    • Dynamic ref flatten (#1482) · 4c32afcc
      Charlie Lin authored
      Changes flatten's compute_shape() to handle dynamic shapes (sketched below).
      Calculates the flattened shape from the min, max, and opt values.
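      A sketch of the flatten shape rule; for a dynamic shape the same computation is applied to the min extents and to the max extents to obtain the flattened dimension ranges (illustrative helper, not the PR's code).
      ```cpp
      #include <cstddef>
      #include <vector>

      // Flatten dims into {prod(dims[0:axis]), prod(dims[axis:])}.
      // For a dynamic shape, apply this to the min extents and to the max
      // extents separately to get the flattened dimension ranges.
      std::vector<std::size_t> flatten_dims(const std::vector<std::size_t>& dims,
                                            std::size_t axis)
      {
          std::size_t lead = 1;
          std::size_t rest = 1;
          for(std::size_t i = 0; i < dims.size(); ++i)
          {
              if(i < axis)
                  lead *= dims[i];
              else
                  rest *= dims[i];
          }
          return {lead, rest};
      }
      ```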
  33. 07 Dec, 2022 1 commit
  34. 06 Dec, 2022 1 commit
  35. 02 Dec, 2022 1 commit
    • Dynamic ref pooling (#1449) · 0e40ebaa
      Charlie Lin authored
      Extends the pooling operators for dynamic shape inputs
      
      AveragePooling
      GlobalAveragePooling
      MaxPooling
      GlobalMaxPooling
      LpNormPooling
      GlobalLpNormPooling
  36. 28 Nov, 2022 1 commit