1. 08 Dec, 2022 1 commit
    • Dynamic ref flatten (#1482) · 4c32afcc
      Charlie Lin authored
      Changes flatten's compute_shape() to handle dynamic shapes
      Calculates the flattened shape from the min, max, and opt values of each dynamic dimension
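      A minimal sketch of the bounds arithmetic this describes (the dyn_dimension struct and its {min, max, opt} semantics are assumed here for illustration; this is not the actual MIGraphX code):

      ```cpp
      #include <cstddef>
      #include <iostream>
      #include <vector>

      // Hypothetical stand-in for a dynamic dimension: min/max bounds plus an
      // optimum value (0 taken to mean "no optimum given").
      struct dyn_dimension
      {
          std::size_t min, max, opt;
      };

      // Sketch of a dynamic flatten shape computation: dimensions before `axis`
      // collapse into the first output dimension and the rest into the second,
      // multiplying the min, max, and opt bounds independently.
      std::vector<dyn_dimension>
      flatten_dyn_shape(const std::vector<dyn_dimension>& dims, std::size_t axis)
      {
          std::vector<dyn_dimension> out{{1, 1, 1}, {1, 1, 1}};
          for(std::size_t i = 0; i < dims.size(); ++i)
          {
              auto& d = out[i < axis ? 0 : 1];
              d.min *= dims[i].min;
              d.max *= dims[i].max;
              d.opt *= dims[i].opt;
          }
          return out;
      }

      int main()
      {
          // Dynamic batch {1, 4, 2} with fixed 3 x 8 x 8 dims, flattened at axis 1:
          auto out = flatten_dyn_shape({{1, 4, 2}, {3, 3, 3}, {8, 8, 8}, {8, 8, 8}}, 1);
          for(const auto& d : out)
              std::cout << "{" << d.min << ", " << d.max << ", " << d.opt << "} ";
          // prints: {1, 4, 2} {192, 192, 192}
      }
      ```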
  2. 07 Dec, 2022 4 commits
  3. 06 Dec, 2022 1 commit
  4. 05 Dec, 2022 4 commits
  5. 02 Dec, 2022 2 commits
  6. 01 Dec, 2022 1 commit
  7. 28 Nov, 2022 1 commit
  8. 17 Nov, 2022 1 commit
  9. 13 Nov, 2022 1 commit
    • Dyn ref multibroadcast; dyn binary (#1423) · d73c6d7c
      Charlie Lin authored
      Updates the multibroadcast op with a two-input version for dynamic shapes
      Current dynamic shape broadcasting logic: corresponding dynamic_dimensions must be equal, or one of them must be {1, 1, 0} or {1, 1, 1} (sketched after this entry)
      Works for dyn-dyn, dyn-static, and static-static shape combinations
      Changes common.cpp so multibroadcast is applied to binary ops with dynamic shapes
      Extends binary.hpp to dynamic shapes to exercise the new common.cpp logic
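      A rough sketch of the compatibility rule above (simplified types assumed; not the actual common.cpp code): two dynamic dimensions broadcast together if they are equal, or if one of them is a broadcastable 1, modeled here as {1, 1, 0} or {1, 1, 1}:

      ```cpp
      #include <cstddef>
      #include <iostream>
      #include <stdexcept>

      // Hypothetical dynamic dimension: min/max bounds and an optimum
      // (opt == 0 taken to mean "none given").
      struct dyn_dimension
      {
          std::size_t min, max, opt;
          bool operator==(const dyn_dimension& o) const
          {
              return min == o.min && max == o.max && opt == o.opt;
          }
      };

      // A dimension fixed at 1 ({1, 1, 0} or {1, 1, 1}) broadcasts against anything.
      bool is_broadcastable_one(const dyn_dimension& d)
      {
          return d.min == 1 && d.max == 1 && d.opt <= 1;
      }

      // Per-dimension rule: equal dimensions pass through; otherwise one side
      // must be a broadcastable 1, and the other side's bounds win.
      dyn_dimension broadcast_dim(const dyn_dimension& a, const dyn_dimension& b)
      {
          if(a == b)
              return a;
          if(is_broadcastable_one(a))
              return b;
          if(is_broadcastable_one(b))
              return a;
          throw std::runtime_error("dynamic dimensions are not broadcast-compatible");
      }

      int main()
      {
          auto d = broadcast_dim({1, 1, 0}, {2, 8, 4});
          std::cout << d.min << " " << d.max << " " << d.opt << "\n"; // prints: 2 8 4
      }
      ```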
  10. 08 Nov, 2022 1 commit
  11. 07 Nov, 2022 1 commit
  12. 03 Nov, 2022 1 commit
  13. 02 Nov, 2022 1 commit
  14. 01 Nov, 2022 1 commit
  15. 31 Oct, 2022 2 commits
  16. 27 Oct, 2022 1 commit
  17. 26 Oct, 2022 1 commit
  18. 24 Oct, 2022 1 commit
  19. 19 Oct, 2022 1 commit
    • Refactor dynamic compute; Dynamic ref unary functions (#1407) · 693cb5d8
      Charlie Lin authored
      Refactor dynamic compute
      - add a compute_output_shape object that implicitly converts to a new dyn_output or shape object
      - dyn_output object handles computing the static output shape of an operator from the input arguments' shapes
      - change an operator's compute function signature to compute(const dyn_output& dyn_out, std::vector<argument> args) so it uses the dyn_output object (see the sketch after this entry)
      
      Dynamic ref unary functions
      -  Included these changes as an example of the refactored dynamic compute in use
      -  Changes to the unary base class to handle dynamic shapes
      -  Changed elu and leaky_relu to use the unary base class and the pointwise JIT
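      A simplified illustration of the refactored calling convention (the shape, argument, and dyn_output types below are stand-ins, not the real MIGraphX classes): the output shape is resolved once and handed to compute(), so the operator body no longer re-derives it:

      ```cpp
      #include <cstddef>
      #include <functional>
      #include <iostream>
      #include <numeric>
      #include <vector>

      // Hypothetical stand-ins for migraphx::shape and migraphx::argument.
      struct shape
      {
          std::vector<std::size_t> lens;
          std::size_t elements() const
          {
              return std::accumulate(lens.begin(), lens.end(), std::size_t{1},
                                     std::multiplies<>{});
          }
      };
      struct argument
      {
          shape s;
          std::vector<float> data;
      };

      // A dyn_output-like carrier: the static output shape is resolved once from
      // the runtime input shapes, before the operator's compute() runs.
      struct dyn_output
      {
          shape computed_shape;
      };

      // The operator's compute() receives the resolved shape plus the inputs,
      // instead of re-deriving the output shape itself.
      argument relu_compute(const dyn_output& dyn_out, std::vector<argument> args)
      {
          argument result{dyn_out.computed_shape,
                          std::vector<float>(dyn_out.computed_shape.elements())};
          for(std::size_t i = 0; i < result.data.size(); ++i)
              result.data[i] = args[0].data[i] > 0 ? args[0].data[i] : 0.0f;
          return result;
      }

      int main()
      {
          argument in{{{2, 2}}, {-1.f, 2.f, -3.f, 4.f}};
          auto out = relu_compute({in.s}, {in});
          for(float v : out.data)
              std::cout << v << " "; // prints: 0 2 0 4
      }
      ```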
  20. 14 Oct, 2022 1 commit
  21. 13 Oct, 2022 1 commit
    • Refactor dynamic padding mode (#1387) · 32f6388c
      Charlie Lin authored
      Removes use_dynamic_same_auto_pad
      Changes padding_mode to be used for dynamic padding
      Moves compute_padded_shape to pad_calc.cpp, as it will be used in other dynamic padding cases
      Fixes a same_lower bug in compute_padded_shape and adds a test
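      For context, a hedged sketch of the standard SAME padding arithmetic a compute_padded_shape-style helper performs (the general ONNX-style formula, not necessarily the exact pad_calc.cpp code): total padding per axis is chosen so the output length equals ceil(input / stride), with same_lower placing the extra element before the data:

      ```cpp
      #include <cstddef>
      #include <iostream>
      #include <utility>

      // Standard SAME auto-padding arithmetic (ONNX-style), shown for one spatial
      // axis; returns {pad_before, pad_after}.
      std::pair<std::size_t, std::size_t>
      same_padding(std::size_t input, std::size_t kernel, std::size_t stride,
                   std::size_t dilation, bool same_lower)
      {
          std::size_t output     = (input + stride - 1) / stride; // ceil(input / stride)
          std::size_t eff_kernel = dilation * (kernel - 1) + 1;
          std::size_t needed     = (output - 1) * stride + eff_kernel;
          std::size_t total      = needed > input ? needed - input : 0;
          std::size_t small      = total / 2;
          std::size_t big        = total - small;
          // same_lower puts the larger half before the data; same_upper after it.
          return same_lower ? std::make_pair(big, small) : std::make_pair(small, big);
      }

      int main()
      {
          auto [before, after] = same_padding(5, 2, 1, 1, /*same_lower=*/true);
          std::cout << before << " " << after << "\n"; // prints: 1 0
      }
      ```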
  22. 11 Oct, 2022 1 commit
  23. 03 Oct, 2022 1 commit
  24. 30 Sep, 2022 1 commit
  25. 28 Sep, 2022 1 commit
  26. 27 Sep, 2022 1 commit
  27. 26 Sep, 2022 1 commit
    • Rewrite ONNX parse batch norm (#1362) · c00f8202
      Charlie Lin authored
      Rewrites the BatchNormalization ONNX operator in terms of other MIGraphX operators
      - Added handling of the 1D input tensor case (an edge case in the ONNX spec)
      Removes the spatial and per_activation functionality (not in the ONNX spec)
      - Did not remove the batch_norm_inference related code, as the TensorFlow parser still uses it
      - That code can be removed when the TF parser is updated
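      The rewrite follows the standard inference-time formula y = gamma * (x - mean) / sqrt(variance + epsilon) + beta, which decomposes into elementwise operators. A minimal numeric sketch (plain C++, not the parser code):

      ```cpp
      #include <cmath>
      #include <cstddef>
      #include <iostream>
      #include <vector>

      // Inference-time batch normalization written as elementwise arithmetic,
      // mirroring a decomposition into sub/div/mul/add style operators.
      std::vector<float> batch_norm(const std::vector<float>& x, float gamma,
                                    float beta, float mean, float variance,
                                    float epsilon = 1e-5f)
      {
          std::vector<float> y(x.size());
          for(std::size_t i = 0; i < x.size(); ++i)
              y[i] = gamma * (x[i] - mean) / std::sqrt(variance + epsilon) + beta;
          return y;
      }

      int main()
      {
          for(float v : batch_norm({0.f, 1.f, 2.f}, /*gamma=*/1.f, /*beta=*/0.f,
                                   /*mean=*/1.f, /*variance=*/1.f))
              std::cout << v << " "; // prints approximately: -1 0 1
      }
      ```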
  28. 16 Sep, 2022 1 commit
  29. 08 Sep, 2022 1 commit
  30. 23 Aug, 2022 1 commit
    • Dynamic ref NMS (#1288) · fa3c21fa
      Charlie Lin authored
      Makes the NMS op output a dynamic shape (the ONNX spec behavior)
      Allows a dynamic input shape to the NMS op
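      The output shape is dynamic because NMS keeps a data-dependent number of boxes; only an upper bound is known at compile time. An illustrative sketch of that shape reasoning (types and semantics assumed, not the MIGraphX implementation):

      ```cpp
      #include <cstddef>
      #include <iostream>

      // Hypothetical dynamic dimension (opt == 0 taken to mean "none given").
      struct dyn_dimension
      {
          std::size_t min, max, opt;
      };

      // ONNX NonMaxSuppression returns selected_indices of shape [num_selected, 3],
      // where num_selected is only known at run time; at compile time only an
      // upper bound can be stated, so the first dimension is dynamic.
      dyn_dimension nms_selected_dim(std::size_t batches, std::size_t classes,
                                     std::size_t max_output_boxes_per_class)
      {
          return {0, batches * classes * max_output_boxes_per_class, 0};
      }

      int main()
      {
          auto d = nms_selected_dim(1, 4, 10);
          std::cout << "{" << d.min << ", " << d.max << ", " << d.opt << "}\n";
          // prints: {0, 40, 0}
      }
      ```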
  31. 12 Aug, 2022 1 commit
  32. 08 Aug, 2022 1 commit
    • Imply type of literal returned based on input protobuf for zero elem… (#1326) · bb0e04ce
      Ted Themistokleous authored
      * Imply type of literal returned based on input protobuf for zero element constant values.
      
      This replaces the default behavior, where ONNX parsing assumes every zero-element constant value is float. We now take the relevant type information from the protobuf instead, so the data type checks for if/then/else blocks from ONNX won't fail.
      
      * Revert "Imply type of literal returned based on input protobuf for zero element constant values."
      
      This reverts commit 390bb853.
      
      * Add test case to parse an empty constant int64 protobuf
      
      I think the previous test case was masking an issue: we default to float, but we actually need to read in int64 instead of int32
      
      * fixup! Add test case to parse an empty constant int64 protobuf
      
      * Add test for a non-empty int64 scalar
      
      Add one item to the np array to use for the constant we're parsing in.
      
      * Draft partial fix
      
      * Fix test failures from the previous change, which reads protobuf data types correctly for empty constants.
      
      Reading in the correct types, rather than assuming empty constants default to float, broke some assumptions the code was making about empty literals.
      
      * Fix formatting and naming
      
      * Fix naming with var in constant_one_val_int64_test
      Co-authored-by: charlie <charlie.lin@amd.com>
      Co-authored-by: kahmed10 <15948690+kahmed10@users.noreply.github.com>
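      A hedged sketch of the fix's idea (heavily simplified; the real parser reads onnx::TensorProto, and the enum values below are the standard TensorProto data_type codes): when a constant has zero elements, take the element type from the protobuf's declared data_type rather than defaulting to float:

      ```cpp
      #include <stdexcept>

      // Simplified stand-ins for the ONNX TensorProto data_type values and the
      // parser's literal element type, for illustration only.
      enum class proto_type { undefined = 0, float32 = 1, int32 = 6, int64 = 7 };
      enum class literal_type { float_type, int32_type, int64_type };

      // Map the protobuf-declared element type to the literal type instead of
      // assuming float for zero-element constants.
      literal_type literal_type_from_proto(proto_type t)
      {
          switch(t)
          {
          case proto_type::float32: return literal_type::float_type;
          case proto_type::int32: return literal_type::int32_type;
          case proto_type::int64: return literal_type::int64_type;
          default: throw std::runtime_error("unhandled protobuf data type");
          }
      }

      int main()
      {
          // An empty int64 constant now yields an int64 literal instead of float.
          return literal_type_from_proto(proto_type::int64) == literal_type::int64_type
                     ? 0
                     : 1;
      }
      ```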