1. 15 Feb, 2023 3 commits
  2. 14 Feb, 2023 2 commits
    • Somehow this verify test works · 996426be
      charlie authored
      * Changed the allocates to occur in the submodules
        * Incomplete, as the use_local_alloc variable in module does not work properly
      * Added a hip::sync_stream before the return
      * Not sure why the hip::sync_stream gets rid of the dangling reference error (code-wise, it's because hip::sync_stream's output alias is -1)
    • Add serialization of tuples and optional types (#1495) · 41bf982d
      Paul Fultz II authored
      * Add serialization of tuples and optional types
  3. 13 Feb, 2023 1 commit
  4. 11 Feb, 2023 1 commit
  5. 10 Feb, 2023 1 commit
  6. 09 Feb, 2023 1 commit
  7. 08 Feb, 2023 1 commit
  8. 06 Feb, 2023 2 commits
  9. 03 Feb, 2023 2 commits
  10. 02 Feb, 2023 1 commit
  11. 01 Feb, 2023 1 commit
    • Parse if inline constant args (#1533) · ca15cd37
      Ted Themistokleous authored
      Allows migraphx to inline the IF operator when an IF can be evaluated at compile time, so that instead of injecting an IF, the instructions of the taken branch are inserted directly.
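As a sketch of the idea above (hypothetical Python, not MIGraphX's actual parser code; `inline_if` and the instruction representation are invented for illustration): when the condition is a compile-time constant, only the taken branch's instructions are emitted, and no IF node is created.

```python
def inline_if(cond, then_instrs, else_instrs):
    """If `cond` is a compile-time constant, splice in the taken branch's
    instructions directly instead of emitting an If operator."""
    if isinstance(cond, bool):  # condition known at parse time
        return list(then_instrs if cond else else_instrs)
    # Condition only known at runtime: keep the If node.
    return [("if", cond, list(then_instrs), list(else_instrs))]

# Constant condition: the If disappears and the branch is spliced in.
print(inline_if(True, ["add", "relu"], ["mul"]))          # ['add', 'relu']
# Runtime condition: an If node is kept.
print(inline_if("runtime_flag", ["add"], ["mul"])[0][0])  # 'if'
```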
  12. 31 Jan, 2023 2 commits
    • First pass on the operator · 0b0a6d4f
      charlie authored
      Only works if a submodule with the exact batch size is present;
      will need to make it assemble from other sizes later.
    • hipRTC fixes (#1531) · 91cc7242
      Umang Yadav authored
      Added the CMake flag MIGRAPHX_USE_HIPRTC for hipRTC.
      Added Jenkins stages for hipRTC.
      Fixed some of the pending hipRTC issues.
  13. 30 Jan, 2023 2 commits
  14. 24 Jan, 2023 2 commits
  15. 21 Jan, 2023 1 commit
  16. 17 Jan, 2023 4 commits
    • Dynamic ONNX Gemm (#1459) · 8b651eee
      Charlie Lin authored
      Extends the ONNX Gemm parser to handle dynamic input shapes.
      Limits ONNX Gemm parsing to 2D input tensors for the A and B inputs, as per the ONNX specification.
      Changed the Gemm ONNX tests to 2D input versions.
      Added onnx_verify tests for Gemm: parsing ONNX Gemm lowers to more than one operator, so these tests check that the result is correct.
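For context, ONNX Gemm computes Y = alpha*op(A)*op(B) + beta*C on 2D inputs, where op() optionally transposes. A minimal sketch of the 2D shape check, with invented names (this is not the actual MIGraphX parser code):

```python
def gemm_output_shape(a_shape, b_shape, trans_a=False, trans_b=False):
    """Output shape for a 2D ONNX-style Gemm: Y = alpha*op(A)*op(B) + beta*C."""
    if len(a_shape) != 2 or len(b_shape) != 2:
        raise ValueError("Gemm parsing is limited to 2D A and B inputs")
    m, k = (a_shape[1], a_shape[0]) if trans_a else a_shape
    k2, n = (b_shape[1], b_shape[0]) if trans_b else b_shape
    if k != k2:
        raise ValueError("inner dimensions must match")
    return (m, n)

print(gemm_output_shape((3, 4), (4, 5)))                # (3, 5)
print(gemm_output_shape((4, 3), (4, 5), trans_a=True))  # (3, 5)
```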
    • Dynamic ref reshape (one non-fixed case) (#1500) · 3f49f8eb
      Charlie Lin authored
      Extends reshape to handle the case of a single non-fixed dynamic_dimension
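A minimal sketch of the single-non-fixed case, assuming a reshape target where exactly one output dimension is left unspecified (-1); the names are invented and this is not the actual MIGraphX implementation:

```python
def infer_reshape_dim(input_elems, out_dims):
    """Resolve a single non-fixed (-1) dimension in a reshape target,
    analogous to handling one non-fixed dynamic_dimension."""
    fixed = 1
    non_fixed = None
    for i, d in enumerate(out_dims):
        if d == -1:
            if non_fixed is not None:
                raise ValueError("only one non-fixed dimension is supported")
            non_fixed = i
        else:
            fixed *= d
    dims = list(out_dims)
    if non_fixed is not None:
        if input_elems % fixed != 0:
            raise ValueError("element count does not divide evenly")
        dims[non_fixed] = input_elems // fixed
    return dims

print(infer_reshape_dim(24, [2, -1, 3]))  # [2, 4, 3]
```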
    • Dynamic ref pad (#1487) · 8202e411
      Charlie Lin authored
      Extends the pad operator to handle dynamic input shapes.
      Only handles computing the output shape when adding constant padding to a dynamic shape:
      - adds the padding to the min, max, and opt values (unless opt is 0, in which case it is kept at 0)
      - does not handle reflect padding with dynamic shapes
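The constant-padding rule above can be sketched as follows; dynamic dimensions are modeled as (min, max, opt) tuples, and all names are invented for illustration (not the actual MIGraphX code):

```python
def pad_dynamic_dims(dyn_dims, pads_before, pads_after):
    """Add constant padding to (min, max, opt) dynamic dimensions.
    opt stays 0 when it is 0 (i.e. no optimum hint)."""
    out = []
    for (lo, hi, opt), pb, pa in zip(dyn_dims, pads_before, pads_after):
        total = pb + pa
        new_opt = opt + total if opt != 0 else 0
        out.append((lo + total, hi + total, new_opt))
    return out

print(pad_dynamic_dims([(1, 4, 2), (3, 3, 0)], [1, 0], [1, 2]))
# [(3, 6, 4), (5, 5, 0)]
```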
  17. 13 Jan, 2023 2 commits
  18. 11 Jan, 2023 3 commits
  19. 09 Jan, 2023 1 commit
  20. 04 Jan, 2023 1 commit
  21. 13 Dec, 2022 2 commits
  22. 08 Dec, 2022 4 commits
    • Dynamic ref dot operator (#1457) · d411aa69
      Charlie Lin authored
      Extends the dot MIGraphX operator to handle dynamic input shapes.
      Only allows dot between two dynamic shapes whose outer dimensions match exactly; the inner dimensions must also match correspondingly.
      Updates dot-related tests.
      Changes check_shapes to use shape.ndim().
      The ONNX parsers for Gemm and MatMul will be updated in a separate PR.
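The matching rule above can be sketched as a shape check; dynamic dimensions are modeled as (min, max, opt) tuples, and the function name is invented for illustration (this is not the actual check_shapes code):

```python
def check_dot_dyn_shapes(a_dims, b_dims):
    """Check two dynamic shapes for dot: the outer (batch) dims must match
    exactly, and the inner contraction dims must correspond."""
    if len(a_dims) != len(b_dims):
        raise ValueError("rank mismatch")
    if a_dims[:-2] != b_dims[:-2]:
        raise ValueError("outer dimensions must match exactly")
    if a_dims[-1] != b_dims[-2]:
        raise ValueError("inner dimensions must match")
    # Output keeps the batch dims, the row dim of A, and the column dim of B.
    return a_dims[:-2] + (a_dims[-2], b_dims[-1])

a = ((2, 4, 0), (3, 3, 0), (5, 5, 0))
b = ((2, 4, 0), (5, 5, 0), (7, 7, 0))
print(check_dot_dyn_shapes(a, b))  # ((2, 4, 0), (3, 3, 0), (7, 7, 0))
```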
    • Dynamic reference Softmax (#1475) · 8e7d2efe
      Charlie Lin authored
      No major changes required; use dyn_output and pass the dynamic shape when calling compute_shape().
      Adds dynamic shape tests
    • Dynamic ref flatten (#1482) · 4c32afcc
      Charlie Lin authored
      Changes flatten's compute_shape() to handle dynamic shapes.
      Calculates the flattened shape from the min, max, and opt values.
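A minimal sketch of flattening (min, max, opt) dimensions around an axis, assuming each component is multiplied independently; the names are invented and this is not the actual MIGraphX code:

```python
def flatten_dyn_shape(dyn_dims, axis):
    """Flatten dynamic (min, max, opt) dims into two dims around `axis`,
    combining min, max, and opt components independently."""
    def collapse(dims):
        lo = hi = opt = 1
        for l, h, o in dims:
            lo, hi, opt = lo * l, hi * h, opt * o
        return (lo, hi, opt)
    return [collapse(dyn_dims[:axis]), collapse(dyn_dims[axis:])]

print(flatten_dyn_shape([(1, 2, 1), (3, 3, 3), (4, 4, 4)], 1))
# [(1, 2, 1), (12, 12, 12)]
```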
    • fix issues with compiling lstm ops in fp16 mode (#1450) · 352c2465
      shivadbhavsar authored
      Currently, quantizing a program with rnn layers to fp16 results in segmentation faults due to a "convert" operation being applied to an "undefined" instruction.
      
      The following changes are implemented to fix this issue:
      
      Added an is_undefined method to the instruction class that returns true if all inputs to the instruction come from an undefined op.
      Updated the rewrite_rnn pass to use the new is_undefined method rather than checking ins->name().
      Updated the dead_code_elimination pass to also use this method rather than only checking the instruction name.
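The is_undefined idea can be sketched as follows; the Instruction class here is a minimal invented stand-in, not MIGraphX's actual instruction class, and the one-level input check is a simplification:

```python
class Instruction:
    """Minimal stand-in for an instruction with a name and input list."""
    def __init__(self, name, inputs=()):
        self.name = name
        self.inputs = list(inputs)

    def is_undefined(self):
        # True when the instruction has inputs and every input comes from
        # an "undefined" op, per the commit description above.
        return bool(self.inputs) and all(i.name == "undefined" for i in self.inputs)

undef = Instruction("undefined")
conv = Instruction("convert", [undef])   # the problematic fp16 case
add = Instruction("add", [Instruction("literal")])
print(conv.is_undefined())  # True
print(add.is_undefined())   # False
```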