1. 01 Dec, 2023 1 commit
  2. 22 Nov, 2023 1 commit
  3. 21 Nov, 2023 1 commit
  4. 14 Oct, 2023 1 commit
  5. 11 Oct, 2023 2 commits
  6. 10 Sep, 2023 1 commit
  7. 08 Aug, 2023 1 commit
  8. 07 Aug, 2023 1 commit
  9. 13 Jul, 2023 1 commit
• Update deconvolution -> convolution_backwards and Dynamic Shape Support (#1801) · 4edf1195
      Charlie Lin authored
      Renames deconvolution -> convolution_backwards to be more consistent with the literature
      Note: this is not the cross-correlation operator (which is the adjoint of convolution). This is technically a standard convolution operator combined with an upsampling operator rather than a downsampling operator.
      Adds unit tests for the padding, strides, dilations, and other op attributes.
Throws on the auto_pad attribute since it has not been implemented; previously the attribute was read and set, but nothing was done with it.
Extended for dynamic shapes.
Does not support asymmetric padding (padding_L != padding_R) or output_shape with dynamic shapes.
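For orientation, a minimal sketch of building the renamed operator through MIGraphX's internal C++ API is given below; the attribute names (padding, stride, dilation) are assumed to mirror the regular convolution operator, and the parameter shapes are invented for the example.

```cpp
// Sketch only: constructing the renamed operator through the internal C++ API.
// Attribute names are assumed to mirror "convolution"; shapes are illustrative.
#include <migraphx/program.hpp>
#include <migraphx/shape.hpp>
#include <migraphx/make_op.hpp>

migraphx::program make_conv_backwards_example()
{
    migraphx::program p;
    auto* mm = p.get_main_module();
    auto x = mm->add_parameter("x", migraphx::shape{migraphx::shape::float_type, {1, 1, 3, 3}});
    auto w = mm->add_parameter("w", migraphx::shape{migraphx::shape::float_type, {1, 1, 3, 3}});
    // Previously spelled "deconvolution"; auto_pad is now rejected instead of silently ignored.
    mm->add_instruction(
        migraphx::make_op("convolution_backwards",
                          {{"padding", {0, 0}}, {"stride", {1, 1}}, {"dilation", {1, 1}}}),
        x, w);
    return p;
}
```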
      4edf1195
  10. 08 Jul, 2023 1 commit
• export API symbols from dynamic libraries (#1892) · c04fbc92
      Artur Wojcik authored
Export API symbols for migraphx, migraphx_ref, migraphx_cpu, migraphx_gpu, migraphx_device, migraphx_tf, and migraphx_onnx. There is a separate PR for migraphx_c.
      
      API symbol exporting affects only Windows. It is transparent on Linux.
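This is essentially the standard export-macro pattern sketched below; the macro and guard names here (MIGRAPHX_EXPORT, MIGRAPHX_BUILDING_DLL) are assumptions for illustration, not necessarily the identifiers used in the PR.

```cpp
// Illustrative export-macro pattern; real macro/guard names in MIGraphX may differ.
#if defined(_WIN32)
  #if defined(MIGRAPHX_BUILDING_DLL)
    #define MIGRAPHX_EXPORT __declspec(dllexport) // building the DLL: export the symbol
  #else
    #define MIGRAPHX_EXPORT __declspec(dllimport) // consuming the DLL: import the symbol
  #endif
#else
  #define MIGRAPHX_EXPORT // transparent on Linux, as noted above
#endif

// Public entry points are then annotated, e.g.:
// MIGRAPHX_EXPORT migraphx::program parse_onnx(const std::string& name);
```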
      c04fbc92
  11. 29 Jun, 2023 1 commit
  12. 05 Apr, 2023 1 commit
  13. 01 Apr, 2023 1 commit
  14. 18 Mar, 2023 1 commit
  15. 02 Nov, 2022 1 commit
  16. 27 Oct, 2022 1 commit
  17. 19 Oct, 2022 1 commit
• Refactor dynamic compute; Dynamic ref unary functions (#1407) · 693cb5d8
      Charlie Lin authored
      Refactor dynamic compute
      - add a compute_output_shape object that implicitly converts to a new dyn_output or shape object
- dyn_output object can handle computing the static output shape of an operator given the input argument shapes
- change an operator's compute function to argument compute(const dyn_output& dyn_out, std::vector<argument> args) so that it uses the dyn_output object (see the sketch after this list)
      
      Dynamic ref unary functions
      -  Included these changes to have an example of the refactored dynamic compute being used
      -  Changes to unary base class to handle dynamic shapes
      -  Changed elu and leaky_relu to use unary base class and pointwise JIT
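
A rough sketch of the refactored compute interface described above; the computed_shape member, the header names, and the unary body are illustrative assumptions, not the exact MIGraphX code.

```cpp
// Sketch: dyn_out carries the static output shape resolved from the actual input
// shapes, so the operator no longer derives it by hand inside compute().
#include <migraphx/argument.hpp>
#include <migraphx/dyn_output.hpp>
#include <algorithm>
#include <vector>

struct example_unary_op
{
    migraphx::argument compute(const migraphx::dyn_output& dyn_out,
                               std::vector<migraphx::argument> args) const
    {
        migraphx::argument result{dyn_out.computed_shape};
        migraphx::visit_all(result, args[0])([](auto output, auto input) {
            std::transform(input.begin(), input.end(), output.begin(),
                           [](auto x) { return x > 0 ? x : decltype(x){0}; }); // relu-like body
        });
        return result;
    }
};
```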
      693cb5d8
  18. 13 Oct, 2022 1 commit
  19. 27 Sep, 2022 1 commit
  20. 06 Sep, 2022 1 commit
  21. 06 Jul, 2022 1 commit
• Verify load and save (#1265) · f2531606
      Paul Fultz II authored
* In the verification tests, check that saving and reloading the program yields the same program. This also fixes serialization to always load instructions in the same order. There are also fixes for deconv and quant_conv, which didn't save the solution id and were broken for serialization.
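A minimal sketch of such a round-trip check, assuming the in-memory load/save helpers and comparing the printed programs rather than whatever the verification tests actually assert:

```cpp
// Sketch: serialize a program to a buffer, reload it, and compare the printed form.
#include <migraphx/load_save.hpp>
#include <migraphx/program.hpp>
#include <sstream>

bool roundtrip_matches(const migraphx::program& p)
{
    auto buffer   = migraphx::save_buffer(p);      // serialize in memory
    auto reloaded = migraphx::load_buffer(buffer); // deserialize
    std::stringstream before;
    std::stringstream after;
    before << p;
    after << reloaded;
    return before.str() == after.str(); // instructions should come back in the same order
}
```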
      f2531606
  22. 22 Jun, 2022 1 commit
  23. 17 Jun, 2022 1 commit
• Create allocate op and replace_allocate pass (#1183) · add6fb3b
      kahmed10 authored
      
      
      * add allocate op header
      
      * formatting
      
      * add replace_allocate pass
      
      * formatting
      
      * move output param to remove_allocate pass
      
      * formatting
      
      * fix bugs in replace_allocate pass
      
      * formatting
      
      * fix verify if tests
      
      * formatting
      
      * move if op logic
      
      * formatting
      
      * cleanup lowering
      
      * cleanup lowering
      
      * formatting
      
      * fix tidy
      
      * formatting
      
      * fix tidy
      
      * add cpu allocate check
      
      * formatting
      
      * change cpu allocate in pass
      
      * formatting
      
      * add some tests for replace_allocate pass
      
      * formatting
      
      * pass by ref
      
      * fix run_pass
      
      * formatting
      
      * update variable name for module
      
      * update dce to use contains() and fix tidy
      
      * formatting
      
      * update cppcheck
      
      * add if test
      
      * formatting
      
      * add if test
      
      * rename var to mod_output_names
      
      * formatting
      
      * remove conditional
      
      * update allocate op and tests
      
      * formatting
      
      * update replace_allocate tests
      
      * update create_output_names() and conditional in replace_allocate
      
      * formatting
      
      * remove extra variable in replace_allocate
      
      * update tools script for allocation_model
Co-authored-by: Umang Yadav <29876643+umangyadav@users.noreply.github.com>
Co-authored-by: Chris Austen <causten@users.noreply.github.com>
Co-authored-by: Paul Fultz II <pfultz2@yahoo.com>
      add6fb3b
  24. 26 May, 2022 1 commit
  25. 11 May, 2022 1 commit
  26. 06 May, 2022 1 commit
  27. 19 Apr, 2022 1 commit
• Refactor Pooling and implement ONNX LpPool and GlobalLpPool (#1152) · 764273e4
      Charlie Lin authored
      Refactored the reference implementation of pooling to something like what was done for roialign. Moved the reference implementation of pooling from targets/ref/lowering.cpp to pooling.hpp.
      Removed cpu_pooling, instead using reference pooling in pooling.hpp
      Added reference implementation of Lp Norm pooling and the global version
      Added tests for the Lp Norm Pooling
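For reference, Lp-norm pooling reduces each window to y = (sum_i |x_i|^p)^(1/p); the snippet below is an illustrative reduction for a single window, not the pooling.hpp code.

```cpp
// Illustrative Lp-norm reduction over one pooling window: y = (sum_i |x_i|^p)^(1/p).
#include <cmath>
#include <numeric>
#include <vector>

double lpnorm_pool_window(const std::vector<double>& window, double p)
{
    double sum = std::accumulate(window.begin(), window.end(), 0.0,
                                 [p](double acc, double x) { return acc + std::pow(std::fabs(x), p); });
    return std::pow(sum, 1.0 / p);
}
```

GlobalLpPool is the same reduction applied over the entire spatial extent of each channel.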
      764273e4
  28. 04 Mar, 2022 1 commit
• Mode as enum for pooling and roi_align (#1091) · a2e90b5d
      bpickrel authored
Changed the mode values of the pooling and roi_align structures from strings to specialized enum classes. Many test and operator-parsing changes to support this. Introduces one new source file, op_enums.cpp.
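A sketch of what the enum-class change implies; the enumerator names below are assumptions for illustration rather than the exact contents of op_enums.cpp.

```cpp
// Sketch: modes become typed enum classes instead of free-form strings, so an
// invalid mode fails at parse/compile time rather than being a silently unknown string.
enum class pooling_mode { average, max };
enum class roialign_mode { avg, max };
```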
      a2e90b5d
  29. 02 Mar, 2022 1 commit
  30. 05 Nov, 2021 1 commit
  31. 19 Oct, 2021 1 commit
  32. 08 Oct, 2021 1 commit
• Remove alpha and beta from `dot` and `quant_dot` (#961) · 21193e87
      Umang Yadav authored
Previously the dot operator was defined as C = alpha * A . B + beta * C, where * is scalar multiplication and . is the dot product or matrix multiplication, depending on the dimensions of the inputs.

The aim is to define the dot operator as C = A . B, without alpha or beta.

To achieve the same effect as alpha and beta: (1) one of the inputs to the dot operator is multiplied by the alpha value; (2) if beta is present, C is multiplied by beta and then added to the output of step 1.
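A sketch of the equivalent instruction sequence this implies when alpha/beta are present (for example in a Gemm parser); add_common_op is used here to handle the scalar broadcasts, and the helper names and shapes are assumptions of the sketch.

```cpp
// Sketch: C = alpha * A . B + beta * C expressed as explicit mul/dot/add instructions.
#include <migraphx/module.hpp>
#include <migraphx/make_op.hpp>
#include <migraphx/common.hpp>
#include <migraphx/literal.hpp>

migraphx::instruction_ref add_scaled_gemm(migraphx::module& m,
                                           migraphx::instruction_ref a,
                                           migraphx::instruction_ref b,
                                           migraphx::instruction_ref c,
                                           float alpha, float beta)
{
    auto alpha_lit = m.add_literal(migraphx::literal{alpha});
    auto beta_lit  = m.add_literal(migraphx::literal{beta});
    auto scaled_a  = migraphx::add_common_op(m, migraphx::make_op("mul"), {a, alpha_lit});
    auto ab        = m.add_instruction(migraphx::make_op("dot"), scaled_a, b); // plain A . B
    auto scaled_c  = migraphx::add_common_op(m, migraphx::make_op("mul"), {c, beta_lit});
    return m.add_instruction(migraphx::make_op("add"), ab, scaled_c);
}
```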
      21193e87
  33. 17 Sep, 2021 2 commits
• Paul Fultz II
      985f58b0
• Remove alpha and beta attributes from dot operator (#945) · 9e43cb8b
      Umang Yadav authored
This PR aims to remove the alpha and beta attributes from the dot operator completely.

Previously the dot operator was defined as C = alpha * A . B + beta * C, where * is scalar multiplication and . is the dot product or matrix multiplication, depending on the dimensions of the inputs.

The aim is to define the dot operator as C = A . B, without alpha or beta.

To achieve the same effect as alpha and beta: (1) one of the inputs to the dot operator is multiplied by the alpha value; (2) if beta is present, C is multiplied by beta and then added to the output of step 1.
      9e43cb8b
  34. 31 Aug, 2021 1 commit
• Changes to support both OneDNN and ZenDNN builds (#929) · 0859fe90
      kahmed10 authored
      
      
      * Add preallocate method
      
      * Add preallocate_param pass
      
      * Preallocate buffers on the cpu
      
      * Formatting
      
      * Preallocate on the gpu
      
      * Add missing cpp file
      
      * Formatting
      
      * Add lifetime function
      
      * Formatting
      
      * Improve handling of exceptions in test driver
      
      * Formatting
      
      * Auto print exception
      
      * Formatting
      
      * Fork each test case
      
      * Formatting
      
      * Exclude gcc 5 debug build
      
      * Fix tidy issues
      
      * Add color
      
      * Formatting
      
      * Create driver class
      
      * Formatting
      
      * Customize test_case names
      
      * Formatting
      
      * Report status from forked processes
      
      * Formatting
      
      * Update the verify driver
      
      * Formatting
      
      * Print out failed tests
      
      * Formatting
      
      * Fix tidy issues
      
      * Formatting
      
      * Expect passing
      
      * Improve failure reporting on non-linux systems
      
      * Fix ifdef
      
      * Always allocate
      
      * Fix tidy warning
      
* Flush code cov
      
      * Formatting
      
      * Fix tidy
      
      * Add const
      
      * Check if weak symbols is linked
      
      * Formatting
      
      * initial progress
      
      * formatting
      
      * Add continue flag
      
      * Formatting
      
      * Set exe name
      
      * Use stringstream and use quotes
      
      * rename vars
      
      * formatting
      
      * more testing
      
      * formatting
      
      * Fix bug when using --continue in the tests
      
      * Formatting
      
      * revert gemm
      
      * revert dot file
      
      * rename var
      
      * update cmakelists and deconv compute
Co-authored-by: Paul <pfultz2@yahoo.com>
Co-authored-by: mvermeulen <5479696+mvermeulen@users.noreply.github.com>
      0859fe90
  35. 18 Aug, 2021 1 commit
• Optimize Q/DQ Format Pass (#889) · 0b5f33b6
      turneram authored
      * Add operators, refactor parsers, add rewrite passes, add tests
      
      * Add ref implementations
      
      * Move broadcasting of scales and zero points to onnx parser
      
      * Allow for x and zero_point to have different types in quantizelinear; fix zero_point default type
      
      * Switch certain variables to int64_t
      
      * Fix overflow in implicit constant conversion
      
      * Remove operators.hpp from includes in tf_test.cpp
      
      * Add conversion for int32 input to quantizelinear and add test case; remove operators.hpp from onnx_test.cpp includes
      
* Switch dequantizelinear math from int32 to float (see the sketch after this list)
      
      * Remove changes to operators.hpp
      
      * Simplify apply_quantizelinear
      
      * Add verify test for int32 data
      
      * Add rewrite_quantization back to CMakeLists
      
      * Add passes to insert qdq after add_bias is applied, replace quant_ops, and remove remaining qdq pairs
      
      * Renaming, refactoring, cleaning up code, adding formal test, and adding passes to targets
      
      * Renaming, review comments, begin adding more specific tests
      
      * Add more specific unit tests
      
      * Fix failing test on CI
      
      * Correct matcher and update qop rewriting, update tests and add more tests
      
      * Update matcher, clean up simplify_qdq, tweak tests
      
      * Add tests, remove pass from CPU target, update dot parameters, clean up simplify_qdq
      
      * Fix correctness bug in ref q/dq implementations; edit gemm parser to make beta always 0.0
      
      * Remove unused variables in onnx gemm tests
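
For reference, the per-element math of the QuantizeLinear/DequantizeLinear pair that the pass inserts and cancels looks roughly like this (int8 saturation bounds assumed for the example; this is not the MIGraphX ref implementation itself):

```cpp
// Illustrative per-element Q/DQ math.
#include <algorithm>
#include <cmath>
#include <cstdint>

std::int8_t quantize_linear(float x, float scale, std::int8_t zero_point)
{
    // round(x / scale) + zero_point, saturated to the int8 range
    int q = static_cast<int>(std::nearbyint(x / scale)) + zero_point;
    return static_cast<std::int8_t>(std::min(127, std::max(-128, q)));
}

float dequantize_linear(std::int8_t y, float scale, std::int8_t zero_point)
{
    // "switch dequantizelinear math from int32 to float" above refers to doing this in float
    return static_cast<float>(y - zero_point) * scale;
}
```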
      0b5f33b6
  36. 15 Jul, 2021 1 commit
• Quantize linear ops (#843) · 3282e01a
      turneram authored
      * Add operators, refactor parsers, add rewrite passes, add tests
      
      * Formatting
      
      * Fix cppcheck
      
      * Review comments
      
      * Formatting
      
      * Combine rewrite passes
      
      * Formatting
      
      * Add ref implementations
      
      * Formatting
      
      * Review comments
      
      * Formatting
      
      * Tidy warnings
      
      * Apply review comments
      
      * Formatting
      
      * Fix CI error
      
      * Formatting
      
      * Increase code coverage
      
      * Formatting
      
      * Move broadcasting of scales and zero points to onnx parser
      
      * Formatting
      
      * Allow for x and zero_point to have different types in quantizelinear; fix zero_point default type
      
      * Formatting
      
      * Increase code coverage
      
      * Formatting
      
      * Switch certain variables to int64_t
      
      * Formatting
      
      * Fix overflow in implicit constant conversion
      
      * Formatting
      
      * Increase code coverage
      
      * Formatting
      
      * Remove operators.hpp from includes in tf_test.cpp
      
      * Formatting
      
      * Add conversion for int32 input to quantizelinear and add test case; remove operators.hpp from onnx_test.cpp includes
      
      * Formatting
      
      * Switch dequantizelinear math from int32 to float
      
      * Formatting
      
      * Remove changes to operators.hpp
      
      * Simplify apply_quantizelinear
      
      * Formatting
      
      * Add verify test for int32 data
      
      * Add rewrite_quantization back to CMakeLists
      3282e01a
  37. 08 Jul, 2021 1 commit
  38. 09 Jun, 2021 1 commit
• Asym pad refactor (#791) · 9a5e0c06
      kahmed10 authored
      
      
      * alternative impl
      
      * formatting
      
      * add gpu pass to insert pad
      
      * formatting
      
      * update onnx test, still need cleanup
      
      * formatting
      
      * update tf_test
      
      * modify existing tests
      
      * formatting
      
      * remove print
      
      * code cleanup
      
      * formatting
      
      * code cleanup
      
      * formatting
      
      * fix tidy and cppcheck
      
      * remove variable
      
      * add test
      
      * formatting
      
      * add test and address comments
      
      * formatting
Co-authored-by: Shucai Xiao <shucai@gmail.com>
Co-authored-by: mvermeulen <5479696+mvermeulen@users.noreply.github.com>
      9a5e0c06