1. 28 Nov, 2022 1 commit
  2. 17 Nov, 2022 1 commit
    • Dynamic ref contiguous (#1445) · 95d82a51
      Charlie Lin authored
      Extends the ref contiguous operator to handle dynamic shapes
      Updates the eliminate_contiguous pass to use the dyn_output struct
  3. 13 Nov, 2022 1 commit
    • Dyn ref multibroadcast; dyn binary (#1423) · d73c6d7c
      Charlie Lin authored
      Updated the Multibroadcast op to have a two-input version for dynamic shapes
      Current dynamic shape broadcasting logic:
      dynamic_dimensions must be the same, or one of them must be {1, 1, 0} or {1, 1, 1}
      Works for dyn-dyn, dyn-static, and static-static shape combinations
      Changed common.cpp multibroadcasting for binary ops with dynamic shapes
      Extended binary.hpp tests for dynamic shapes to exercise the new common.cpp code
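      A minimal sketch of the broadcasting rule described above, assuming a simplified dyn_dim struct holding {min, max, opt} values (the actual MIGraphX dynamic_dimension type may differ): two dynamic dimensions broadcast when they are equal or when one of them is a fixed size of 1.

        #include <cstddef>
        #include <stdexcept>

        // Hypothetical stand-in for a dynamic dimension with {min, max, opt} values.
        struct dyn_dim
        {
            std::size_t min, max, opt;
            bool is_fixed_one() const { return min == 1 && max == 1; }
            bool operator==(const dyn_dim& other) const
            {
                return min == other.min && max == other.max && opt == other.opt;
            }
        };

        // Two dynamic dimensions broadcast when they are equal, or when one of them is
        // a fixed size-1 dimension ({1, 1, 0} or {1, 1, 1}) that expands to the other.
        inline dyn_dim broadcast_dyn_dim(const dyn_dim& a, const dyn_dim& b)
        {
            if(a == b)
                return a;
            if(a.is_fixed_one())
                return b;
            if(b.is_fixed_one())
                return a;
            throw std::runtime_error("dynamic dimensions are not broadcastable");
        }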
  4. 27 Oct, 2022 2 commits
  5. 19 Oct, 2022 1 commit
    • Refactor dynamic compute; Dynamic ref unary functions (#1407) · 693cb5d8
      Charlie Lin authored
      Refactor dynamic compute
      - add a compute_output_shape object that implicitly converts to a new dyn_output or shape object
      - dyn_output object can handle computing the static output shape of an operator given the input arguments' shapes
      - change an operator's compute function to the signature argument compute(const dyn_output& dyn_out, std::vector<argument> args)
        so that it uses the dyn_output object (sketched below)
      
      Dynamic ref unary functions
      -  Included these changes to have an example of the refactored dynamic compute being used
      -  Changes to unary base class to handle dynamic shapes
      -  Changed elu and leaky_relu to use unary base class and pointwise JIT
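      A hedged illustration of the refactored compute signature mentioned above; shape_t, argument, dyn_output, and computed_shape below are placeholder names for this sketch, not the actual MIGraphX types.

        #include <cstddef>
        #include <vector>

        // Placeholder types for this sketch only, not the real MIGraphX classes.
        struct shape_t
        {
            std::vector<std::size_t> lens;
        };

        struct argument
        {
            shape_t s;
            std::vector<float> data;
        };

        // Carries the static output shape computed from the actual input arguments.
        struct dyn_output
        {
            shape_t computed_shape;
        };

        struct unary_like_op
        {
            // Refactored signature: the framework hands the operator a dyn_output so the
            // operator does not have to recompute its static output shape from the inputs.
            argument compute(const dyn_output& dyn_out, std::vector<argument> args) const
            {
                argument result{dyn_out.computed_shape, args.front().data};
                // ... apply the elementwise function in place over result.data ...
                return result;
            }
        };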
  6. 13 Oct, 2022 2 commits
  7. 26 Sep, 2022 1 commit
  8. 06 Sep, 2022 1 commit
  9. 23 Aug, 2022 1 commit
    • Dynamic ref NMS (#1288) · fa3c21fa
      Charlie Lin authored
      Has the NMS op output a dynamic shape (ONNX spec behavior)
      Allows for a dynamic input shape to the NMS op
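      For context, the ONNX NonMaxSuppression output selected_indices has shape [num_selected_indices, 3], and the first dimension is only known at run time. The sketch below, using an illustrative dyn_dim struct, shows the kind of dynamic output shape this implies, under the assumption that the upper bound is max_output_boxes_per_class * num_batches * num_classes.

        #include <cstddef>
        #include <vector>

        // Illustrative dynamic dimension with {min, max, opt} values.
        struct dyn_dim
        {
            std::size_t min, max, opt;
        };

        // selected_indices has shape [num_selected_indices, 3]; only an upper bound on
        // the first dimension is known before evaluation, so it is a dynamic dimension.
        std::vector<dyn_dim> nms_output_dims(std::size_t max_output_boxes_per_class,
                                             std::size_t num_batches,
                                             std::size_t num_classes)
        {
            std::size_t max_selected = max_output_boxes_per_class * num_batches * num_classes;
            return {{0, max_selected, 0}, {3, 3, 3}};
        }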
  10. 04 Aug, 2022 1 commit
    • Dynamic ref convolution op (#1224) · 67f77ac1
      Charlie Lin authored
      
      
      * Dynamic shape handling in shape object
      
      * rewrite empty lens multibroadcast test
      
      * Shape class changes to handle dynamic
      * More throw errors for functions that don't make sense for dynamic shape
      * Print output changes
      * Serialization changes
      
      * Fixing serialization errors
      
      * Remove const on dyn_dim copy getters
      
      * Dynamic shape tests
      
      * Fix serialize errors
      
      * Add dyn_data struct to avoid ambiguous constructor
      
      * Tidy fix: emplace_back() over for loop
      
      * Tidy fix: use move
      
      * Use std::initializer_list in constructor
      Reverts the dyn_data struct change
      Should get around the ambiguous braced initialization list error
      
      * avoid typedef
      
      * element_space, min,max,opt _lens change
      
      * formatting
      
      * Comments fix
      
      * dynamic bytes() test
      
      * Serialize and reflect changes
      
      * formatting
      
      * Test the dynamic lens functions
      
      * progress
      
      * Formatting
      
      * Dynamic conv draft progress
      
      * Add operator<< tests for coverage
      
      * Coverage update
      
      * Add to conv dynamic batch test
      
      * Dynamic image size test
      
      * Dynamic weight handling
      
      * Dyn image shape test change, fix dyn weight cond
      
      * Comment update
      
      * Dynamic weights shape test and fix
      
      * Use ternary operator
      
      * Tidy fixes
      
      * Handle dynamic graph input shapes in ONNX parser
      
      * Formatting
      
      * Handle dynamic shape for convolution (see the output-length sketch after this commit list)
      
      * formatting
      
      * cppcheck fixes
      
      * Add onnx test files
      
      * Fix typo
      
      * Disable auto_pad for dynamic input shape
      
      * check_shapes object checks for allowing dynamic shapes
      
      * Fix any_of
      
      * Change to maintain const objectness
      
      * Formatting
      
      * Check shapes allow dynamic
      
      * Refactor compute_shape() call into op.compute()
      Allows for per operator differences with handling dynamic shape
      Fix operation.hpp change to use the generator
      
      * Comment fix
      
      * Refactor normalize_attributes() calls to use max_lens()
      
      * Comment addition
      
      * Update other normalize_attributes() calls
      
      * Change to using constructor and add tests
      
      * Use const member function
      
      * Add more dynamic shape support
      
      * Add tests for error code coverage
      
      * Fix opt shape bug and add shape tests
      
      * capture all by ref
      
      * Fix typo with img shape calculation
      
      * Add more tests
      
      * dynamic auto pad attempt
      Linker error with pad_calc.cpp
      
      * Fix parse dyn auto_pad
      Should only need to use dynamic auto pad when the image shape or kernel
      shape are dynamic. For a dynamic batch size, the auto pad calculation is
      the same.
      
      * Fix linking error
      
      * Fix auto_pad bug
      Fixed input tensor with auto_pad setting on
      
      * auto_pad onnx tests
      
      * Fix auto_pad calculation, evaluate in ref_conv
      add ref_ops tests
      
      * Add shape tests, fix bugs
      
      * Refactor first two output dynamic len calculation
      
      * Conv MLIR test update
      
      * i64 MLIR test fix
      
      * Fix MLIR test typo
      Co-authored-by: Chris Austen <causten@users.noreply.github.com>
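      A sketch of the per-dimension output-length calculation referenced above, assuming a dynamic dimension carries {min, max, opt} values; the names here are illustrative, not the MIGraphX implementation. The usual static convolution formula is simply applied to each of the three values.

        #include <cstddef>

        // Illustrative dynamic dimension with {min, max, opt} values.
        struct dyn_dim
        {
            std::size_t min, max, opt;
        };

        // Standard 1-D convolution output length for a static input length.
        std::size_t conv_out_len(std::size_t in, std::size_t kernel, std::size_t pad,
                                 std::size_t stride, std::size_t dilation)
        {
            return (in + 2 * pad - dilation * (kernel - 1) - 1) / stride + 1;
        }

        // For a dynamic spatial dimension, apply the same formula to min, max, and opt
        // (leaving an unset optimum of 0 as 0).
        dyn_dim conv_out_dyn(const dyn_dim& in, std::size_t kernel, std::size_t pad,
                             std::size_t stride, std::size_t dilation)
        {
            return {conv_out_len(in.min, kernel, pad, stride, dilation),
                    conv_out_len(in.max, kernel, pad, stride, dilation),
                    in.opt == 0 ? 0 : conv_out_len(in.opt, kernel, pad, stride, dilation)};
        }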
  11. 25 Jul, 2022 1 commit
    • Add onnx mod operator (#1302) · 77e80b8e
      Ted Themistokleous authored
      * Add in changes for onnx Mod operator
      
      Initial implementation of the Mod operator and test cases for integer and floating-point types.
      
      Need to use fmod from the standard library for floating-point types. Looking at the half.hpp implementation, half_float::half thankfully is specced to use the existing std::fmod() call.
      
      fmod_flag mirrors the onnx fmod attribute. Right now, using a floating-point type without setting it to true on the user side results in an exception.
      
      Ref ticket #1283 
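      A small sketch of the dispatch this commit describes, using a standalone helper rather than the actual MIGraphX operator: integer types use the % operator while floating-point types require std::fmod, selected by a flag mirroring the ONNX fmod attribute.

        #include <cmath>
        #include <stdexcept>
        #include <type_traits>

        // Elementwise Mod for this sketch: floating-point types must use std::fmod and
        // therefore require the fmod flag; integer types use % unless fmod is requested.
        // Note: ONNX integer Mod with fmod=0 matches the sign of the divisor (Python
        // semantics), which a plain % does not reproduce; that detail is omitted here.
        template <class T>
        T mod_element(T a, T b, bool fmod_flag)
        {
            if constexpr(std::is_floating_point<T>{})
            {
                if(!fmod_flag)
                    throw std::runtime_error("floating-point Mod requires fmod=1");
                return std::fmod(a, b);
            }
            else
            {
                if(fmod_flag)
                    return static_cast<T>(std::fmod(static_cast<double>(a), static_cast<double>(b)));
                return a % b;
            }
        }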
  12. 29 Jun, 2022 1 commit
  13. 22 Jun, 2022 1 commit
  14. 29 Apr, 2022 1 commit
  15. 19 Apr, 2022 1 commit
    • Refactor Pooling and implement ONNX LpPool and GlobalLpPool (#1152) · 764273e4
      Charlie Lin authored
      Refactored the reference implementation of pooling to something like what was done for roialign. Moved the reference implementation of pooling from targets/ref/lowering.cpp to pooling.hpp.
      Removed cpu_pooling, instead using reference pooling in pooling.hpp
      Added reference implementation of Lp Norm pooling and the global version
      Added tests for the Lp Norm Pooling
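      For reference, Lp norm pooling reduces each window to (sum_i |x_i|^p)^(1/p); the helper below sketches that reduction for a single window, with p corresponding to the ONNX p attribute (default 2). GlobalLpPool applies the same reduction over the entire spatial extent of each channel.

        #include <cmath>
        #include <vector>

        // Reduce one pooling window with the Lp norm: (sum_i |x_i|^p)^(1/p).
        double lp_pool_window(const std::vector<double>& window, double p = 2.0)
        {
            double acc = 0.0;
            for(double x : window)
                acc += std::pow(std::fabs(x), p);
            return std::pow(acc, 1.0 / p);
        }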
  16. 11 Apr, 2022 1 commit
    • scatter operator refactoring to include reduction (#1124) · 701c2014
      bpickrel authored
      Change the "scatter" struct and op to a base/child set of three: scatter_none, scatter_add, scatter_mul to mirror Onnx' ScatterElements op. and its three reduction options. (Onnx Scatter op is deprecated and is equivalent to scatter_none.)
      
      Provides both a reference op. and update to Onnx parsing. Tests updated and new test case added.
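      A sketch of the three reduction behaviours on a single element, assuming a flat output buffer and a precomputed destination index; the real ScatterElements implementation works over full tensors and an axis.

        #include <cstddef>
        #include <vector>

        enum class scatter_reduction { none, add, mul };

        // Apply one scattered update to data[idx] with the chosen reduction, mirroring
        // ONNX ScatterElements reduction = "none" | "add" | "mul".
        void scatter_update(std::vector<float>& data, std::size_t idx, float update, scatter_reduction r)
        {
            switch(r)
            {
            case scatter_reduction::none: data[idx] = update; break;  // overwrite
            case scatter_reduction::add: data[idx] += update; break;  // accumulate
            case scatter_reduction::mul: data[idx] *= update; break;  // multiply
            }
        }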
  17. 04 Mar, 2022 1 commit
    • Mode as enum for pooling and roi_align (#1091) · a2e90b5d
      bpickrel authored
      Changed the pooling values for two structures from strings to specialized enum classes. Many test and operator parsing changes to support this. Introduces one new source file, op_enums.cpp.
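      A minimal sketch of the kind of change described, assuming illustrative enumerator names rather than the exact definitions in op_enums.cpp: a scoped enum replaces free-form mode strings so invalid values are rejected at compile time.

        // Illustrative only: a scoped enum replacing free-form mode strings.
        enum class pooling_mode { average, max };

        inline const char* to_string(pooling_mode m)
        {
            switch(m)
            {
            case pooling_mode::average: return "average";
            case pooling_mode::max: return "max";
            }
            return "unknown";
        }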
  18. 03 Mar, 2022 1 commit
  19. 02 Mar, 2022 1 commit
  20. 25 Nov, 2021 1 commit
    • Non std shape auto contiguous (#1001) · 2d4dcc47
      Shucai Xiao authored
      Resolves a problem in parsing the ssd-10 model.
      
      The problem is that, after inserting contiguous in the auto_contiguous pass, the standard output shape of some operators becomes non-standard. Then, if the next operator requires a standard input shape, an exception is thrown.
      
      For example, if we pass the following model:
      Input (standard shape) -> transpose (transposed) -> softmax (transposed) -> transpose (standard) -> gather.
      It works fine, and no contiguous is required.
      
      In the auto_contiguous pass, a contiguous is inserted after the first transpose. Then we need to replace the first transpose with the contiguous and recompute all shapes. When it comes to the gather operator, its input is a transposed shape, and an exception is thrown.
      
      The solution is in the recompute_shape() function. If it is called by the auto_contiguous pass, the shape of an instruction has changed, and the new shape is non-standard, we do not recompute the shape of its outputs. The reason is that, since its output shape is non-standard, a contiguous op will be added after the instruction, which will recompute the shape for later operators.
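      To make the standard vs. non-standard distinction concrete, a sketch independent of the MIGraphX shape class: a shape is standard when its strides are the packed row-major strides for its lens, and a transpose, which only permutes lens and strides, breaks that property until a contiguous op repacks the data.

        #include <cstddef>
        #include <vector>

        // Row-major packed ("standard") strides for the given lens.
        std::vector<std::size_t> packed_strides(const std::vector<std::size_t>& lens)
        {
            std::vector<std::size_t> strides(lens.size(), 1);
            for(int i = static_cast<int>(lens.size()) - 2; i >= 0; --i)
                strides[i] = strides[i + 1] * lens[i + 1];
            return strides;
        }

        // A shape is "standard" when its strides are exactly the packed strides.
        bool is_standard(const std::vector<std::size_t>& lens, const std::vector<std::size_t>& strides)
        {
            return strides == packed_strides(lens);
        }

        // Example: lens {2, 3, 4} is standard with strides {12, 4, 1}. After a transpose
        // that reverses the axes, lens become {4, 3, 2} with strides {1, 4, 12}, which is
        // non-standard, so a contiguous op is needed before operators that require a
        // standard input shape.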
  21. 28 Oct, 2021 1 commit
  22. 20 Oct, 2021 1 commit
    • Roialign (#952) · d7653732
      Shucai Xiao authored
      Implementation of the roialign operator. For now, we have only the ref implementation. When we run a model on the GPU, execution falls back to the ref implementation.
  23. 19 Oct, 2021 1 commit
  24. 08 Oct, 2021 1 commit
  25. 01 Oct, 2021 1 commit
    • Add multinomial op (#954) · 0b7672d7
      turneram authored
      
      
      Add multinomial op to onnx parser with ref and GPU implementations.
      
      The onnx parser inserts a literal of shape {batch_size, sample_size} with random values in the range [0, 1) and inserts existing ops to compute the cumulative distribution function (CDF). The multinomial operator multiplies the random values by the sum of the CDF and returns the index of the first element of the CDF that is greater than the result, representing samples randomly drawn from [0, class_size) that follow the log-probability distribution (see the sketch below).
      
      Resolves #821
      Co-authored-by: Shucai Xiao <shucai@gmail.com>
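      A sketch of the sampling step described above for one batch row, assuming an unnormalized CDF: scale a uniform random value by the CDF total and take the index of the first CDF entry greater than it.

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        // Given an unnormalized CDF over class_size classes and a uniform random value
        // u in [0, 1), return the sampled class index.
        std::size_t sample_multinomial(const std::vector<float>& cdf, float u)
        {
            float scaled = u * cdf.back(); // cdf.back() is the total mass
            // Index of the first CDF element strictly greater than the scaled value.
            auto it = std::upper_bound(cdf.begin(), cdf.end(), scaled);
            return static_cast<std::size_t>(it - cdf.begin());
        }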
  26. 07 Sep, 2021 1 commit
    • qdq for quantization and include subgraph (#891) · b45f7239
      Shucai Xiao authored
      
      
      Add operators, refactor parsers, add rewrite passes, add tests
      Add ref implementations
      Move broadcasting of scales and zero points to onnx parser
      Allow for x and zero_point to have different types in quantizelinear; fix zero_point default type
      fp16 and fp8 quantization to include subgraph and parameters
      fix unit test to use qdq operators for int8 quantization
      Co-authored-by: turneram <alturner@amd.com>
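      For reference, the QuantizeLinear / DequantizeLinear math the qdq rewrite described above relies on, sketched here for uint8 (the clamping bounds would differ for int8).

        #include <algorithm>
        #include <cmath>
        #include <cstdint>

        // QuantizeLinear: y = saturate(round(x / scale) + zero_point)
        std::uint8_t quantize_linear(float x, float scale, std::uint8_t zero_point)
        {
            int q = static_cast<int>(std::nearbyint(x / scale)) + zero_point;
            return static_cast<std::uint8_t>(std::min(255, std::max(0, q)));
        }

        // DequantizeLinear: x = (y - zero_point) * scale
        float dequantize_linear(std::uint8_t y, float scale, std::uint8_t zero_point)
        {
            return static_cast<float>(static_cast<int>(y) - static_cast<int>(zero_point)) * scale;
        }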
  27. 02 Sep, 2021 2 commits
  28. 24 Aug, 2021 1 commit
    • Change attributes names to be more consistent and reflect better meaning (#916) · 0d2606bb
      Umang Yadav authored
      * rename broadcast and multibroadcast output_lens attribute to out_lens attribute, and change tests and source code to reflect the same
      
      * change the reshape attribute from dims to out_lens
      
      * change transpose attribute's name from dims to perm to reflect better meaning
      
      * use permutation instead of perm for transpose
      
      clang formatting
      
      * use dims instead of out_lens for reshape
      
      clang formatting
  29. 15 Jul, 2021 1 commit
    • Quantize linear ops (#843) · 3282e01a
      turneram authored
      * Add operators, refactor parsers, add rewrite passes, add tests
      
      * Formatting
      
      * Fix cppcheck
      
      * Review comments
      
      * Formatting
      
      * Combine rewrite passes
      
      * Formatting
      
      * Add ref implementations
      
      * Formatting
      
      * Review comments
      
      * Formatting
      
      * Tidy warnings
      
      * Apply review comments
      
      * Formatting
      
      * Fix CI error
      
      * Formatting
      
      * Increase code coverage
      
      * Formatting
      
      * Move broadcasting of scales and zero points to onnx parser
      
      * Formatting
      
      * Allow for x and zero_point to have different types in quantizelinear; fix zero_point default type
      
      * Formatting
      
      * Increase code coverage
      
      * Formatting
      
      * Switch certain variables to int64_t
      
      * Formatting
      
      * Fix overflow in implicit constant conversion
      
      * Formatting
      
      * Increase code coverage
      
      * Formatting
      
      * Remove operators.hpp from includes in tf_test.cpp
      
      * Formatting
      
      * Add conversion for int32 input to quantizelinear and add test case; remove operators.hpp from onnx_test.cpp includes
      
      * Formatting
      
      * Switch dequantizelinear math from int32 to float
      
      * Formatting
      
      * Remove changes to operators.hpp
      
      * Simplify apply_quantizelinear
      
      * Formatting
      
      * Add verify test for int32 data
      
      * Add rewrite_quantization back to CMakeLists
  30. 28 Jun, 2021 1 commit
    • Use C++ functions instead of hardcoded gold values for comparison with MIGraphX ops test evaluation (#863) · c4411229
      Umang Yadav authored
      Use C++ functions instead of hardcoded gold values for comparison with MIGraphX ops test evaluation (#863)
      
      * use c++ functions instead of hardcoded gold values for comparison with migraphx (see the sketch after this list)
      
      * binary_ops transform
      
      * fix cppcheck
      
      * format fixes
      
      * rebase and fix tidy
      
      tidy fixes
      
      formatting
      
      * float type trigonometric functions
      
      * formatting
      
      * tidy fixes and review comments
      
      clang format
      
      * tidy fix
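      A sketch of the testing idea in this change, with made-up names: derive the expected ("gold") outputs by applying the matching C++ standard-library function to the test inputs instead of hard-coding the numbers.

        #include <algorithm>
        #include <cmath>
        #include <vector>

        // Instead of a hard-coded gold vector, derive the expected results from the same
        // inputs with the matching C++ function (sin is used here as an example).
        std::vector<float> expected_sin(const std::vector<float>& input)
        {
            std::vector<float> gold(input.size());
            std::transform(input.begin(), input.end(), gold.begin(),
                           [](float x) { return std::sin(x); });
            return gold;
        }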
  31. 27 Jun, 2021 1 commit
    • Inline subgraph (#802) · bc52a8a8
      Shucai Xiao authored
      
      
      * Add definitions for all pointwise operators
      
      * Formatting
      
      * Add cpp generator class
      
      * Formatting
      
      * Move compilation to core
      
      * Formatting
      
      * Add clock to tmp name
      
      * Add dynamic loader
      
      * Formatting
      
      * Add tests for code gen
      
      * Formatting
      
      * Add test for literals
      
      * Formatting
      
      * Use with_char
      
      * Add missing header
      
      * Fix mismerge
      
      * Ignore tidy warning
      
      * Fix gcc 5 errors
      
      * Apply fixits
      
      * Skip signed bitwise of status
      
      * Remove unused parameters
      
      * Explicitly add c++14 flag
      
      * Fix tidy warning
      
      * unify the compute function signature
      
      * clang format
      
      * make another change
      
      * unify the compute function
      
      * clang format
      
      * remove unnecessary code
      
      * more refinement of the operator compute function
      
      * clang format
      
      * add an overload function
      
      * clang format
      
      * add support for axes inputs for squeeze/unsqueeze/reduce_sum
      
      * clang format
      
      * fix build problems
      
      * backup code changes
      
      * clang format
      
      * Add tuple type to shape class
      
      * Formatting
      
      * fix a bug in parsing quantizelinear operator
      
      * clang format
      
      * fix a cppcheck error
      
      * disable different versions of unit tests for different onnx version
      
      * clang format
      
      * upgrade onnx to 1.8
      
      * update onnx to 1.8.1
      
      * disable two more real models
      
      * clang format
      
      * Make data member private
      
      * Formatting
      
      * Add sub arguments
      
      * Formatting
      
      * Turn clang format off
      
      * Disable clang-format
      
      * fix review comments
      
      * fix the function of assign axes in parsing the squeeze operator
      
      * add unit tests and fix a bug
      
      * clang format
      
      * fix review comments
      
      * clang format
      
      * fix a build error
      
      * backup code changes
      
      * clang format
      
      * add more unit tests and add parsing opset version
      
      * clang format
      
      * Improve visiting tuples
      
      * Formatting
      
      * fix cppcheck error
      
      * add installation of the onnx package
      
      * resolve no protobuf compiler
      
      * add an inline subgraph pass
      
      * clang format
      
      * Add more argument tests
      
      * Formatting
      
      * Handle tuple in load
      
      * Formatting
      
      * code backup
      
      * clang format
      
      * Remove .o files
      
      * Add tuple type to api
      
      * Formatting
      
      * fix build errors
      
      * clang format
      
      * code backup
      
      * code backup
      
      * add unit tests for the inline subgraph
      
      * clang format
      
      * refine the inline subgraph and parse if operator
      
      * clang format
      
      * fix cppcheck issue
      
      * clang format
      
      * add unit test for inline subgraph pass
      
      * clang format
      
      * fix format issue
      
      * remove the context from the if operator
      
      * clang format
      
      * simplify the compute functions
      
      * Fix tidy warnings
      
      * fix cppcheck error
      
      * clang format
      
      * fix cppcheck error
      
      * Fix tidy warnings
      
      * fix a cppcheck error
      
      * clang format
      
      * Add a test for share method
      
      * Formatting
      
      * Add a test cpp_type
      
      * add unit tests for more code coverage
      
      * clang format
      
      * add unit tests to have more code coverage
      
      * clang format
      
      * try a comment in jenkins build
      
      * include the install onnx line
      
      * code backup
      
      * reorder the dependencies installed
      
      * refine dockerfile
      
      * fix review comments
      
      * clang format
      
      * remove unnecessary overload function
      
      * fix cppcheck error
      
      * change back the argument test
      
      * Suppress tidy warning
      
      * add the operator get_tuple_elem
      
      * clang format
      
      * add get_tuple_elem to operator include file
      
      * change if to support multiple operation outputs
      
      * clang format
      
      * optimize inline subgraph
      
      * clang format
      
      * code backup
      
      * clang format
      
      * fix bug
      
      * refine unit tests for tuple output of the if operator
      
      * clang format
      
      * refine the instruction replacement code
      
      * add a unit test and sort all the unit tests alphabetically
      
      * fix cppcheck error
      
      * add more unit tests for multiple op outputs
      
      * clang format
      
      * fix cppcheck error
      
      * Update pass manager to get modules after every pass
      
      * more unit test to cover more scenarios
      
      * clang format
      
      * fixed a bug in a unit test
      
      * add more tests
      
      * clang format
      
      * add more unit tests to have more code coverage
      
      * fix a bug in a unit test
      
      * Add program overload for module
      
      * Formatting
      
      * Hash modules for quicker lookup of modules
      
      * Bump file version
      
      * Add methods to remove modules
      
      * Formatting
      
      * add the tuple type to the support list
      
      * Eliminate unused modules
      
      * Formatting
      
      * Fix test errors
      
      * Formatting
      
      * Fix tidy issues
      
      * fix problem related to inline subgraph
      
      * clang format
      
      * fix review comments
      
      * fix review comments
      
      * fix review comments
      
      * fix review comments
      
      * clang format
      
      * fix a unit test
      
      * one more code change
      
      * remove an optimization related to the if operator
      
      * clang format
      
      * fix review comments
      Co-authored-by: Paul <pfultz2@yahoo.com>
      Co-authored-by: mvermeulen <5479696+mvermeulen@users.noreply.github.com>
  32. 25 Jun, 2021 3 commits
  33. 08 Jun, 2021 1 commit
    • Reverse Op (#846) · 9c54fc4f
      Cagri Eryilmaz authored
      
      
      * init reverseOp branch: ref op + ref test. WIP
      
      * first passing basic test
      
      * cleanup
      
      * additional axis implementation
      
      * additional test
      
      * ref op implementation vec to int for axis
      
      * ref op test change for axis
      
      * initial gpu files and test
      
      * updates to implementation and test
      
      * fixed some issues
      
      * clang format
      
      * cleanup
      
      * formatting
      
      * removing comments
      
      * remove local size, back to default
      
      * update tests: replace with std functions
      
      * multiple axis for reverse op
      
      * fix a build error
      
      * clang format
      
      * more tests
      
      * fix a bug for the reverse device function
      
      * clang format
      
      * fix a bug
      
      * clang format
      
      * ref test updates, multiaxis
      
      * formatting
      Co-authored-by: Shucai Xiao <Shucai.Xiao@amd.com>
      Co-authored-by: mvermeulen <5479696+mvermeulen@users.noreply.github.com>
  34. 26 May, 2021 1 commit
    • Step op (#839) · 04065c64
      Shucai Xiao authored
      
      
      * add the operator step
      
      * clang format
      
      * add unit tests
      
      * clang format
      
      * add more unit test for step op
      
      * clang format
      
      * add more unit tests
      
      * clang format
      
      * fix review comments
      
      * clang format
      
      * rename two unit tests
      Co-authored-by: Paul Fultz II <pfultz2@yahoo.com>
  35. 26 Apr, 2021 1 commit
    • Prefix scan operator (#797) · e8ae23b1
      turneram authored
      
      
      * Add scan struct; add initial tests; initial algorithm by cases; refactor into one algorithm; clean up code (see the prefix-scan sketch after this list)
      
      * Rename; restructure; begin adding additional attributes
      
      * refactor to use shape_for_each; temporarily drop reverse mode
      
      * Add back reverse mode with shape_for_each_reverse; update tests; add axis bounds check
      
      * Begin adding to onnx parser
      
      * Add to onnx parser
      
      * Fix onnx test
      
      * Fix CI warnings
      
      * Update algorithm to use slice+par_for; update gen_onnx; remove .o files; remove redundant axis normalizing
      
      * Add exclusive mode
      
      * Add reverse mode
      
      * Remove .pyc file
      
      * Fix warning
      
      * Remove shape_for_each_reverse; clean up pointer usage for exclusive cases
      
      * Remove unused variable
      
      * Fix onnx test
      
      * Add test case to op_shape_test
      
      * Formatting
      
      * Formatting
      
      * Fix tidy warning
      
      * Formatting
      
      * Formatting
      
      * Formatting
      
      * Increase code coverage
      
      * Formatting
      
      * refine the script for creating the cumsum onnx file
      
      * Alphabetize includes for operators.hpp
      
      * Revise onnx test
      
      * Remove redundant bounds check
      
      * Formatting and style
      
      * Alphabetize tests
      
      * Remove duplicate tests from merge
      
      * Fix tidy warning for sub_test
      Co-authored-by: Shucai Xiao <Shucai.Xiao@amd.com>
      Co-authored-by: mvermeulen <5479696+mvermeulen@users.noreply.github.com>
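      A sketch of the 1-D behaviour behind the exclusive and reverse modes, written as a plain C++ helper mirroring the ONNX CumSum attributes; the actual operator scans along an axis of a tensor.

        #include <algorithm>
        #include <vector>

        // 1-D prefix sum with optional exclusive and reverse modes (cf. ONNX CumSum).
        std::vector<float> prefix_scan_sum(std::vector<float> x, bool exclusive, bool reverse)
        {
            if(reverse)
                std::reverse(x.begin(), x.end());
            float acc = 0.0f;
            for(auto& v : x)
            {
                float cur = v;
                v = exclusive ? acc : acc + cur; // exclusive scan excludes the current element
                acc += cur;
            }
            if(reverse)
                std::reverse(x.begin(), x.end());
            return x;
        }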