- 07 Dec, 2023 1 commit
Umang Yadav authored
- 22 Nov, 2023 1 commit
Mirza Halilčević authored
Introduce a dilations attribute to the pooling operators' reference implementation.
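A dilation spreads the pooling window over the input: the window covers (kernel - 1) * dilation + 1 elements but only samples every dilation-th one. A minimal 1D NumPy sketch of the idea (parameter names and defaults here are illustrative, not MIGraphX's actual reference code):

```python
import numpy as np

def max_pool_1d(x, kernel=2, stride=1, dilation=2):
    # A dilated window spans (kernel - 1) * dilation + 1 input elements
    # but only reads every `dilation`-th element inside that span.
    span = (kernel - 1) * dilation + 1
    out = []
    for start in range(0, len(x) - span + 1, stride):
        window = x[start:start + span:dilation]
        out.append(window.max())
    return np.array(out)

print(max_pool_1d(np.array([1, 5, 2, 8, 3, 9])))  # [2 8 3 9]
```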
- 15 Nov, 2023 1 commit
shivadbhavsar authored
Reworked the simplify_qdq pass to support:
* per-axis quantization, i.e. allow 1D scales and zero points (sketched below)
* broadcast and transpose ops between dq and quant_op
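For illustration, a small NumPy sketch of per-axis dequantization, where a 1D scale and zero point are broadcast along the quantized axis before the elementwise math (the kind of broadcast the pass now tolerates between dq and the quantized op); the values are made up:

```python
import numpy as np

x_q = np.array([[10, 20], [30, 40]], dtype=np.int8)
scale = np.array([0.1, 0.5], dtype=np.float32)    # one scale per row (axis 0)
zero_point = np.array([0, 5], dtype=np.int8)

# Broadcast the 1D scale/zero_point across the quantized axis, then dequantize.
x_fp = (x_q.astype(np.float32) - zero_point.reshape(-1, 1)) * scale.reshape(-1, 1)
print(x_fp)   # [[ 1.   2. ]
              #  [12.5 17.5]]
```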
- 28 Sep, 2023 1 commit
Umang Yadav authored
By default, MIGraphX verification uses normalized RMS error as the basis for comparison. This change adds logic to let MIGraphX perform "np.allclose"-style elementwise verification using atol and rtol. The commit also changes "verify_range()" calls to consistently pass the "gold" or "expected" results as the second argument. The default RMS tolerance inside the driver is set to 0.001, which is arguably high for FP32 compared to the previous default; better defaults are still needed.
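The difference between the two criteria can be seen in a small NumPy sketch (the exact normalization MIGraphX uses is not spelled out here, so this formula only approximates the idea):

```python
import numpy as np

def rms_error(result, gold):
    # One scalar for the whole tensor, normalized by the magnitude of gold.
    return np.sqrt(np.mean((result - gold) ** 2)) / np.sqrt(np.mean(gold ** 2))

gold = np.array([1.0, 1000.0])
result = np.array([1.5, 1000.0])                        # one element is badly off

print(rms_error(result, gold) < 0.001)                  # True: the RMS check still passes
print(np.allclose(result, gold, rtol=1e-3, atol=1e-5))  # False: the elementwise check catches it
```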
- 16 Jul, 2023 1 commit
Umang Yadav authored
- 07 Apr, 2023 1 commit
Paul Fultz II authored
Converts can be inserted when the scale and input types differ in the ONNX file (we are already doing this implicit conversion in the ref implementation). This also improves the compile time of quantizelinear.hpp, since the nested visit method can be removed.
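As a rough NumPy sketch of the idea (not the actual pass), when the input's float type differs from the scale's, a convert to the scale's type is placed in front of the quantize math instead of making the quantize implementation visit two types at once:

```python
import numpy as np

x = np.array([0.4, 1.6, 100.0], dtype=np.float16)   # input type differs from the scale's
scale = np.float32(0.5)
zero_point = np.int8(0)

# Insert a convert to the scale's type, then run a single-typed quantize.
x_conv = x.astype(np.float32)
y = np.clip(np.rint(x_conv / scale) + zero_point, -128, 127).astype(np.int8)
print(y)   # [  1   3 127]
```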
- 18 Mar, 2023 1 commit
Umang Yadav authored
Fixes #1595
- 27 Oct, 2022 1 commit
Chris Austen authored
Upgraded Dockerfiles and fixed tidy issues to make Ubuntu 20.04 and ROCm 5.3.0 the default
- 22 Jun, 2022 1 commit
Ted Themistokleous authored
Updated each source file in the repo with the existing license.
- 04 Mar, 2022 1 commit
bpickrel authored
Changed the pooling values for two structures from strings to specialized enum classes, with many test and operator parsing changes to support this. Introduces one new source file, op_enums.cpp.
- 08 Oct, 2021 1 commit
Umang Yadav authored
Previously the dot operator was defined as C = alpha * A . B + beta * C, where * is scalar multiplication and . is the dot product or matrix multiplication depending on the dimensions of the inputs. The aim is to define the dot operator simply as C = A . B, without alpha or beta. To achieve the same effect as alpha and beta, (1) one of the inputs to the dot operator is multiplied by the alpha value, and (2) if beta is present, C is multiplied by beta and then added to the output of step 1.
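A quick NumPy check of why the rewrite is equivalent (shapes and values here are arbitrary):

```python
import numpy as np

A = np.random.rand(2, 3).astype(np.float32)
B = np.random.rand(3, 4).astype(np.float32)
C = np.random.rand(2, 4).astype(np.float32)
alpha, beta = 0.5, 2.0

old = alpha * (A @ B) + beta * C     # old dot definition with alpha and beta
new = ((alpha * A) @ B) + beta * C   # plain dot: alpha folded into one input, beta as a separate mul/add
print(np.allclose(old, new))         # True
```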
- 17 Sep, 2021 2 commits
Paul Fultz II authored
This reverts commit 9e43cb8b.
Umang Yadav authored
This PR aims to remove the alpha and beta attributes from the dot operator completely. Previously the dot operator was defined as C = alpha * A . B + beta * C, where * is scalar multiplication and . is the dot product or matrix multiplication depending on the dimensions of the inputs. The aim is to define the dot operator as C = A . B, without alpha or beta. To achieve the same effect as alpha and beta, (1) one of the inputs to the dot operator is multiplied by the alpha value, and (2) if beta is present, C is multiplied by beta and then added to the output of step 1.
- 24 Aug, 2021 1 commit
Umang Yadav authored
* rename the broadcast and multibroadcast output_lens attribute to out_lens, and change tests and source code to reflect the same
* change the reshape attribute from dims to out_lens
* change the transpose attribute's name from dims to perm to reflect its meaning better
* use permutation instead of perm for transpose
* clang formatting
* use dims instead of out_lens for reshape
* clang formatting
- 18 Aug, 2021 1 commit
turneram authored
* Add operators, refactor parsers, add rewrite passes, add tests
* Add ref implementations
* Move broadcasting of scales and zero points to onnx parser
* Allow for x and zero_point to have different types in quantizelinear; fix zero_point default type
* Switch certain variables to int64_t
* Fix overflow in implicit constant conversion
* Remove operators.hpp from includes in tf_test.cpp
* Add conversion for int32 input to quantizelinear and add test case; remove operators.hpp from onnx_test.cpp includes
* Switch dequantizelinear math from int32 to float
* Remove changes to operators.hpp
* Simplify apply_quantizelinear
* Add verify test for int32 data
* Add rewrite_quantization back to CMakeLists
* Add passes to insert qdq after add_bias is applied, replace quant_ops, and remove remaining qdq pairs (see the sketch after this list)
* Renaming, refactoring, cleaning up code, adding formal test, and adding passes to targets
* Renaming, review comments, begin adding more specific tests
* Add more specific unit tests
* Fix failing test on CI
* Correct matcher and update qop rewriting, update tests and add more tests
* Update matcher, clean up simplify_qdq, tweak tests
* Add tests, remove pass from CPU target, update dot parameters, clean up simplify_qdq
* Fix correctness bug in ref q/dq implementations; edit gemm parser to make beta always 0.0
* Remove unused variables in onnx gemm tests
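One way to see why leftover q/dq pairs can simply be removed: dequantize(quantize(x)) reproduces x up to quantization error. A small NumPy sketch with made-up values, using float math for dequantize as the list above mentions (not the actual MIGraphX reference code):

```python
import numpy as np

def quantize(x, scale, zp):
    return np.clip(np.rint(x / scale) + zp, -128, 127).astype(np.int8)

def dequantize(q, scale, zp):
    # Float math, as in the "switch dequantizelinear math from int32 to float" item.
    return (q.astype(np.float32) - np.float32(zp)) * np.float32(scale)

x = np.array([0.1, -0.75, 2.0], dtype=np.float32)
scale, zp = 0.05, 0
roundtrip = dequantize(quantize(x, scale, zp), scale, zp)
print(np.allclose(roundtrip, x, atol=scale))   # True: a dq(q(x)) pair is ~identity
```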