- 29 Sep, 2023 (1 commit)
  charlie authored
- 26 Sep, 2023 (1 commit)
  charlie authored
  Softmax fp32, propagate_constant fp64, layernorm fp32
- 02 Jul, 2023 (1 commit)
  Umang Yadav authored
- 22 Jun, 2022 (1 commit)
  Ted Themistokleous authored
  Updated each source file in the repo with the existing license.
- 07 Sep, 2021 (1 commit)
  Shucai Xiao authored
  Add operators, refactor parsers, add rewrite passes, add tests
  Add ref implementations
  Move broadcasting of scales and zero points to onnx parser
  Allow for x and zero_point to have different types in quantizelinear; fix zero_point default type
  fp16 and fp8 quantization to include subgraph and parameters
  Fix unit test to use qdq operators for int8 quantization
  Co-authored-by: turneram <alturner@amd.com>
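The last entry above touches QuantizeLinear behavior (scale/zero_point handling and the zero_point default type). As context, a minimal NumPy sketch of the standard ONNX QuantizeLinear formula, y = saturate(round(x / scale) + zero_point); this is illustrative only, not the repository's implementation, and the function name and example values are assumptions:

```python
import numpy as np

def quantizelinear(x, scale, zero_point):
    """Illustrative ONNX-style QuantizeLinear.

    Rounds x / scale half-to-even, adds zero_point, then saturates to the
    range of zero_point's integer dtype (e.g. int8: [-128, 127]). The output
    dtype follows zero_point, so x (float) and zero_point (int) differ in type.
    """
    qtype = zero_point.dtype
    info = np.iinfo(qtype)
    # Accumulate in int64 to avoid overflow before saturation.
    y = np.rint(x / scale).astype(np.int64) + zero_point.astype(np.int64)
    return np.clip(y, info.min, info.max).astype(qtype)

x = np.array([-1.5, 0.0, 0.5, 300.0], dtype=np.float32)
out = quantizelinear(x, np.float32(2.0), np.array(0, dtype=np.int8))
print(out)  # 300/2 = 150 saturates to 127 for int8
```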