- 18 Nov, 2023 (1 commit)
  - Umang Yadav authored
- 17 Nov, 2023 (23 commits)
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - dependabot[bot] authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Zakor Gyula authored
  - Umang Yadav authored
    Handles all four FP8 dtypes listed at https://onnx.ai/onnx/technical/float8.html and follows the saturation/clipping logic from the cast table at https://onnx.ai/onnx/technical/float8.html#cast. Only fp8e4m3fnuz is added to the MIGraphX IR for now. (See the FP8 cast sketch after this day's commits.)
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
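For the FP8 cast commit above, a minimal sketch of the ONNX-level behavior it refers to, assuming the standard `onnx` Python package rather than MIGraphX's own API; the graph and tensor names are illustrative only:

```python
# Minimal sketch (assumed: a recent `onnx` Python package, not MIGraphX's API)
# of a Cast to fp8e4m3fnuz. The `saturate` attribute picks the clipping rule
# described at https://onnx.ai/onnx/technical/float8.html#cast.
from onnx import TensorProto, helper

cast = helper.make_node(
    "Cast",
    inputs=["x"],
    outputs=["y"],
    to=TensorProto.FLOAT8E4M3FNUZ,  # the one FP8 type added to the MIGraphX IR so far
    saturate=1,  # 1: out-of-range values clip to the largest finite value; 0: they become NaN
)

graph = helper.make_graph(
    [cast],
    "fp8_cast_sketch",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [4])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT8E4M3FNUZ, [4])],
)

# Cast with float8 targets and the saturate attribute requires opset 19+.
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 19)])
```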
- 16 Nov, 2023 (11 commits)
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Artur Wojcik authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
- 15 Nov, 2023 (5 commits)
  - shivadbhavsar authored
    Reworked the simplify_qdq pass to support per-axis quantization (i.e., allow 1D scales and zero points) and to allow broadcast and transpose ops between dq and quant_op. (See the per-axis dequantize sketch at the end of this log.)
  - nives-vukovic authored
    Since ONNX opset version 14, a layout attribute has been available on the LSTM operator that allows two predefined layouts for the input and output shapes. Adds the corresponding reference, onnx, and verify tests. (See the LSTM layout sketch at the end of this log.)
  - Umang Yadav authored
  - Umang Yadav authored
  - Umang Yadav authored
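For the simplify_qdq commit above, a small NumPy sketch of what per-axis quantization means at the dequantize step; the function name, shapes, and values are assumptions for illustration, not MIGraphX code:

```python
# NumPy sketch (illustrative only, not MIGraphX code) of per-axis dequantization:
# `scale` and `zero_point` are 1D, one entry per slice along `axis`, and are
# broadcast against the quantized tensor.
import numpy as np

def dequantize_per_axis(x_q, scale, zero_point, axis):
    # Reshape the 1D scale/zero_point so they broadcast along `axis`.
    shape = [1] * x_q.ndim
    shape[axis] = -1
    scale = scale.reshape(shape)
    zero_point = zero_point.reshape(shape)
    return (x_q.astype(np.float32) - zero_point.astype(np.float32)) * scale

x_q = np.random.randint(-128, 128, size=(2, 3, 4), dtype=np.int8)
scale = np.array([0.1, 0.2, 0.05], dtype=np.float32)  # one scale per channel (axis=1)
zero_point = np.array([0, 5, -3], dtype=np.int8)       # one zero point per channel
print(dequantize_per_axis(x_q, scale, zero_point, axis=1).shape)  # (2, 3, 4)
```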
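For the LSTM layout commit above, a hedged sketch of the two layouts the opset-14 `layout` attribute selects, built with the standard `onnx` helper API rather than the new tests themselves; the dimension values are arbitrary:

```python
# Sketch (assumed: the standard `onnx` helper API) of the LSTM `layout`
# attribute added in ONNX opset 14. Shapes follow the ONNX spec:
#   layout=0 (default): X is [seq_length, batch_size, input_size]
#   layout=1:           X is [batch_size, seq_length, input_size]
# and Y, Y_h, Y_c are laid out accordingly.
from onnx import TensorProto, helper

seq_len, batch, input_size, hidden = 5, 2, 8, 16

lstm = helper.make_node(
    "LSTM",
    inputs=["X", "W", "R"],
    outputs=["Y", "Y_h", "Y_c"],
    hidden_size=hidden,
    layout=1,  # batch-first input and output shapes
)

graph = helper.make_graph(
    [lstm],
    "lstm_layout_sketch",
    [
        helper.make_tensor_value_info("X", TensorProto.FLOAT, [batch, seq_len, input_size]),
        helper.make_tensor_value_info("W", TensorProto.FLOAT, [1, 4 * hidden, input_size]),
        helper.make_tensor_value_info("R", TensorProto.FLOAT, [1, 4 * hidden, hidden]),
    ],
    # With layout=1, Y is [batch_size, seq_length, num_directions, hidden_size].
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [batch, seq_len, 1, hidden])],
)

model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 14)])
```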