- 22 Oct, 2022 1 commit
  - Paul authored
- 21 Oct, 2022 2 commits
- 20 Oct, 2022 7 commits
- 17 Oct, 2022 6 commits
- 16 Oct, 2022 1 commit
  - Paul authored
- 12 Oct, 2022 2 commits
- 09 Oct, 2022 6 commits
- 08 Oct, 2022 2 commits
- 07 Oct, 2022 4 commits
- 04 Oct, 2022 1 commit
  - Paul Fultz II authored: Optimize the softmax operator
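
The commit body isn't shown in this log, but a common softmax optimization is the numerically stable max-subtraction form with the exponentiation and the sum fused into a single pass. A minimal CPU sketch of that idea (illustrative only, not MIGraphX's actual kernel):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Numerically stable softmax over one row: subtracting the row max
// before exponentiating prevents overflow, and accumulating the sum
// while writing exp(x - max) saves a separate traversal.
std::vector<float> softmax(const std::vector<float>& x)
{
    const float max_val = *std::max_element(x.begin(), x.end());
    std::vector<float> out(x.size());
    float sum = 0.0f;
    for(std::size_t i = 0; i < x.size(); ++i)
    {
        out[i] = std::exp(x[i] - max_val);
        sum += out[i];
    }
    for(auto& v : out)
        v /= sum;
    return out;
}
```
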
- 29 Sep, 2022 1 commit
  - Umang Yadav authored: Improvements/additions still to be made: changes for quant_convolution, changes for deconvolution, and macros for MIOpen status checks
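
The status-check macros themselves aren't shown in the log. A typical shape for such a check, assuming MIOpen's miopenStatus_t / miopenStatusSuccess conventions (the macro name here is hypothetical, not the one added in the commit):

```cpp
#include <miopen/miopen.h>
#include <sstream>
#include <stdexcept>

// Hypothetical status-check macro: wrap each MIOpen call so that a
// non-success status throws with the failing expression and the
// numeric status code.
#define MIOPEN_CHECK(expr)                                          \
    do                                                              \
    {                                                               \
        const miopenStatus_t status_ = (expr);                      \
        if(status_ != miopenStatusSuccess)                          \
        {                                                           \
            std::ostringstream oss_;                                \
            oss_ << "MIOpen call failed: " #expr " -> " << status_; \
            throw std::runtime_error(oss_.str());                   \
        }                                                           \
    } while(false)

// Usage: MIOPEN_CHECK(miopenCreate(&handle));
```
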
- 26 Sep, 2022 1 commit
  - Paul Fultz II authored
- 21 Sep, 2022 1 commit
  - kahmed10 authored: This PR allows other values of epsilon to be matched when finding layernorm, and the calculation now uses that epsilon variable.
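
For reference, the calculation the layernorm matcher targets is y = (x - mean) / sqrt(variance + epsilon). A minimal sketch that takes epsilon as a variable rather than a fixed constant, in the spirit of the change (not the MIGraphX source):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Layernorm over one row: y = (x - mean) / sqrt(variance + eps).
// Passing eps in, instead of hard-coding one value, mirrors matching
// whatever epsilon the original graph used.
std::vector<float> layernorm(const std::vector<float>& x, float eps)
{
    float mean = 0.0f;
    for(float v : x)
        mean += v;
    mean /= x.size();

    float var = 0.0f;
    for(float v : x)
        var += (v - mean) * (v - mean);
    var /= x.size();

    std::vector<float> y(x.size());
    const float inv = 1.0f / std::sqrt(var + eps);
    for(std::size_t i = 0; i < x.size(); ++i)
        y[i] = (x[i] - mean) * inv;
    return y;
}
```
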
- 14 Sep, 2022 1 commit
  - Paul Fultz II authored: Implement concat using JIT compilation
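
The JIT machinery itself isn't reproduced in this log, but the work a generated concat kernel performs reduces to copying each input into the output at a running offset along the concat axis. A CPU sketch of that copy for 2D row-major tensors concatenated along axis 1 (names are illustrative, not MIGraphX's):

```cpp
#include <cstddef>
#include <vector>

// Concatenate 2D row-major tensors along axis 1: every input lands in
// the output at a column offset equal to the widths of the inputs
// before it.
std::vector<float> concat_axis1(const std::vector<std::vector<float>>& inputs,
                                std::size_t rows,
                                const std::vector<std::size_t>& cols)
{
    std::size_t out_cols = 0;
    for(auto c : cols)
        out_cols += c;

    std::vector<float> out(rows * out_cols);
    std::size_t offset = 0;
    for(std::size_t t = 0; t < inputs.size(); ++t)
    {
        for(std::size_t r = 0; r < rows; ++r)
            for(std::size_t c = 0; c < cols[t]; ++c)
                out[r * out_cols + offset + c] = inputs[t][r * cols[t] + c];
        offset += cols[t];
    }
    return out;
}
```
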
- 08 Sep, 2022 1 commit
  - Paul Fultz II authored: Remove unused headers
- 17 Aug, 2022 1 commit
  - Paul Fultz II authored
- 25 Jul, 2022 1 commit
  - Ted Themistokleous authored: Add changes for the onnx Mod operator: initial mod implementation and test cases for integer and floating-point types. Floating-point types need fmod from the standard library; looking at the half.hpp implementation, half_float::half is thankfully specced to use the existing std::fmod() call. fmod_flag mirrors the onnx fmod attribute; right now, using a floating-point type without setting it to true on the user side results in an exception. Ref ticket #1283
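
A sketch of the dispatch described above (illustrative, not the MIGraphX source): integral types use the % operator, while floating-point types are only valid with the fmod flag set, in which case std::fmod is used. With fmod unset, the integer result is made to follow the divisor's sign, matching onnx Mod's default Python-style semantics:

```cpp
#include <cmath>
#include <stdexcept>
#include <type_traits>

// Illustrative Mod dispatch: with fmod=0 (the onnx default) the result
// follows the divisor's sign, Python-style; with fmod=1 it follows C
// semantics. Floating-point inputs are only valid with fmod=1.
template <class T>
T mod_op(T a, T b, bool fmod_flag)
{
    if constexpr(std::is_floating_point<T>{})
    {
        if(not fmod_flag)
            throw std::runtime_error("Mod: floating-point inputs require fmod=1");
        return std::fmod(a, b); // C-style remainder, sign of dividend
    }
    else
    {
        // a % b truncates toward zero; the "+ b, % b" adjustment yields
        // the Python-style result whose sign follows the divisor.
        return fmod_flag ? a % b : ((a % b) + b) % b;
    }
}
```
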
- 05 Jul, 2022 1 commit
  - Paul Fultz II authored: Add softmax kernel