1. 18 Mar, 2022 1 commit
  2. 15 Mar, 2022 1 commit
    • Add iterators to kernels tensor_view and fix roialign to work with non-standard shape (#1126) · 31e63991
      Paul Fultz II authored
This adds iterators to tensor_view, which allows kernels to work with non-standard shapes such as those needed by roialign.
      
To improve indexing performance when using the iterators, the shape class was updated to use integral_constants, since the compiler doesn't always fold the const values; an integral_constant at least enforces the folding in the AST.
      
Finally, since index calculations with single integers are improved, I also updated pointwise to use a single index rather than a multi-index, which gives about a 4% improvement in some cases.
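      As a rough illustration of the integral_constant idea (simplified, with made-up names, not the actual tensor_view/shape code):

        #include <cstddef>
        #include <type_traits>

        // Encode a stride as a compile-time value so index arithmetic is a
        // constant expression, rather than relying on the optimizer to fold a
        // runtime const.
        template <std::size_t Stride>
        using stride_c = std::integral_constant<std::size_t, Stride>;

        template <class Stride>
        constexpr std::size_t linear_index(Stride stride, std::size_t i, std::size_t j)
        {
            return i * stride() + j; // stride() is constexpr for an integral_constant
        }

        static_assert(linear_index(stride_c<4>{}, 2, 3) == 11, "folded at compile time");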
  3. 14 Mar, 2022 1 commit
  4. 04 Mar, 2022 1 commit
    • Mode as enum for pooling and roi_align (#1091) · a2e90b5d
      bpickrel authored
Changed the mode values in the pooling and roi_align operator structures from strings to specialized enum classes, with many test and operator-parsing changes to support this. Introduces one new source file, op_enums.cpp.
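      As a sketch of the pattern (names are illustrative; the real definitions live in op_enums.cpp and may differ):

        #include <stdexcept>
        #include <string>

        // Pooling mode as a dedicated enum class instead of a free-form string.
        enum class pooling_mode { average, max };

        // The operator parsers then map the string attribute once, up front.
        inline pooling_mode parse_pooling_mode(const std::string& s)
        {
            if (s == "average") return pooling_mode::average;
            if (s == "max")     return pooling_mode::max;
            throw std::runtime_error("unknown pooling mode: " + s);
        }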
  5. 03 Mar, 2022 3 commits
  6. 02 Mar, 2022 2 commits
  7. 25 Feb, 2022 1 commit
  8. 24 Feb, 2022 1 commit
    • Some cmake fixes and updates (#1088) · cd0a4aa5
      Paul Fultz II authored
      Make doc/CMakeLists.txt standalone
Switch to rocm-cmake modules for documentation generation
      Add CONFIGURE_DEPENDS to file(GLOB) so it will update without an explicit cmake run
      Add STRINGS property for build type to make it easier to switch build types with ccmake
      Various fixes and improvements
  9. 09 Feb, 2022 1 commit
  10. 08 Feb, 2022 2 commits
  11. 28 Jan, 2022 1 commit
  12. 27 Jan, 2022 1 commit
  13. 21 Jan, 2022 1 commit
  14. 10 Jan, 2022 1 commit
  15. 09 Dec, 2021 1 commit
    • Softmax perf optimization (#1014) · 2e337c7f
      Shucai Xiao authored
Changed the number of threads in a block from 256 to 128.
Increased the maximum number of blocks in the kernel launch from 256 to 1M.
For the case where the axis is the last dimension, the index computation is removed since it is not required.
      
With these changes, we get about a 2x speedup over the develop branch for the softmax op used in the BertSquad model.
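      A rough sketch of the launch arithmetic these numbers imply (the helper name is hypothetical, not the actual kernel code):

        #include <algorithm>
        #include <cstddef>

        // 128 threads per block, and allow up to ~1M (2^20) blocks instead of 256.
        inline std::size_t softmax_num_blocks(std::size_t num_elements)
        {
            const std::size_t threads_per_block = 128;
            const std::size_t max_blocks        = std::size_t{1} << 20;
            const std::size_t needed = (num_elements + threads_per_block - 1) / threads_per_block;
            return std::min(needed, max_blocks);
        }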
  16. 08 Dec, 2021 1 commit
  17. 07 Dec, 2021 1 commit
  18. 02 Dec, 2021 1 commit
  19. 30 Nov, 2021 2 commits
  20. 24 Nov, 2021 1 commit
  21. 18 Nov, 2021 1 commit
  22. 11 Nov, 2021 1 commit
    • Conditionally enable pointwise fusion (#992) · 157935ff
      Paul Fultz II authored
This enables the pointwise fusions using the MIGRAPHX_ENABLE_POINTWISE_FUSION env variable. It's disabled by default since the MIOpen fusions need to be refactored.
      
This also adds a compile_ops pass to compile the pointwise modules. All tests except test_gpu_fast_math pass with MIGRAPHX_ENABLE_POINTWISE_FUSION=1 set.
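      Conceptually, the switch is just an environment-variable check like the following; MIGraphX has its own env-var helpers, so this is only a sketch:

        #include <cstdlib>

        inline bool pointwise_fusion_enabled()
        {
            const char* v = std::getenv("MIGRAPHX_ENABLE_POINTWISE_FUSION");
            return v != nullptr && *v != '\0' && *v != '0';
        }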
  23. 09 Nov, 2021 1 commit
  24. 28 Oct, 2021 2 commits
  25. 20 Oct, 2021 1 commit
    • Roialign (#952) · d7653732
      Shucai Xiao authored
Implementation of the roialign operator. For now, we have only the ref implementation; when a model runs on the GPU, execution falls back to the ref implementation.
  26. 08 Oct, 2021 2 commits
  27. 01 Oct, 2021 1 commit
    • Add multinomial op (#954) · 0b7672d7
      turneram authored
      
      
      Add multinomial op to onnx parser with ref and GPU implementations.
      
The onnx parser inserts a literal of shape {batch_size, sample_size} with random values in the range [0, 1) and inserts existing ops to compute the cumulative distribution function (CDF). The multinomial operator multiplies the random values by the sum of the CDF and returns the index of the first element of the CDF that is greater than the result, representing samples randomly drawn from [0, class_size) that follow the log-probability distribution.
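      A minimal sketch of the sampling step described above (plain host code with illustrative names, not the actual ref/GPU kernel):

        #include <cstddef>
        #include <vector>

        // cdf holds the non-normalized running sum over class_size classes;
        // r is a random value in [0, 1). Scale r by the CDF total and return
        // the first index whose CDF entry exceeds it.
        inline std::size_t sample_from_cdf(const std::vector<float>& cdf, float r)
        {
            const float scaled = r * cdf.back();
            for (std::size_t i = 0; i < cdf.size(); ++i)
                if (cdf[i] > scaled)
                    return i;
            return cdf.size() - 1;
        }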
      
      Resolves #821
Co-authored-by: Shucai Xiao <shucai@gmail.com>
  28. 27 Sep, 2021 1 commit
  29. 17 Sep, 2021 2 commits
    • 985f58b0
      Paul Fultz II authored
    • Remove alpha and beta attributes from dot operator (#945) · 9e43cb8b
      Umang Yadav authored
This PR removes the alpha and beta attributes from the dot operator completely.
      
Previously, the dot operator was defined as C = alpha * (A . B) + beta * C, where * is scalar multiplication and . is the dot product or matrix multiplication, depending on the dimensions of the inputs.
      
The aim is to define the dot operator simply as C = A . B, without alpha or beta.
      
To achieve the same effect as alpha and beta, (1) one of the inputs to the dot operator is multiplied by the alpha value, and (2) if beta is present, C is multiplied by beta and added to the output of step 1.
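      Schematically, with toy row-major matrix helpers (illustrative only; the real change rewrites graph instructions rather than computing on raw buffers):

        #include <cstddef>
        #include <vector>

        using mat = std::vector<float>; // row-major, sizes tracked by the caller

        mat scale(float s, mat a) { for (auto& x : a) x *= s; return a; }
        mat add(mat a, const mat& b)
        {
            for (std::size_t i = 0; i < a.size(); ++i) a[i] += b[i];
            return a;
        }
        mat dot(const mat& a, const mat& b, std::size_t m, std::size_t k, std::size_t n)
        {
            mat r(m * n, 0.0f);
            for (std::size_t i = 0; i < m; ++i)
                for (std::size_t j = 0; j < n; ++j)
                    for (std::size_t p = 0; p < k; ++p)
                        r[i * n + j] += a[i * k + p] * b[p * n + j];
            return r;
        }

        // Old semantics expressed with the new attribute-free dot:
        // alpha * (A . B) + beta * C  ==  dot(scale(alpha, A), B) + scale(beta, C)
        mat gemm_like(const mat& a, const mat& b, const mat& c, float alpha, float beta,
                      std::size_t m, std::size_t k, std::size_t n)
        {
            return add(dot(scale(alpha, a), b, m, k, n), scale(beta, c));
        }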
  30. 16 Sep, 2021 1 commit
    • Loop operator (#853) · a275f590
      Shucai Xiao authored
      
      
      Add Loop operator for opset version 13.
Notes: 1) The default maximum iteration number is 10 if none is provided.
2) To change the maximum, a user can set max_loop_iterations in the onnx_options struct when parsing a model (see the sketch below).
3) The shape returned for the scan output is based on max_loop_iterations even if the actual number of loop iterations is smaller. This issue also applies to other operators like NonZero and NonMaxSuppression; issue #948 was created to track it and resolve it later.
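      A usage sketch, assuming the C++ API's onnx_options/parse_onnx; the header path and field spelling should be checked against the source:

        #include <migraphx/onnx.hpp> // assumed header for parse_onnx/onnx_options

        int main()
        {
            migraphx::onnx_options options;
            options.max_loop_iterations = 64; // raise or lower the default cap of 10
            auto prog = migraphx::parse_onnx("model_with_loop.onnx", options);
            (void)prog;
        }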
Co-authored-by: Paul <pfultz2@yahoo.com>
Co-authored-by: mvermeulen <5479696+mvermeulen@users.noreply.github.com>
  31. 10 Sep, 2021 1 commit
  32. 02 Sep, 2021 1 commit
    • Refactor where op (#918) · ebbaf8fc
      turneram authored
Implement the Where operator for the CPU and GPU for better performance.