1. 22 Feb, 2022 2 commits
  2. 19 Feb, 2022 2 commits
  3. 16 Feb, 2022 2 commits
  4. 11 Feb, 2022 1 commit
  5. 09 Feb, 2022 2 commits
  6. 08 Feb, 2022 2 commits
  7. 02 Feb, 2022 1 commit
    • Update trace_eval to preview the output buffers (#1073) · b20e3d4d
      Paul Fultz II authored
      Currently, MIGRAPHX_TRACE_EVAL=2 prints the entire output buffer, which can produce a lot of output. To make inspecting and debugging easier, MIGRAPHX_TRACE_EVAL=2 now prints only 10 elements from the buffer (the first 5 and the last 5) and shows any floating-point classifications found in the buffer (i.e. NaNs, infinities, etc.). The previous behavior can still be enabled with MIGRAPHX_TRACE_EVAL=3.
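The described preview behavior can be sketched in a few lines of plain Python (a hypothetical helper, not MIGraphX's actual implementation): show the first and last five elements and report any floating-point classifications found anywhere in the buffer.

```python
import math

def preview_buffer(buf, n=5):
    """Sketch of the preview: first and last n elements, plus any
    floating-point classifications (nan, inf) found in the buffer."""
    if len(buf) <= 2 * n:
        shown = list(buf)
    else:
        shown = list(buf[:n]) + ["..."] + list(buf[-n:])
    classes = set()
    for x in buf:
        if math.isnan(x):
            classes.add("nan")
        elif math.isinf(x):
            classes.add("inf")
    return shown, classes
```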
  8. 31 Jan, 2022 1 commit
  9. 28 Jan, 2022 2 commits
  10. 27 Jan, 2022 1 commit
  11. 26 Jan, 2022 1 commit
    • Add HardSwish op ONNX parser (#1066) · 7477aeb8
      turneram authored
      Add HardSwish to the HardSigmoid parser.
      
      The HardSwish formula is y = x * HardSigmoid<alpha=1/6, beta=0.5>(x).
      The HardSigmoid parser sets alpha to 1/6 and adds the mul instruction when the op name is HardSwish.
      
      Resolves #1062
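The formula above can be checked with a small scalar sketch (plain Python, not the MIGraphX pointwise instructions themselves):

```python
def hard_sigmoid(x, alpha=1 / 6, beta=0.5):
    # HardSigmoid(x) = max(0, min(1, alpha * x + beta))
    return max(0.0, min(1.0, alpha * x + beta))

def hard_swish(x):
    # HardSwish(x) = x * HardSigmoid<alpha=1/6, beta=0.5>(x)
    return x * hard_sigmoid(x)
```

For x >= 3 the function passes x through unchanged, and for x <= -3 it is exactly zero, matching the clipped hard-sigmoid gate.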
  12. 21 Jan, 2022 4 commits
  13. 17 Jan, 2022 1 commit
  14. 11 Jan, 2022 1 commit
    • HardSigmoid ONNX parser (#1040) · fc42d852
      turneram authored
      Add the HardSigmoid ONNX parser and unit tests.
      Produces a mathematical equivalent of the ONNX operator through a combination of existing pointwise ops.
      Resolves #1028
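As a sketch of the pointwise composition (using the ONNX HardSigmoid defaults alpha=0.2, beta=0.5), the operator reduces to a mul, an add, and a clip to [0, 1]:

```python
def hard_sigmoid(x, alpha=0.2, beta=0.5):
    # Composed from existing pointwise ops: mul, add, then clip to [0, 1].
    # alpha/beta defaults are the ONNX HardSigmoid defaults.
    return max(0.0, min(1.0, alpha * x + beta))
```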
  15. 10 Jan, 2022 1 commit
  16. 05 Jan, 2022 1 commit
  17. 09 Dec, 2021 2 commits
    • Softmax perf optimization (#1014) · 2e337c7f
      Shucai Xiao authored
      Changed the number of threads in a block from 256 to 128.
      Increased the max number of blocks in the kernel from 256 to 1M.
      For the case where the axis is the last dimension, we removed the index computation since it is not required.
      
      With these changes, we get about a 2x speedup over the develop branch for the softmax op used in the BertSquad model.
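The new launch parameters can be sketched as follows (a hypothetical helper with illustrative names, not MIGraphX's actual code): 128 threads per block, and the block count clamped at roughly 1M instead of 256.

```python
def softmax_launch_config(nelements, block_size=128, max_blocks=1 << 20):
    # 128 threads per block, up to ~1M blocks (previously 256 of each).
    nblocks = min((nelements + block_size - 1) // block_size, max_blocks)
    return nblocks, block_size
```

Raising the block cap matters because with only 256 blocks, large reductions had to loop many times inside each block instead of spreading work across the GPU.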
    • Fuse last instruction in fuse_pointwise (#1015) · e758d457
      Paul Fultz II authored
      Fuse the last instruction in fuse_pointwise.
      This also fixes a bug with using an invalid iterator.
  18. 08 Dec, 2021 1 commit
  19. 07 Dec, 2021 1 commit
  20. 02 Dec, 2021 1 commit
  21. 30 Nov, 2021 2 commits
  22. 25 Nov, 2021 1 commit
    • Non std shape auto contiguous (#1001) · 2d4dcc47
      Shucai Xiao authored
      Resolves a problem in parsing the ssd-10 model.
      
      The problem is that, after inserting a contiguous in the auto_contiguous pass, the standard output shape of some operators becomes non-standard. Then, if the next operator requires a standard input shape, an exception is thrown.
      
      For example, consider the following model:
      Input (standard shape) -> transpose (transposed) -> softmax (transposed) -> transpose (standard) -> gather.
      It works fine, and no contiguous is required.
      
      In the auto_contiguous pass, a contiguous is inserted after the first transpose. We then replace the first transpose with the contiguous and recompute all shapes. When it comes to the gather operator, its input now has a transposed shape, and an exception is thrown.
      
      The solution is in the recompute_shape() function: if it is called from the auto_contiguous pass and the shape of an instruction changes to a non-standard shape, we do not recompute the shapes of its outputs. The reason is that, since the output shape is non-standard, a contiguous op will be added after the instruction, which will trigger shape recomputation for the later operators.
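The standard/non-standard distinction above can be illustrated with a small stride sketch (plain Python, assuming "standard" means packed row-major strides, which is what a transpose breaks and a contiguous op restores):

```python
def standard_strides(lens):
    # Packed row-major strides, in elements, for the given dimensions.
    strides = [1] * len(lens)
    for i in reversed(range(len(lens) - 1)):
        strides[i] = strides[i + 1] * lens[i + 1]
    return strides

def is_standard(lens, strides):
    # A shape is standard when its strides are exactly the packed ones.
    return strides == standard_strides(lens)

# A 2x3 tensor has standard strides [3, 1]; its transpose is a 3x2 view
# with strides [1, 3], which is non-standard, so a contiguous op must
# repack the data before operators that require a standard input shape.
```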
  23. 24 Nov, 2021 1 commit
  24. 22 Nov, 2021 1 commit
  25. 18 Nov, 2021 1 commit
  26. 17 Nov, 2021 1 commit
    • Handle removing contiguous on operators that use modules (#1005) · 785307c3
      Paul Fultz II authored
      Currently, eliminate_contiguous never removes a contiguous for operators that use module inputs, because it does not pass the module inputs to compute_shape.
      
      - Update eliminate_contiguous to pass the module inputs correctly to compute_shape.
      - Fix the overloads of compute_shape so that, when passed an empty vector of module inputs, they call the overload without module inputs.
      - Add tests with a contiguous and a pointwise module function.
      - Move the add_pointwise function to a separate header so it can be reused across different tests.
  27. 15 Nov, 2021 3 commits