1. 17 Feb, 2022 1 commit
  2. 16 Feb, 2022 2 commits
  3. 11 Feb, 2022 2 commits
  4. 09 Feb, 2022 2 commits
  5. 08 Feb, 2022 3 commits
  6. 02 Feb, 2022 1 commit
    • Update trace_eval to preview the output buffers (#1073) · b20e3d4d
      Paul Fultz II authored
      Currently, MIGRAPHX_TRACE_EVAL=2 prints the entire output buffer, which can produce a lot of output. To make it easier to inspect and debug, MIGRAPHX_TRACE_EVAL=2 now prints only 10 elements from the buffer (the first 5 and the last 5) and reports any fp classifications found in the buffer (i.e. NaNs, infinities, etc.). The previous behavior can still be enabled with MIGRAPHX_TRACE_EVAL=3.
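      A minimal sketch of what such a preview could look like, assuming a plain std::vector<float> buffer; preview_buffer and the output format here are illustrative, not the actual MIGraphX helpers:

      #include <cmath>
      #include <cstddef>
      #include <cstdio>
      #include <vector>

      // Sketch only: print the first 5 and last 5 elements of a buffer and
      // report any non-finite floating-point values found in it.
      void preview_buffer(const std::vector<float>& data)
      {
          const std::size_t n    = data.size();
          const std::size_t edge = 5;
          if(n <= 2 * edge)
          {
              for(std::size_t i = 0; i < n; i++)
                  std::printf("%g ", data[i]);
          }
          else
          {
              for(std::size_t i = 0; i < edge; i++)
                  std::printf("%g ", data[i]);
              std::printf("... ");
              for(std::size_t i = n - edge; i < n; i++)
                  std::printf("%g ", data[i]);
          }
          std::printf("\n");

          std::size_t nans = 0, infs = 0;
          for(float x : data)
          {
              if(std::isnan(x))
                  nans++;
              else if(std::isinf(x))
                  infs++;
          }
          if(nans > 0 || infs > 0)
              std::printf("fp classifications: %zu nan, %zu inf\n", nans, infs);
      }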
  7. 01 Feb, 2022 1 commit
  8. 31 Jan, 2022 1 commit
  9. 28 Jan, 2022 3 commits
  10. 27 Jan, 2022 1 commit
  11. 26 Jan, 2022 1 commit
    • Add HardSwish op ONNX parser (#1066) · 7477aeb8
      turneram authored
      Add HardSwish support to the HardSigmoid parser.
      
      The HardSwish formula is y = x * HardSigmoid<alpha=1/6, beta=0.5>(x).
      The HardSigmoid parser sets alpha to 1/6 and appends the mul instruction when the op name is HardSwish.
      
      Resolves #1062
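      A scalar sketch of the formula above; hard_swish is a hypothetical name for illustration, not the parser code:

      #include <algorithm>

      // HardSwish(x) = x * HardSigmoid<alpha=1/6, beta=0.5>(x)
      //              = x * max(0, min(1, x / 6 + 0.5))
      float hard_swish(float x)
      {
          return x * std::max(0.0f, std::min(1.0f, x / 6.0f + 0.5f));
      }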
  12. 21 Jan, 2022 4 commits
  13. 20 Jan, 2022 2 commits
  14. 17 Jan, 2022 1 commit
  15. 11 Jan, 2022 1 commit
    • HardSigmoid ONNX parser (#1040) · fc42d852
      turneram authored
      Add the HardSigmoid ONNX parser and unit tests.
      Produces a mathematical equivalent of the ONNX operator through a combination of existing pointwise ops.
      Resolves #1028
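      One way to read "combination of existing pointwise ops" is the chain mul → add → clip below; this is a sketch under that assumption, not the actual parser output:

      #include <algorithm>
      #include <cstddef>
      #include <vector>

      // Sketch: HardSigmoid(x) = clip(alpha * x + beta, 0, 1), expressed as a
      // chain of elementwise (pointwise) steps.
      std::vector<float> hard_sigmoid(const std::vector<float>& x, float alpha, float beta)
      {
          std::vector<float> y(x.size());
          for(std::size_t i = 0; i < x.size(); i++)
          {
              float t = alpha * x[i];              // mul
              t       = t + beta;                  // add
              y[i]    = std::clamp(t, 0.0f, 1.0f); // clip
          }
          return y;
      }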
  16. 10 Jan, 2022 1 commit
  17. 05 Jan, 2022 1 commit
  18. 10 Dec, 2021 1 commit
  19. 09 Dec, 2021 2 commits
    • Softmax perf optimization (#1014) · 2e337c7f
      Shucai Xiao authored
      Changed the number of threads in a block from 256 to 128.
      Increased the max number of blocks in the kernel from 256 to 1M.
      For the case where the axis is the last dimension, the index computation is removed since it is not required.
      
      With these changes, we get about a 2x speedup over the develop branch for the softmax op used in the BertSquad model.
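      A rough sketch of the launch-configuration arithmetic described above; the numbers come from the commit message, while launch_config and softmax_launch are made-up names for illustration:

      #include <algorithm>
      #include <cstddef>

      struct launch_config
      {
          std::size_t blocks;
          std::size_t threads_per_block;
      };

      // Sketch: 128 threads per block, block count capped at ~1M.
      launch_config softmax_launch(std::size_t nelements)
      {
          const std::size_t threads_per_block = 128;     // was 256
          const std::size_t max_blocks        = 1 << 20; // ~1M, was 256
          std::size_t blocks = (nelements + threads_per_block - 1) / threads_per_block;
          return {std::min(blocks, max_blocks), threads_per_block};
      }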
    • Fuse last instruction in fuse_pointwise (#1015) · e758d457
      Paul Fultz II authored
      Fuse last instruction in fuse_pointwise
      This also fixes a bug with using an invalid iterator.
  20. 08 Dec, 2021 1 commit
  21. 07 Dec, 2021 2 commits
    • Rename reduce_inputs to virtual_inputs (#1021) · 1793cc54
      Paul Fultz II authored
      simple variable rename
    • Test runner match input output using tensor names (#996) · 0f9b4072
      Shucai Xiao authored
      1. The previous implementation assumes the input and output .pb files are ordered, but that is not always the case. We should therefore use the tensor names in the input/output .pb files to match them against the inputs and outputs of the onnx model. (This change applies to the BERT_Squad model.)
      2. When parsing a model with dynamic input shapes, the current implementation uses the default batch_size for the unknown dims, which can cause parsing errors in some cases (e.g. the mask_rcnn model). The fix is to first read an input to get its shape, then use these shapes to parse the onnx model.
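      A simplified sketch of point 1, matching tensors by name instead of by file order; it assumes the .pb tensors are already loaded into a name-to-data map, and all names here are illustrative rather than the actual test-runner code:

      #include <string>
      #include <unordered_map>
      #include <vector>

      using tensor_data = std::vector<float>;

      // Sketch: pair each model input name with the .pb tensor of the same name,
      // rather than relying on the order of the .pb files.
      std::unordered_map<std::string, tensor_data>
      match_inputs(const std::vector<std::string>& model_input_names,
                   const std::unordered_map<std::string, tensor_data>& pb_tensors)
      {
          std::unordered_map<std::string, tensor_data> matched;
          for(const auto& name : model_input_names)
          {
              auto it = pb_tensors.find(name);
              if(it != pb_tensors.end())
                  matched[name] = it->second;
          }
          return matched;
      }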
  22. 05 Dec, 2021 1 commit
  23. 02 Dec, 2021 1 commit
  24. 30 Nov, 2021 2 commits
  25. 25 Nov, 2021 2 commits