1. 10 Jun, 2022 1 commit
  2. 07 Jun, 2022 1 commit
  3. 03 Jun, 2022 1 commit
      Group code objects by kernel name in perf report summary (#1234) · 7271ddbc
      Paul Fultz II authored
      Break up the gpu::code_object print to show the actual kernels:
      
      gpu::code_object::add_kernel: 0.646121ms, 5%
      gpu::code_object::mul_kernel: 0.623822ms, 5%
      gpu::code_object::add_mul_erf_add_mul_mul_kernel: 0.498902ms, 4%
      gpu::code_object::mul_add_kernel: 0.478352ms, 4%
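The grouped summary above can be approximated by accumulating per-kernel timings into a map keyed by kernel name. A minimal sketch, assuming hypothetical names and structure (this is not MIGraphX's actual perf-report code):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// One timing sample from a perf run: the kernel's name and its elapsed time.
struct sample { std::string kernel; double ms; };

// Group samples by kernel name, summing the time spent in each kernel,
// so the report can print one line per kernel instead of per invocation.
std::map<std::string, double> group_by_kernel(const std::vector<sample>& samples)
{
    std::map<std::string, double> totals;
    for (const auto& s : samples)
        totals["gpu::code_object::" + s.kernel] += s.ms;
    return totals;
}
```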
  4. 02 Jun, 2022 2 commits
  5. 01 Jun, 2022 1 commit
  6. 31 May, 2022 1 commit
  7. 30 May, 2022 1 commit
      Improve eliminate contiguous pass (#1223) · 86061b4d
      shivadbhavsar authored
      Following up on issue #1166 and PR #1220. Using the same approach as in #1220 for parallelizing the eval calls, we can significantly reduce the time spent on the eliminate_contiguous pass.
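The idea behind the pass can be illustrated with a simplified sketch: a contiguous op is redundant when its input is already in standard (packed, row-major) layout, and the candidate checks are independent, so they can run in parallel. All names here are hypothetical, not MIGraphX's actual code:

```cpp
#include <cassert>
#include <future>
#include <vector>

// Simplified stand-in for a tensor shape: lengths plus strides.
struct shape_t {
    std::vector<int> lens, strides;
    // A shape is "standard" when its strides are the packed row-major
    // strides, in which case a contiguous copy would be a no-op.
    bool standard() const {
        std::vector<int> packed(lens.size());
        int s = 1;
        for (int i = (int)lens.size() - 1; i >= 0; --i) { packed[i] = s; s *= lens[i]; }
        return strides == packed;
    }
};

// Check candidates in parallel; return flags marking removable contiguous ops.
std::vector<bool> find_removable(const std::vector<shape_t>& inputs) {
    std::vector<std::future<bool>> futs;
    for (const auto& in : inputs)
        futs.push_back(std::async(std::launch::async, [&in] { return in.standard(); }));
    std::vector<bool> removable;
    for (auto& f : futs) removable.push_back(f.get());
    return removable;
}
```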
  8. 27 May, 2022 1 commit
  9. 26 May, 2022 2 commits
      Parallelize evaluations in propagate_constant (#1220) · bf603a76
      shivadbhavsar authored
      Addressing issue #1166. The propagate_constant pass currently uses a recursive approach to find all instructions in a module that can be evaluated to a literal, and performs the replacement in the same call.
      
      New approach:
      1. Perform a single pass through the instructions in the module to determine which can be evaluated
      2. Evaluate the selected instructions in parallel
      3. Replace the selected instructions with the corresponding literals
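The three phases above can be sketched with std::async doing the parallel evaluation. This is a toy illustration under hypothetical names, not the pass's actual implementation:

```cpp
#include <cassert>
#include <functional>
#include <future>
#include <vector>

// A toy instruction: either already a literal, or computable from constants.
struct instr {
    bool is_literal;
    bool all_inputs_constant;
    std::function<double()> eval; // evaluates the instruction to a value
    double literal = 0.0;
};

// Phase 1: single pass to select which instructions can be folded.
// Phase 2: evaluate the selected instructions in parallel.
// Phase 3: replace each selected instruction with its computed literal.
void propagate_constants(std::vector<instr>& module) {
    std::vector<std::size_t> selected;
    for (std::size_t i = 0; i < module.size(); ++i)
        if (!module[i].is_literal && module[i].all_inputs_constant)
            selected.push_back(i);

    std::vector<std::future<double>> results;
    for (std::size_t i : selected)
        results.push_back(std::async(std::launch::async, module[i].eval));

    for (std::size_t k = 0; k < selected.size(); ++k) {
        module[selected[k]].literal = results[k].get();
        module[selected[k]].is_literal = true;
    }
}
```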
      Upgrade to cppcheck 2.8 and fix new issues found (#1225) · a401e72a
      Paul Fultz II authored
      * Upgrade to cppcheck 2.8
  10. 25 May, 2022 2 commits
  11. 24 May, 2022 4 commits
  12. 20 May, 2022 2 commits
  13. 17 May, 2022 1 commit
  14. 13 May, 2022 1 commit
      Update install_prereqs.sh for individual use (#1197) · 8c94ad07
      Chris Austen authored
      Our documentation indicates that a user with sudo can run the install_prereqs.sh file. It turns out the file is not complete enough to run independently on Ubuntu 18.04/20.04; I updated the file to resolve the failures.
      
      resolves #1191
  15. 11 May, 2022 2 commits
  16. 10 May, 2022 1 commit
  17. 09 May, 2022 1 commit
  18. 06 May, 2022 2 commits
  19. 05 May, 2022 1 commit
      Cppcheck fixes (#1195) · d582425b
      Paul Fultz II authored
      Fixes the #error when using cppcheck. Cppcheck errors are no longer suppressed in included headers, and the cppcheck errors that were already there are fixed.
  20. 03 May, 2022 1 commit
  21. 02 May, 2022 1 commit
  22. 29 Apr, 2022 1 commit
  23. 27 Apr, 2022 1 commit
      Add lane reduction (#1180) · 4c72cc95
      Paul Fultz II authored
      With reductions such as {2048, 2, 1456} on axis 1, this is 23x faster than using our new block_reduce, and even over 100x faster than our original reduce_sum:
      
      # lane
      gpu::code_object[code_object=13736,symbol_name=kernel,global=2981888,local=1024,]: 0.0672928ms
      # block
      gpu::code_object[code_object=13800,symbol_name=kernel,global=39321600,local=64,]: 1.46072ms
      # original
      gpu::reduce_sum[axes={1}]: 6.73456ms
      There is some basic logic to pick between lane and block reduce automatically.
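A CPU analogue may help show why a lane reduce wins here: each lane owns one output element and sums the reduced axis on its own, with no cross-lane cooperation, which pays off when the reduced axis is tiny (e.g. 2) and there are many independent outputs. This is a hypothetical sketch, not the actual GPU kernel:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// CPU analogue of a "lane" reduce_sum over axis 1 of a [d0, d1, d2] tensor:
// each lane (here, one (i, k) iteration) accumulates its own d1 values
// independently, analogous to one thread per output element on the GPU.
std::vector<float> lane_reduce_sum_axis1(const std::vector<float>& x,
                                         std::size_t d0, std::size_t d1, std::size_t d2)
{
    std::vector<float> out(d0 * d2, 0.0f);
    for (std::size_t i = 0; i < d0; ++i)
        for (std::size_t k = 0; k < d2; ++k)
            for (std::size_t j = 0; j < d1; ++j)
                out[i * d2 + k] += x[(i * d1 + j) * d2 + k];
    return out;
}
```

A block reduce would instead have many threads cooperate (with shared memory and synchronization) on a single output, which only pays off when the reduced axis is large.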
  24. 26 Apr, 2022 1 commit
  25. 23 Apr, 2022 1 commit
      ReverseSequence op (#1177) · 31906785
      Charlie Lin authored
      Implements the ReverseSequence ONNX operator as a parser.
      
      This parser can only handle a constant sequence_lens input. This is the same as what is handled for TensorRT as far as I can tell.
      We could handle a variable sequence_lens input; that would require ref and GPU implementations of the operator.
      The ONNX backend tests are disabled because this does not handle variable sequence_lens.
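The operator's semantics can be sketched for the batch_axis=0, time_axis=1 case: for batch i, reverse the first sequence_lens[i] time steps and leave the rest untouched. A minimal illustration assuming constant sequence_lens, as the parser requires (hypothetical code, not the actual parser):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of ReverseSequence with batch_axis=0 and time_axis=1: each row is
// one batch entry; only its first sequence_lens[i] elements are reversed.
std::vector<std::vector<int>> reverse_sequence(std::vector<std::vector<int>> x,
                                               const std::vector<std::size_t>& sequence_lens)
{
    for (std::size_t i = 0; i < x.size(); ++i)
        std::reverse(x[i].begin(), x[i].begin() + sequence_lens[i]);
    return x;
}
```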
  26. 19 Apr, 2022 1 commit
      Refactor Pooling and implement ONNX LpPool and GlobalLpPool (#1152) · 764273e4
      Charlie Lin authored
      Refactored the reference implementation of pooling to something like what was done for roialign. Moved the reference implementation of pooling from targets/ref/lowering.cpp to pooling.hpp.
      - Removed cpu_pooling, instead using the reference pooling in pooling.hpp
      - Added a reference implementation of Lp norm pooling and the global version
      - Added tests for the Lp norm pooling
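Lp norm pooling computes (Σ|x|^p)^(1/p) over each window; the global version is the special case where the window spans the whole input. A 1-D sketch of that formula (hypothetical helper, not the pooling.hpp implementation):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// 1-D sketch of Lp norm pooling: each output is (sum |x|^p)^(1/p) over a
// window of the given size, stepping by `stride`. Global Lp pooling is the
// special case window == x.size().
std::vector<double> lp_pool_1d(const std::vector<double>& x,
                               std::size_t window, std::size_t stride, double p)
{
    std::vector<double> out;
    for (std::size_t start = 0; start + window <= x.size(); start += stride) {
        double acc = 0.0;
        for (std::size_t i = start; i < start + window; ++i)
            acc += std::pow(std::fabs(x[i]), p);
        out.push_back(std::pow(acc, 1.0 / p));
    }
    return out;
}
```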
  27. 17 Apr, 2022 1 commit
      Reduce with runtime compilation (#1150) · f9a5b81e
      Paul Fultz II authored
      There is significant improvement on larger tensors, with the half type almost 50% faster:
      
      lens: [1024, 384, 768]
      gpu::code_object[code_object=13832,symbol_name=kernel,global=39321600,local=256,]: 1.16685ms
      gpu::reduce_sum[axes={2}]: 1.73126ms
      Also for non-trivial layouts this can sometimes be over 2x faster:
      
      lens: [64, 1024, 768, 4]
      gpu::code_object[code_object=13832,symbol_name=kernel,global=39321600,local=256,]: 1.1706ms
      gpu::reduce_sum[axes={1}]: 2.63375ms
      Of course, if the stride becomes larger this speed improvement diminishes due to poor memory access patterns. A lane_reduce instead of a block_reduce is needed for such kernels. I plan to address that in a future PR.
      
      Finally, this also includes a MIGRAPHX_GPU_DUMP_ASM env variable which will print out the assembly when the kernel compiles.
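An env-var switch like MIGRAPHX_GPU_DUMP_ASM typically gates the dump on the variable being set to a non-empty value. A small sketch of that pattern (hypothetical helper, not MIGraphX's actual env handling; assumes a POSIX environment):

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Returns true when the named environment variable is set to a non-empty
// value — the usual shape of a debug switch such as MIGRAPHX_GPU_DUMP_ASM.
bool env_flag_enabled(const char* name)
{
    const char* v = std::getenv(name);
    return v != nullptr && std::string(v) != "";
}
```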
  28. 14 Apr, 2022 2 commits
      Half2 overloads (#1157) · 12007dba
      bpickrel authored
      Issue #1127: Updates the math.hpp header file to overload various standard functions (ops) for the hip half2 type. The half2 type is two 16-bit floats packed into a 32-bit value, so the overloads act on vectors whose sizes are multiples of 2. They are invoked in runtime compilation any time one of the ops is called on a tensor declared with the data type shape::half_type.
      
      Defined a new template and instantiated it for the math operations that the hip library contains. Added verify tests for the sqrt operator for three cases:
      - tensor size not divisible by 2
      - tensor size divisible by 2 but not by 4
      - tensor size divisible by 4
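The overload pattern is: define the math op once for the packed type, applying the scalar op to each packed element. A toy sketch using a float-based stand-in (not hip's actual half2 type or its intrinsics):

```cpp
#include <cassert>
#include <cmath>

// Toy stand-in for a packed 2-wide type like hip's half2 (floats here, for
// portability). A real half2 packs two 16-bit floats into 32 bits.
struct pack2 { float x, y; };

// Elementwise sqrt overload, analogous to overloading the math ops in
// math.hpp for half2: apply the scalar op to each packed element.
pack2 sqrt(pack2 v) { return {std::sqrt(v.x), std::sqrt(v.y)}; }
```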
      Fix file download for resnet50 example (#1164) · a930f1d5
      kahmed10 authored
      Update the path to where the file is located.
  29. 13 Apr, 2022 1 commit
  30. 12 Apr, 2022 1 commit