1. 14 Sep, 2023 1 commit
  2. 08 Sep, 2023 2 commits
  3. 08 Aug, 2023 1 commit
  4. 03 Aug, 2023 1 commit
  5. 25 Jul, 2023 1 commit
  6. 19 Jul, 2023 2 commits
  7. 09 Jul, 2023 1 commit
  8. 06 Jul, 2023 1 commit
    • Enable eval to handle multiple contexts (#1751) · 072fd5cc
      Paul Fultz II authored
      This is to help enable multi-target execution. We store a vector of targets and contexts. Currently this will only compile a single target; PR #1672 is needed to enable multiple targets.
      
      This will also serialize the targets and contexts.
      
      When using the execution_environment or prog.get_context(), it will always use the context from the first target, assuming this is the "primary" target, although it's unlikely a user would use execution_environment with a multi-target environment.
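      A minimal sketch of the idea (illustrative only, not the actual migraphx::program internals; the class and member names below are assumptions): the program keeps parallel vectors of targets and contexts and hands out the first context as the "primary" one.

        #include <cassert>
        #include <string>
        #include <utility>
        #include <vector>

        // Hypothetical stand-ins for migraphx::target and migraphx::context
        struct target  { std::string name; };
        struct context { std::string target_name; };

        class program_state
        {
            // One entry per target; kept in parallel so both can be serialized together
            std::vector<target> targets;
            std::vector<context> contexts;

            public:
            void add_target(target t, context ctx)
            {
                targets.push_back(std::move(t));
                contexts.push_back(std::move(ctx));
            }

            // Mirrors the behavior described above: the first target is treated as primary
            const context& get_context() const
            {
                assert(not contexts.empty());
                return contexts.front();
            }
        };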
  9. 30 Jun, 2023 1 commit
  10. 26 Jun, 2023 1 commit
  11. 14 Jun, 2023 1 commit
  12. 31 May, 2023 1 commit
  13. 30 May, 2023 1 commit
    • Improvements to driver output (#1710) · d32ab85b
      Paul Fultz II authored
      Use generate_argument instead of generate_literal for the Python output, since generate_literal doesn't exist
      Shorten the names for variables from the main module
      Use the prefix p_ for parameters
      Use the shorter variable m for the main module in Python
  14. 06 Apr, 2023 1 commit
    • Driver dynamic batch update (#1652) · adccec52
      Charlie Lin authored
      Examples:
      
      bin/driver verify /codes/onnx_models/resnet50-v1-7/resnet50-v1-7.onnx --split-single-dyn-dim --batch 3 --dyn-input-dim @data "[{min:1, max:4}, 3, 224, 224]"
      
      bin/driver compile /codes/onnx_models/resnet50-v1-7/resnet50-v1-7.onnx --split-single-dyn-dim --default-dyn-dim "{min:1, max:10}" --output resnet50_batch1-10.mxr
      
      bin/driver perf resnet50_batch1-10.mxr --batch 4
  15. 28 Feb, 2023 1 commit
    • Select module op (#1569) · a63ee2e0
      Charlie Lin authored
      Creates the select_module operator, which selects one of the submodules passed to it to run based on the submodule parameters. A submodule is selected when the static shapes of the arguments to select_module exactly match the shapes of the parameters in that submodule.
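      A rough sketch of that selection rule (a simplified illustration, not the real operator, which works on migraphx modules and shapes; the type and function names below are assumptions): pick the submodule whose parameter shapes exactly match the static shapes of the incoming arguments.

        #include <algorithm>
        #include <cstddef>
        #include <stdexcept>
        #include <string>
        #include <vector>

        // Simplified stand-ins for migraphx shapes and submodules
        using static_shape = std::vector<std::size_t>; // lengths only, for illustration

        struct submodule
        {
            std::string name;
            std::vector<static_shape> parameter_shapes;
        };

        // Return the first submodule whose parameter shapes are exactly the argument shapes
        const submodule& select_submodule(const std::vector<submodule>& mods,
                                          const std::vector<static_shape>& arg_shapes)
        {
            auto it = std::find_if(mods.begin(), mods.end(), [&](const submodule& m) {
                return m.parameter_shapes == arg_shapes;
            });
            if(it == mods.end())
                throw std::runtime_error("select_module: no submodule matches the argument shapes");
            return *it;
        }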
  16. 16 Feb, 2023 1 commit
  17. 02 Feb, 2023 1 commit
  18. 14 Dec, 2022 1 commit
  19. 04 Oct, 2022 1 commit
  20. 06 Sep, 2022 1 commit
  21. 21 Aug, 2022 1 commit
    • Update is_supported (#1334) · 79e15ca9
      varunsh authored
      * Update is_supported
      * Return object from is_supported
      * Return by reference in iterator
  22. 04 Aug, 2022 1 commit
    • Dynamic ref convolution op (#1224) · 67f77ac1
      Charlie Lin authored
      
      
      * Dynamic shape handling in shape object
      
      * rewrite empty lens multibroadcast test
      
      * Shape class changes to handle dynamic
      * More throw errors for functions that don't make sense for dynamic shape
      * Print output changes
      * Serialization changes
      
      * Fixing serialization errors
      
      * Remove const on dyn_dim copy getters
      
      * Dynamic shape tests
      
      * Fix serialize errors
      
      * Add dyn_data struct to avoid ambiguous constructor
      
      * Tidy fix: emplace_back() over for loop
      
      * Tidy fix: use move
      
      * Use std::initializer_list in constructor
      Reverts the dyn_data struct change
      Should get around the ambiguous braced initialization list error
      
      * avoid typedef
      
      * element_space, min/max/opt _lens changes
      
      * formatting
      
      * Comments fix
      
      * dynamic bytes() test
      
      * Serialize and reflect changes
      
      * formatting
      
      * Test the dynamic lens functions
      
      * progress
      
      * Formatting
      
      * Dynamic conv draft progress
      
      * Add operator<< tests for coverage
      
      * Coverage update
      
      * Add to conv dynamic batch test
      
      * Dynamic image size test
      
      * Dynamic weight handling
      
      * Dyn image shape test change, fix dyn weight cond
      
      * Comment update
      
      * Dynamic weights shape test and fix
      
      * Use ternary operator
      
      * Tidy fixes
      
      * Handle dynamic graph input shapes in ONNX parser
      
      * Formatting
      
      * Handle dynamic shape for convolution
      
      * formatting
      
      * cppcheck fixes
      
      * Add onnx test files
      
      * Fix typo
      
      * Disable auto_pad for dynamic input shape
      
      * check_shapes object checks for allowing dynamic shapes
      
      * Fix any_of
      
      * Change to maintain const objectness
      
      * Formatting
      
      * Check shapes allow dynamic
      
      * Refactor compute_shape() call into op.compute()
      Allows for per operator differences with handling dynamic shape
      Fix operation.hpp change to use the generator
      
      * Comment fix
      
      * Refactor normalize_attributes() calls to use max_lens()
      
      * Comment addition
      
      * Update other normalize_attributes() calls
      
      * Change to using constructor and add tests
      
      * Use const member function
      
      * Add more dynamic shape support
      
      * Add tests for error code coverage
      
      * Fix opt shape bug and add shape tests
      
      * capture all by ref
      
      * Fix typo with img shape calculation
      
      * Add more tests
      
      * dynamic auto pad attempt
      Linker error with pad_calc.cpp
      
      * Fix parse dyn auto_pad
      Should only need to use dynamic auto pad when the image shape or kernel
      shape are dynamic. For a dynamic batch size, the auto pad calculation is
      the same.
      
      * Fix linking error
      
      * Fix auto_pad bug
      Fixed input tensor with auto_pad setting on
      
      * auto_pad onnx tests
      
      * Fix auto_pad calculation, evaluate in ref_conv
      add ref_ops tests
      
      * Add shape tests, fix bugs
      
      * Refactor first two output dynamic len calculation
      
      * Conv MLIR test update
      
      * i64 MLIR test fix
      
      * Fix MLIR test typo
      Co-authored-by: Chris Austen <causten@users.noreply.github.com>
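      For orientation, a minimal sketch of the dynamic-dimension idea that runs through this change list (a simplified illustration, not the real migraphx::shape class; member and function names are assumptions): each dimension carries min/max/opt bounds, and max_lens() yields the static upper-bound lengths used by normalize_attributes()-style code.

        #include <algorithm>
        #include <cstddef>
        #include <iterator>
        #include <vector>

        // Illustrative dynamic dimension with min/max/opt bounds
        struct dyn_dim
        {
            std::size_t min = 0;
            std::size_t max = 0;
            std::size_t opt = 0; // optimal/expected length, if known
            bool is_fixed() const { return min == max; }
        };

        struct dyn_shape
        {
            std::vector<dyn_dim> dims;

            // Upper-bound static lengths for each dimension
            std::vector<std::size_t> max_lens() const
            {
                std::vector<std::size_t> result;
                result.reserve(dims.size());
                std::transform(dims.begin(), dims.end(), std::back_inserter(result),
                               [](const dyn_dim& d) { return d.max; });
                return result;
            }
        };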
  23. 08 Jul, 2022 2 commits
  24. 06 Jul, 2022 1 commit
    • Verify load and save (#1265) · f2531606
      Paul Fultz II authored
      In the verification tests, check that saving and reloading a program yields the same program. This also fixes serialization to always load instructions in the same order. There are also fixes for deconv and quant_conv, which didn't save the solution id and were broken for serialization.
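      A hedged sketch of this kind of round-trip check using the public C++ API (the exact headers and the availability of program equality here are assumptions, not taken from the commit):

        #include <migraphx/load_save.hpp> // assumed header for migraphx::save / migraphx::load
        #include <migraphx/onnx.hpp>      // assumed header for migraphx::parse_onnx
        #include <migraphx/program.hpp>
        #include <iostream>

        int main()
        {
            // Parse a model, then verify that a save/load round trip reproduces it
            migraphx::program p = migraphx::parse_onnx("model.onnx");

            migraphx::save(p, "model.mxr");
            migraphx::program q = migraphx::load("model.mxr");

            // Relies on deterministic instruction ordering during deserialization,
            // which is what this change enforces
            std::cout << (p == q ? "round trip OK" : "round trip mismatch") << std::endl;
        }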
  25. 29 Jun, 2022 1 commit
  26. 26 Jun, 2022 1 commit
  27. 22 Jun, 2022 1 commit
  28. 11 Mar, 2022 1 commit
    • Improve print ins (#1096) · b3b44f5d
      Shucai Xiao authored
      module::debug_print(ins) is very slow, which makes MIGRAPHX_TRACE_EVAL=1/2 very slow. The reason is that printing an instruction involves searching the whole module to locate that instruction before printing it. This change fixes that by calling module::print() to get the names of all instructions in a program, then printing each instruction by looking up its name in a hash map.
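      A small sketch of the caching idea (illustrative only; the actual change lives in the module/program printing code, and the names below are assumptions): build the instruction-name map once, then look names up in constant time instead of rescanning the module for every instruction.

        #include <iostream>
        #include <string>
        #include <unordered_map>
        #include <vector>

        // Stand-in for an instruction; in migraphx this would be an instruction reference
        struct instruction
        {
            std::string op_name;
        };

        // Build the name map once (analogous to what a full module print computes)
        std::unordered_map<const instruction*, std::string>
        build_name_map(const std::vector<instruction>& instructions)
        {
            std::unordered_map<const instruction*, std::string> names;
            std::size_t i = 0;
            for(const auto& ins : instructions)
                names[&ins] = "@" + std::to_string(i++);
            return names;
        }

        // Fast per-instruction printing: a hash-map lookup instead of a module-wide search
        void debug_print(const instruction& ins,
                         const std::unordered_map<const instruction*, std::string>& names)
        {
            std::cout << names.at(&ins) << " = " << ins.op_name << std::endl;
        }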
  29. 02 Mar, 2022 1 commit
  30. 02 Feb, 2022 1 commit
    • Update trace_eval to preview the output buffers (#1073) · b20e3d4d
      Paul Fultz II authored
      Currently, MIGRAPHX_TRACE_EVAL=2 prints out the entire output buffer, but this can produce a lot of output. To make it easier to inspect and debug, MIGRAPHX_TRACE_EVAL=2 now only prints 10 elements from the buffer (the first 5 and the last 5) and shows any floating-point classifications found in the buffer (i.e. NaNs, infinities, etc.). The previous behavior can still be enabled with MIGRAPHX_TRACE_EVAL=3.
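      A hedged sketch of the preview behavior described above (plain C++, not the actual migraphx implementation): print the first 5 and last 5 elements of a buffer and report any non-finite values found.

        #include <algorithm>
        #include <cmath>
        #include <cstddef>
        #include <iostream>
        #include <vector>

        // Print a short preview of a buffer plus any floating-point classifications found
        void preview_buffer(const std::vector<float>& data)
        {
            const std::size_t n    = data.size();
            const std::size_t edge = 5;

            auto print_range = [&](std::size_t start, std::size_t stop) {
                for(std::size_t i = start; i < stop; i++)
                    std::cout << data[i] << " ";
            };

            if(n <= 2 * edge)
            {
                print_range(0, n);
            }
            else
            {
                print_range(0, edge);
                std::cout << "... ";
                print_range(n - edge, n);
            }
            std::cout << "\n";

            if(std::any_of(data.begin(), data.end(), [](float x) { return std::isnan(x); }))
                std::cout << "buffer contains NaN values\n";
            if(std::any_of(data.begin(), data.end(), [](float x) { return std::isinf(x); }))
                std::cout << "buffer contains infinite values\n";
        }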
  31. 15 Nov, 2021 1 commit
  32. 15 Oct, 2021 1 commit
    • Enabling rocTX markers for migraphx-driver via roctx knob (#946) · 4a71ec8c
      Cagri authored
      
      
      Added features:
      This enables wrapping each migraphx operator with rocTX markers.
      It adds a new trace knob to the migraphx-driver binary.
      
      Limitation:
      
      rocTX standalone does not output a file; it needs to be used with rocprof. Example command line:
      
      /opt/rocm/bin/rocprof -i ./in.txt --hip-trace --roctx-trace --flush-rate 10ms --timestamp on -d cagri_out --obj-tracking on /opt/rocm/bin/migraphx-driver trace ./resnet50-v2-7.onnx --onnx --gpu
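      A minimal sketch of what wrapping each operator with rocTX markers looks like, using the rocTX range push/pop API (the surrounding evaluation loop and types here are illustrative, not the actual migraphx driver code; the header location can vary across ROCm versions):

        #include <roctracer/roctx.h> // rocTX marker API; assumed install path
        #include <string>
        #include <vector>

        struct op
        {
            std::string name;
            void run() const { /* execute the operator */ }
        };

        // Wrap each operator execution in a named rocTX range so rocprof can attribute time to it
        void run_with_markers(const std::vector<op>& ops)
        {
            for(const auto& o : ops)
            {
                roctxRangePushA(o.name.c_str());
                o.run();
                roctxRangePop();
            }
        }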
      Co-authored-by: Shucai Xiao <shucai@gmail.com>
  33. 13 Oct, 2021 1 commit
    • Trace eval segfault (#974) · 337c5ba1
      Shucai Xiao authored
      When running a model on the GPU, migraphx tries to print out content from GPU memory, which causes a segfault. The solution is to copy the GPU memory content back to the CPU before printing.
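      A hedged sketch of the underlying idea in plain HIP terms (illustrative only; migraphx has its own GPU-to-host copy helpers, which this does not reproduce): copy the device buffer to host memory before printing it, rather than dereferencing device pointers from host code.

        #include <hip/hip_runtime.h>
        #include <cstddef>
        #include <iostream>
        #include <vector>

        // Copy a device buffer to the host and print it
        void print_device_buffer(const float* device_ptr, std::size_t n)
        {
            std::vector<float> host(n);
            // Reading device_ptr directly from host code is what caused the segfault
            hipError_t err =
                hipMemcpy(host.data(), device_ptr, n * sizeof(float), hipMemcpyDeviceToHost);
            if(err != hipSuccess)
            {
                std::cerr << "hipMemcpy failed: " << hipGetErrorString(err) << "\n";
                return;
            }
            for(float x : host)
                std::cout << x << " ";
            std::cout << "\n";
        }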
  34. 16 Sep, 2021 1 commit
    • Loop operator (#853) · a275f590
      Shucai Xiao authored
      
      
      Add the Loop operator for opset version 13.
      Notes:
      1) The default max iteration number is 10 if no max iteration number is provided.
      2) To change the max iteration number, a user can set max_loop_iterations in the onnx_options struct when parsing a model.
      3) The returned shape of the scan output is based on max_loop_iterations even if the actual loop count is less than that. This issue also applies to other operators like NonZero and NonMaxSuppression. Issue #948 was created to track this and resolve it later.
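      A short sketch of point 2 using the C++ parse API (the header path and exact onnx_options member follow the description above and should be treated as assumptions):

        #include <migraphx/onnx.hpp> // assumed header for migraphx::parse_onnx and onnx_options
        #include <migraphx/program.hpp>

        int main()
        {
            migraphx::onnx_options options;
            // Raise the cap on loop iterations from the default of 10
            options.max_loop_iterations = 64;

            // Scan outputs will be shaped using max_loop_iterations even if the
            // actual loop runs fewer iterations
            migraphx::program p = migraphx::parse_onnx("model_with_loop.onnx", options);
            return 0;
        }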
      Co-authored-by: Paul <pfultz2@yahoo.com>
      Co-authored-by: mvermeulen <5479696+mvermeulen@users.noreply.github.com>
  35. 10 Sep, 2021 1 commit
  36. 07 Sep, 2021 2 commits