"...nni_cmd/tests/config_files/invalid/wrong-class-args.yml" did not exist on "4e0ad45a39073b751595d6e82c2ea53b0229386e"
  1. 27 Sep, 2022 1 commit
  2. 08 Sep, 2022 1 commit
  3. 06 Sep, 2022 1 commit
  4. 23 Aug, 2022 1 commit
    • Dynamic ref NMS (#1288) · fa3c21fa
      Charlie Lin authored
      Has the NMS op output a dynamic shape (ONNX spec behavior).
      Allows a dynamic input shape to the NMS op.
      fa3c21fa
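      A rough illustration of what a "dynamic" output shape means here: each dimension carries a range rather than a fixed length. The struct and the bound below are hypothetical placeholders, not the MIGraphX shape API.

        #include <cstddef>
        #include <iostream>
        #include <vector>

        // Hypothetical stand-in for a dynamic dimension: a {min, max} range.
        struct dyn_dim
        {
            std::size_t min;
            std::size_t max;
        };

        int main()
        {
            // ONNX NonMaxSuppression returns [num_selected_indices, 3]; the first
            // dimension is only known at run time (1000 is just a placeholder bound).
            std::vector<dyn_dim> nms_output{{0, 1000}, {3, 3}};
            for(const auto& d : nms_output)
                std::cout << "[" << d.min << ", " << d.max << "] ";
            std::cout << "\n"; // [0, 1000] [3, 3]
        }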
  5. 04 Aug, 2022 1 commit
    • Dynamic ref convolution op (#1224) · 67f77ac1
      Charlie Lin authored
      
      
      * Dynamic shape handling in shape object
      
      * rewrite empty lens multibroadcast test
      
      * Shape class changes to handle dynamic
      * More throw errors for functions that don't make sense for dynamic shape
      * Print output changes
      * Serialization changes
      
      * Fixing serialization errors
      
      * Remove const on dyn_dim copy getters
      
      * Dynamic shape tests
      
      * Fix serialize errors
      
      * Add dyn_data struct to avoid ambiguous constructor
      
      * Tidy fix: emplace_back() over for loop
      
      * Tidy fix: use move
      
      * Use std::initializer_list in constructor
      Reverts the dyn_data struct change
      Should get around the ambiguous braced initialization list error
      
      * avoid typedef
      
      * element_space, min,max,opt _lens change
      
      * formatting
      
      * Comments fix
      
      * dynamic bytes() test
      
      * Serialize and reflect changes
      
      * formatting
      
      * Test the dynamic lens functions
      
      * progress
      
      * Formatting
      
      * Dynamic conv draft progress
      
      * Add operator<< tests for coverage
      
      * Coverage update
      
      * Add to conv dynamic batch test
      
      * Dynamic image size test
      
      * Dynamic weight handling
      
      * Dyn image shape test change, fix dyn weight cond
      
      * Comment update
      
      * Dynamic weights shape test and fix
      
      * Use ternary operator
      
      * Tidy fixes
      
      * Handle dynamic graph input shapes in ONNX parser
      
      * Formatting
      
      * Handle dynamic shape for convolution
      
      * formatting
      
      * cppcheck fixes
      
      * Add onnx test files
      
      * Fix typo
      
      * Disable auto_pad for dynamic input shape
      
      * check_shapes object checks for allowing dynamic shapes
      
      * Fix any_of
      
      * Change to maintain const objectness
      
      * Formatting
      
      * Check shapes allow dynamic
      
      * Refactor compute_shape() call into op.compute()
      Allows for per operator differences with handling dynamic shape
      Fix operation.hpp change to use the generator
      
      * Comment fix
      
      * Refactor normalize_attributes() calls to use max_lens()
      
      * Comment addition
      
      * Update other normalize_attributes() calls
      
      * Change to using constructor and add tests
      
      * Use const member function
      
      * Add more dynamic shape support
      
      * Add tests for error code coverage
      
      * Fix opt shape bug and add shape tests
      
      * capture all by ref
      
      * Fix typo with img shape calculation
      
      * Add more tests
      
      * dynamic auto pad attempt
      Linker error with pad_calc.cpp
      
      * Fix parse dyn auto_pad
      Dynamic auto pad should only be needed when the image shape or kernel
      shape is dynamic. For a dynamic batch size, the auto pad calculation is
      the same (see the padding sketch after this commit).
      
      * Fix linking error
      
      * Fix auto_pad bug
      Fixed input tensor with auto_pad setting on
      
      * auto_pad onnx tests
      
      * Fix auto_pad calculation, evaluate in ref_conv
      add ref_ops tests
      
      * Add shape tests, fix bugs
      
      * Refactor first two output dynamic len calculation
      
      * Conv MLIR test update
      
      * i64 MLIR test fix
      
      * Fix MLIR test typo
      Co-authored-by: Chris Austen <causten@users.noreply.github.com>
      67f77ac1
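      Regarding the "Fix parse dyn auto_pad" step above, here is a minimal standalone sketch of SAME_UPPER-style padding arithmetic for one spatial dimension. The names are illustrative and this is not the MIGraphX pad_calc API; it only shows why a dynamic batch dimension alone does not change the pads.

        #include <algorithm>
        #include <cstdint>
        #include <iostream>
        #include <vector>

        // SAME_UPPER-style padding for one spatial dimension (ONNX-style formula):
        // output = ceil(input / stride),
        // pad_total = max((output - 1) * stride + effective_kernel - input, 0),
        // and any extra padding goes at the end.
        std::vector<std::int64_t> same_upper_pads(std::int64_t input_len, std::int64_t kernel_len,
                                                  std::int64_t stride, std::int64_t dilation)
        {
            const std::int64_t effective_kernel = (kernel_len - 1) * dilation + 1;
            const std::int64_t output_len       = (input_len + stride - 1) / stride;
            const std::int64_t pad_total        = std::max<std::int64_t>(
                (output_len - 1) * stride + effective_kernel - input_len, 0);
            return {pad_total / 2, pad_total - pad_total / 2};
        }

        int main()
        {
            // The pads depend only on the spatial extents, so a dynamic batch
            // dimension does not change this calculation.
            auto pads = same_upper_pads(5, 3, 2, 1);
            std::cout << pads[0] << ", " << pads[1] << "\n"; // 1, 1
        }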
  6. 25 Jul, 2022 1 commit
    • Add onnx mod operator (#1302) · 77e80b8e
      Ted Themistokleous authored
      * Add in changes for onnx Mod operator
      
      Initial implementation of the Mod operator, with test cases for integer and floating-point types.
      
      Need to use fmod from the standard library for floating-point types. Looking at the half.hpp implementation, half_float::half is thankfully specced to use the existing std::fmod() call.
      
      fmod_flag should mirror the ONNX fmod attribute. Right now, using a floating-point type without setting it to true on the user side will result in an exception.
      
      Ref ticket #1283 
      77e80b8e
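      A small sketch of the two Mod behaviors described above, using only standard C++ (this is not the operator code itself): fmod=0 gives an integer remainder whose sign follows the divisor, while fmod=1 uses std::fmod semantics and is the mode required for floating-point types.

        #include <cmath>
        #include <iostream>

        // Integer remainder whose sign follows the divisor (ONNX Mod with fmod=0).
        long long mod_integer(long long a, long long b)
        {
            long long r = a % b;
            return (r != 0 && ((r < 0) != (b < 0))) ? r + b : r;
        }

        int main()
        {
            std::cout << mod_integer(-7, 3) << "\n";   // 2  (fmod=0 behavior)
            std::cout << std::fmod(-7.0, 3.0) << "\n"; // -1 (fmod=1 behavior, used for floats)
        }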
  7. 19 Jul, 2022 1 commit
    • Fix op includes (#1308) · 39b307b2
      Charlie Lin authored
      Changes to operator includes:
      
      removed some includes that were not used
      included argument.hpp where clang-tidy wanted it
      39b307b2
  8. 12 Jul, 2022 1 commit
  9. 07 Jul, 2022 1 commit
    • Add a step to unsqeeze axis (#1242) · bd503d89
      Paul Fultz II authored
      Instead of always unsqueezing an axis to size 1, a step can be set. So instead of unsqueezing {3, 12} to {3, 1, 12}, a step of 2 will unsqueeze to {3, 2, 6}.
      bd503d89
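      A quick standalone sketch of the shape arithmetic described above (not the operator implementation): the step becomes a new dimension at the axis and the dimension that shifts past it is divided by the step.

        #include <cassert>
        #include <cstddef>
        #include <iostream>
        #include <vector>

        // {3, 12}, axis=1, step=2 -> {3, 2, 6}; step=1 gives the usual {3, 1, 12}.
        std::vector<std::size_t>
        unsqueeze_with_step(std::vector<std::size_t> dims, std::size_t axis, std::size_t step)
        {
            assert(axis < dims.size() && dims[axis] % step == 0);
            dims[axis] /= step;
            dims.insert(dims.begin() + axis, step);
            return dims;
        }

        int main()
        {
            for(auto d : unsqueeze_with_step({3, 12}, 1, 2))
                std::cout << d << " "; // 3 2 6
            std::cout << "\n";
        }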
  10. 29 Jun, 2022 1 commit
  11. 22 Jun, 2022 1 commit
  12. 17 Jun, 2022 1 commit
    • Create allocate op and replace_allocate pass (#1183) · add6fb3b
      kahmed10 authored
      
      
      * add allocate op header
      
      * formatting
      
      * add replace_allocate pass
      
      * formatting
      
      * move output param to remove_allocate pass
      
      * formatting
      
      * fix bugs in replace_allocate pass
      
      * formatting
      
      * fix verify if tests
      
      * formatting
      
      * move if op logic
      
      * formatting
      
      * cleanup lowering
      
      * cleanup lowering
      
      * formatting
      
      * fix tidy
      
      * formatting
      
      * fix tidy
      
      * add cpu allocate check
      
      * formatting
      
      * change cpu allocate in pass
      
      * formatting
      
      * add some tests for replace_allocate pass
      
      * formatting
      
      * pass by ref
      
      * fix run_pass
      
      * formatting
      
      * update variable name for module
      
      * update dce to use contains() and fix tidy
      
      * formatting
      
      * update cppcheck
      
      * add if test
      
      * formatting
      
      * add if test
      
      * rename var to mod_output_names
      
      * formatting
      
      * remove conditional
      
      * update allocate op and tests
      
      * formatting
      
      * update replace_allocate tests
      
      * update create_output_names() and conditional in replace_allocate
      
      * formatting
      
      * remove extra variable in replace_allocate
      
      * update tools script for allocation_model
      Co-authored-by: Umang Yadav <29876643+umangyadav@users.noreply.github.com>
      Co-authored-by: Chris Austen <causten@users.noreply.github.com>
      Co-authored-by: Paul Fultz II <pfultz2@yahoo.com>
      add6fb3b
  13. 29 Apr, 2022 1 commit
  14. 19 Apr, 2022 1 commit
    • Refactor Pooling and implement ONNX LpPool and GlobalLpPool (#1152) · 764273e4
      Charlie Lin authored
      Refactored the reference implementation of pooling to something like what was done for roialign. Moved the reference implementation of pooling from targets/ref/lowering.cpp to pooling.hpp.
      Removed cpu_pooling, instead using reference pooling in pooling.hpp
      Added reference implementation of Lp Norm pooling and the global version
      Added tests for the Lp Norm Pooling
      764273e4
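      For reference, the Lp-norm reduction applied per pooling window is (sum_i |x_i|^p)^(1/p). A small standalone sketch (illustrative only, not the pooling.hpp code):

        #include <cmath>
        #include <iostream>
        #include <vector>

        // Lp-norm reduction over one pooling window: (sum_i |x_i|^p)^(1/p).
        double lpnorm_pool(const std::vector<double>& window, double p)
        {
            double acc = 0.0;
            for(double x : window)
                acc += std::pow(std::fabs(x), p);
            return std::pow(acc, 1.0 / p);
        }

        int main()
        {
            // p = 2 over a 2x2 window {1, 2, 2, 4}: sqrt(1 + 4 + 4 + 16) = 5
            std::cout << lpnorm_pool({1, 2, 2, 4}, 2.0) << "\n"; // 5
        }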
  15. 11 Apr, 2022 1 commit
    • scatter operator refactoring to include reduction (#1124) · 701c2014
      bpickrel authored
      Change the "scatter" struct and op to a base/child set of three: scatter_none, scatter_add, scatter_mul to mirror Onnx' ScatterElements op. and its three reduction options. (Onnx Scatter op is deprecated and is equivalent to scatter_none.)
      
      Provides both a reference op and an update to ONNX parsing. Tests updated and a new test case added.
      701c2014
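      A 1-D sketch of the three reduction behaviors (illustrative only; the real operators work on arbitrary-rank tensors along a chosen axis):

        #include <cstddef>
        #include <functional>
        #include <iostream>
        #include <vector>

        // Scatter updates into data at the given indices, combining old and new
        // values with the supplied reduction.
        std::vector<float> scatter_1d(std::vector<float> data,
                                      const std::vector<int>& indices,
                                      const std::vector<float>& updates,
                                      const std::function<float(float, float)>& reduce)
        {
            for(std::size_t i = 0; i < indices.size(); ++i)
                data[indices[i]] = reduce(data[indices[i]], updates[i]);
            return data;
        }

        int main()
        {
            std::vector<float> data{1, 1, 1, 1};
            auto none = scatter_1d(data, {1, 3}, {5.0f, 7.0f}, [](float, float u) { return u; });      // scatter_none
            auto add  = scatter_1d(data, {1, 3}, {5.0f, 7.0f}, [](float d, float u) { return d + u; }); // scatter_add
            auto mul  = scatter_1d(data, {1, 3}, {5.0f, 7.0f}, [](float d, float u) { return d * u; }); // scatter_mul
            std::cout << none[1] << " " << add[1] << " " << mul[1] << "\n"; // 5 6 5
        }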
  16. 29 Mar, 2022 1 commit
    • Refactor runtime compiled kernels to use the same compile_ops pipeline (#1125) · 661046c6
      Paul Fultz II authored
      This adds the infrastructure so we can compile everything in parallel, whereas before only pointwise kernels were compiled in parallel. This will also directly integrate with lowering and the gpu-driver. The kernels for pointwise and roialign are using this infrastructure. Scatternd is not, since it requires a standard shape.
      
      This also makes it easier to add new runtime compiled kernels in the future.
      661046c6
  17. 22 Mar, 2022 1 commit
  18. 18 Mar, 2022 1 commit
  19. 04 Mar, 2022 1 commit
    • Mode as enum for pooling and roi_align (#1091) · a2e90b5d
      bpickrel authored
      Changed the mode values for the two structures (pooling and roi_align) from strings to specialized enum classes. Many test and operator parsing changes to support this. Introduces one new source file, op_enums.cpp.
      a2e90b5d
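      A sketch of the string-to-enum change; the enumerators and parser below are illustrative, not necessarily what op_enums.cpp defines:

        #include <iostream>
        #include <stdexcept>
        #include <string>

        // A scoped enum replaces free-form strings, so invalid modes fail at parse time.
        enum class pooling_mode { average, max, lpnorm };

        pooling_mode parse_pooling_mode(const std::string& s)
        {
            if(s == "average") return pooling_mode::average;
            if(s == "max")     return pooling_mode::max;
            if(s == "lpnorm")  return pooling_mode::lpnorm;
            throw std::runtime_error("unknown pooling mode: " + s);
        }

        int main()
        {
            std::cout << (parse_pooling_mode("max") == pooling_mode::max) << "\n"; // 1
        }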
  20. 03 Mar, 2022 1 commit
  21. 02 Mar, 2022 2 commits
  22. 23 Feb, 2022 1 commit
    • Keep std shape (#1059) · 98dfdf15
      Shucai Xiao authored
      This PR resolves two problems in issue #999, i.e., non-standard shape input to reshape and reduce_mean.
      Three fixes:
      
      * Any operator that has a standard shape requirement gets a contiguous added for its inputs.
      * In eliminate_contiguous, when computing whether a contiguous can be removed, use all the updated args, not just the one being checked.
      * In two optimizations in simplify_reshape, remove contiguous from the reshaper name list, since eliminate_contiguous will remove a contiguous if it can be removed.
      
      The solution is to add an attribute to operators that require a standard input shape, then, in the auto_contiguous pass, add a contiguous to every input of such operators (see the sketch after this commit).
      98dfdf15
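      A deliberately simplified sketch of the attribute-driven pass described in the last point; the types here are toy stand-ins, not the MIGraphX IR. Every input of an op flagged as requiring a standard shape gets a contiguous inserted in front of it.

        #include <iostream>
        #include <string>
        #include <vector>

        // Toy "op" node: a name, a standard-shape requirement flag, and input names.
        struct op
        {
            std::string name;
            bool requires_standard_shape;
            std::vector<std::string> inputs;
        };

        // Toy auto_contiguous pass: wrap each input of flagged ops in a contiguous.
        void auto_contiguous(std::vector<op>& program)
        {
            for(auto& o : program)
                if(o.requires_standard_shape)
                    for(auto& in : o.inputs)
                        in = "contiguous(" + in + ")";
        }

        int main()
        {
            std::vector<op> program = {{"transpose", false, {"x"}},
                                       {"reshape", true, {"transpose"}}};
            auto_contiguous(program);
            std::cout << program[1].inputs[0] << "\n"; // contiguous(transpose)
        }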
  23. 16 Feb, 2022 1 commit
  24. 09 Feb, 2022 1 commit
  25. 27 Jan, 2022 1 commit
  26. 17 Jan, 2022 1 commit
  27. 08 Dec, 2021 1 commit
  28. 25 Nov, 2021 1 commit
    • Non std shape auto contiguous (#1001) · 2d4dcc47
      Shucai Xiao authored
      Resolves a problem in parsing the ssd-10 model.
      
      The problem is, after inserting contiguous in the auto_contiguous pass, the standard output shape of some operators becomes non-standard. Then, if the next operator requires a standard input shape, an exception is thrown.
      
      For example, if we pass the following model:
      Input (standard shape) -> transpose (transposed) -> softmax (transposed) -> transpose (standard) -> gather.
      It works fine, and no contiguous is required.
      
      In the auto_contiguous pass, a contiguous is inserted after the first transpose. Then we need to replace the first transpose with the contiguous and recompute all shapes. When it comes to the gather operator, its input is a transposed shape, and an exception is thrown.
      
      The solution is in the recompute_shape() function. If it is called by the auto_contiguous pass, the shape of an instruction has changed, and that shape is non-standard, we do not recompute the shape of its outputs. The reason is: since its output shape is non-standard, a contiguous op will be added after the instruction, which will recompute shapes for later operators.
      2d4dcc47
  29. 11 Nov, 2021 1 commit
    • Conditionally enable pointwise fusion (#992) · 157935ff
      Paul Fultz II authored
      This enables the pointwise fusions using the MIGRAPHX_ENABLE_POINTWISE_FUSION env variable. It's disabled by default since MIOpen fusions need to be refactored.
      
      This also adds a compile_ops pass to compile the pointwise modules. All tests except test_gpu_fast_math pass with MIGRAPHX_ENABLE_POINTWISE_FUSION=1 set.
      157935ff
  30. 05 Nov, 2021 1 commit
  31. 28 Oct, 2021 1 commit
  32. 20 Oct, 2021 1 commit
    • Roialign (#952) · d7653732
      Shucai Xiao authored
      Implementation of the roialign operator. For now, we have only the ref implementation. When we run a model on the GPU, execution falls back to the ref implementation.
      d7653732
  33. 19 Oct, 2021 1 commit
  34. 18 Oct, 2021 1 commit
  35. 08 Oct, 2021 2 commits
  36. 01 Oct, 2021 1 commit
    • Add multinomial op (#954) · 0b7672d7
      turneram authored
      
      
      Add multinomial op to onnx parser with ref and GPU implementations.
      
      The onnx parser inserts a literal of shape {batch_size, sample_size} with random values in the range [0, 1) and inserts existing ops to compute the cumulative distribution function (CDF). The multinomial operator multiplies the random values by the sum of the CDF and returns the index of the first element of the CDF that is greater than the result, representing samples randomly drawn from [0, class_size) that follow the log-probability distribution.
      
      Resolves #821
      Co-authored-by: Shucai Xiao <shucai@gmail.com>
      0b7672d7
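      A standalone sketch of the sampling rule described above, assuming the CDF has already been computed (illustrative only, not the parser or kernel code): scale a uniform value in [0, 1) by the CDF total and take the first CDF entry that exceeds it.

        #include <algorithm>
        #include <iostream>
        #include <random>
        #include <vector>

        // Return the index of the first CDF entry greater than r01 * cdf.back().
        int sample_from_cdf(const std::vector<double>& cdf, double r01)
        {
            const double target = r01 * cdf.back();
            auto it = std::find_if(cdf.begin(), cdf.end(), [&](double c) { return c > target; });
            return static_cast<int>(it - cdf.begin());
        }

        int main()
        {
            // Unnormalized class weights {1, 2, 1} -> CDF {1, 3, 4}
            std::vector<double> cdf{1, 3, 4};
            std::mt19937 gen{0};
            std::uniform_real_distribution<double> dist(0.0, 1.0);
            std::vector<int> counts(3, 0);
            for(int i = 0; i < 10000; ++i)
                ++counts[sample_from_cdf(cdf, dist(gen))];
            std::cout << counts[0] << " " << counts[1] << " " << counts[2] << "\n"; // roughly 2500 5000 2500
        }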
  37. 17 Sep, 2021 2 commits
    • Paul Fultz II · 985f58b0
    • Remove alpha and beta attributes from dot operator (#945) · 9e43cb8b
      Umang Yadav authored
      This PR removes the alpha and beta attributes from the dot operator completely.
      
      Previously the dot operator was defined as C = alpha * A . B + beta * C, where * is scalar multiplication and . is the dot product or matrix multiplication, depending on the dimensions of the inputs.
      
      The aim is to define the dot operator as C = A . B, without alpha or beta.
      
      To achieve the same effect as alpha and beta: (1) one of the inputs to the dot operator is multiplied by the alpha value; (2) if beta is present, C is multiplied by beta and then added to the output of step (1).
      9e43cb8b
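      A tiny numeric check of the rewrite described above, using scalars so the dot product is a plain multiplication (illustrative only): scaling one dot input by alpha and adding beta * C afterwards gives the same result as the old fused definition.

        #include <iostream>

        int main()
        {
            const double alpha = 0.5, beta = 2.0;
            const double a = 3.0, b = 4.0, c = 10.0;

            const double old_dot = alpha * (a * b) + beta * c;     // old operator with alpha/beta
            const double new_dot = ((alpha * a) * b) + (beta * c); // plain dot + mul + add

            std::cout << old_dot << " == " << new_dot << "\n";     // 26 == 26
        }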