"vscode:/vscode.git/clone" did not exist on "40e7698a3aac4079033937f4b385eba32fc97065"
  1. 14 Sep, 2023 2 commits
    • Paul Fultz II
    • added rand_uniform operation closes #1958 (#2051) · fbd12bd3
      Brian Pickrell authored
      New op that populates a shape with uniformly distributed random numbers. The rand_uniform op can implement the ONNX RandomUniform instruction and can also create the random number sequence needed to implement Multinomial. (At this time, our ONNX Multinomial parsing generates a random sequence of numbers at parse time as a workaround, so that the resulting program uses the same "random" set every time.)

      Arguments: shape, seed. Shape is required and can be static or dynamic. Seed is still optional in this version; if it is not given at inference time, the value in the creation attribute seed is used. Update: removed the boolean use_auto_seed, which caused any given seed to be ignored.
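      A minimal standalone sketch of the seed semantics described above (assumed names and structure; this is not the MIGraphX rand_uniform implementation): fill a buffer of a given element count with uniformly distributed values, preferring a seed supplied at inference time and falling back to the seed stored as a creation attribute.

          // Illustrative only: not MIGraphX source code.
          #include <cstddef>
          #include <cstdint>
          #include <optional>
          #include <random>
          #include <vector>

          std::vector<float> rand_uniform_fill(std::size_t num_elements,
                                               std::optional<std::uint64_t> runtime_seed,
                                               std::uint64_t attribute_seed)
          {
              // Prefer the seed given at inference time; otherwise use the value
              // stored in the operator's creation attribute.
              std::mt19937_64 gen(runtime_seed.value_or(attribute_seed));
              std::uniform_real_distribution<float> dist(0.0f, 1.0f);

              std::vector<float> result(num_elements);
              for(auto& x : result)
                  x = dist(gen);
              return result;
          }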
  2. 13 Sep, 2023 1 commit
  3. 12 Sep, 2023 1 commit
  4. 11 Sep, 2023 1 commit
  5. 10 Sep, 2023 2 commits
    • Fixed test float equal for Windows (#2153) · 37787e0b
      tvukovic-amd authored
      Added a case to skip test_limits<int,long> when running tests on Windows, since int and long have the same min and max values there.
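      An illustrative guard for the situation described above (not the actual test code): on Windows (LLP64), long is 32 bits wide, so int and long share the same numeric limits and the comparison the test relies on becomes meaningless.

          // Illustrative only: skip the int/long limits check when the two
          // types have identical ranges, as they do on Windows (LLP64).
          #include <iostream>
          #include <limits>

          int main()
          {
              if(std::numeric_limits<int>::max() == std::numeric_limits<long>::max() &&
                 std::numeric_limits<int>::min() == std::numeric_limits<long>::min())
              {
                  std::cout << "int and long have the same range; skipping test_limits<int,long>\n";
                  return 0;
              }
              // ... otherwise run the int/long limits test ...
              return 0;
          }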
    • Dynamic allocate (#2079) · ede8bfa6
      Charlie Lin authored
      Makes a version of allocate that takes in dimensions and allocates a buffer.
      A simplify_dynamic_ops compiler pass that uses the use_shape_attr flag will be created in a follow-up.
      The ONNX op ConstantOfShape needs the buffer to be filled with a specific value, so that and a fill operator will come in another PR.
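      A rough sketch of the idea (assumed function name; not the MIGraphX allocate implementation): given runtime dimensions, the buffer size is their product, so the allocation can only happen once the dims are known.

          // Illustrative only: allocate a buffer whose size is the product of
          // runtime-supplied dimensions.
          #include <cstddef>
          #include <functional>
          #include <numeric>
          #include <vector>

          std::vector<float> allocate_from_dims(const std::vector<std::size_t>& dims)
          {
              std::size_t elements = std::accumulate(dims.begin(), dims.end(),
                                                     std::size_t{1}, std::multiplies<>{});
              return std::vector<float>(elements); // zero-initialized buffer
          }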
  6. 08 Sep, 2023 1 commit
  7. 07 Sep, 2023 1 commit
  8. 06 Sep, 2023 2 commits
  9. 29 Aug, 2023 1 commit
    • Fix dyn pooling (#1768) · 7b8a28f5
      Brian Pickrell authored
      Adds support for dynamic input shapes in the pooling operator along with auto-padding. With this combination, the padding (and therefore the output shape) cannot be computed until runtime.
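      A sketch of why auto-padding forces the computation to runtime, assuming a SAME_UPPER-style padding rule (this is not the MIGraphX pooling code): the pad amount depends on the input length, which for a dynamic shape is only known when the model runs.

          // Illustrative only: total padding for one spatial dimension under a
          // SAME-style rule; in_len is unknown until runtime for dynamic shapes.
          #include <algorithm>
          #include <cstdint>

          std::int64_t same_upper_total_pad(std::int64_t in_len, std::int64_t kernel, std::int64_t stride)
          {
              std::int64_t out_len = (in_len + stride - 1) / stride; // ceil(in_len / stride)
              return std::max<std::int64_t>(0, (out_len - 1) * stride + kernel - in_len);
          }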
  10. 22 Aug, 2023 2 commits
  11. 21 Aug, 2023 1 commit
  12. 18 Aug, 2023 2 commits
  13. 13 Aug, 2023 1 commit
  14. 11 Aug, 2023 1 commit
  15. 10 Aug, 2023 1 commit
  16. 08 Aug, 2023 3 commits
  17. 06 Aug, 2023 2 commits
  18. 03 Aug, 2023 1 commit
  19. 31 Jul, 2023 2 commits
    • Lw/fix half shape (#2000) · e4dc75ea
      Lakhinder Walia authored
      * Use the shape of the instruction (instead of a default) in add_return()
      * Instruction validation fix: do not use a default shape value for comparison
      * Fix instruction::replace() to recompute the shape for "@return"
      * Handle the case of a missing shape in an instruction-related test
      * Use compute_shape() to get op shapes; add a test case for tuple_type
      * Add test case shape_test/return_shape_tuple
      * Add a test for @return to check for the half type
      * Move @return unit tests around; address review comments
      * Fix broken comparison: comparison to a (default) shape of tuple_type
      * Test cases: add return_shape_empty and modify return_shape_tuple
      * Modify the assert() statement
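      A toy illustration of the comparison pitfall mentioned above (not MIGraphX code; the names are invented): treating "equal to a default-constructed shape" as "shape not set" breaks as soon as a legitimate shape, such as an empty tuple, compares equal to the default.

          // Illustrative only.
          #include <vector>

          struct toy_shape
          {
              int type = 0;              // 0 stands in for tuple_type here
              std::vector<int> lens;     // empty for an empty tuple
              bool operator==(const toy_shape& other) const
              {
                  return type == other.type && lens == other.lens;
              }
          };

          bool is_missing_buggy(const toy_shape& s)
          {
              // Buggy check: an empty tuple shape is wrongly reported as missing.
              return s == toy_shape{};
          }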
    • Artur Wojcik
  20. 30 Jul, 2023 2 commits
  21. 28 Jul, 2023 2 commits
  22. 27 Jul, 2023 1 commit
  23. 26 Jul, 2023 1 commit
  24. 25 Jul, 2023 1 commit
  25. 23 Jul, 2023 1 commit
  26. 22 Jul, 2023 2 commits
  27. 21 Jul, 2023 2 commits
    • Add back clamping and add tests (#1969) · 6957243c
      Umang Yadav authored
      Fixes #1957

      Clamping was removed in #1853.

      It turns out clamping is necessary to handle overflow/underflow cases: during downcasting, if a value overflowed, then without clamping it was returned as infinity.
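      A minimal sketch of the clamping idea (shown for double -> float; the PR concerns narrowing to fp16, but the principle is the same): clamp to the target type's finite range before converting, so an out-of-range value saturates instead of turning into infinity.

          // Illustrative only.
          #include <algorithm>
          #include <iostream>
          #include <limits>

          float downcast_with_clamp(double x)
          {
              const double hi = std::numeric_limits<float>::max();
              return static_cast<float>(std::min(std::max(x, -hi), hi));
          }

          int main()
          {
              double too_big = 1e40;                             // above the float range
              std::cout << downcast_with_clamp(too_big) << "\n"; // ~3.40282e+38, not inf
              return 0;
          }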
    • Use `optimize_module` pass for the quantization to fp16 (#1974) · 6f1f4b59
      Umang Yadav authored
      Fixes #1746

      BatchNorm has only x as a runtime input parameter in its equation; all the other parameters are compile-time constants, and the related operations can be const-folded before quantizing to fp16 to preserve precision.
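      For reference, the standard BatchNorm formula is y = scale * (x - mean) / sqrt(var + eps) + bias, where only x is a runtime input here. A small sketch of the const-folding this enables (illustrative, not the optimize_module pass itself):

          // Illustrative only: with scale, bias, mean, var and eps constant,
          // BatchNorm folds to a single multiply-add, a*x + b.
          #include <cmath>

          float folded_batch_norm(float x, float scale, float bias, float mean, float var, float eps)
          {
              float a = scale / std::sqrt(var + eps); // foldable constant
              float b = bias - a * mean;              // foldable constant
              return a * x + b;
          }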