1. 08 Jun, 2022 1 commit
    • [FBcode->GH] [quant][core][better-engineering] Rename files in quantized directory… (#6133) · a7e4fbdc
      Nicolas Hug authored
      * [quant][core][better-engineering] Rename files in quantized directory to conform with non-quantized counterpart filenames (#77037)
      
      Summary:
      X-link: https://github.com/pytorch/pytorch/pull/77037
      
      
      
      Names of analogous files in the quantized directory (previously snake case) were inconsistent with
      their non-quantized counterparts (pascal case). This is the first in a series of PRs that renames
      all files in the quantized directory (and its sub-directories) to pascal case.
      
      `aten/src/ATen/native/quantized/qconv_unpack.cpp` has not been renamed yet
      because (for reasons currently unknown) after making the name change, `import torch` produces the error below (renaming `qlinear_unpack.cpp` also seems to fail some Phabricator CI tests for similar reasons). We suspect that these may be undefined errors and will revisit renaming these files in a future PR.
      
      ```
      terminate called after throwing an instance of 'c10::Error'
        what():  Type c10::intrusive_ptr<ConvPackedParamsBase<2> > could not be converted to any of the known types.
      Exception raised from operator() at ../aten/src/ATen/core/jit_type.h:1735 (most recent call first):
      frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x55 (0x7f26745c0c65 in /data/users/dzdang/pytorch/torch/lib/libc10.so)
      frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xb1 (0x7f26745bdcd1 in /data/users/dzdang/pytorch/torch/lib/libc10.so)
      frame #2: <unknown function> + 0x1494e24 (0x7f2663b14e24 in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
      frame #3: <unknown function> + 0xfed0bc (0x7f266366d0bc in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
      frame #4: c10::detail::infer_schema::make_function_schema(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>, c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>) + 0x5a (0x7f266366d71a in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
      frame #5: c10::detail::infer_schema::make_function_schema(c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>, c10::ArrayRef<c10::detail::infer_schema::ArgumentDef>) + 0x7b (0x7f266366e06b in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
      frame #6: <unknown function> + 0x1493f32 (0x7f2663b13f32 in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
      frame #7: <unknown function> + 0xe227dd (0x7f26634a27dd in /data/users/dzdang/pytorch/torch/lib/libtorch_cpu.so)
      frame #8: <unknown function> + 0x14e0a (0x7f268c934e0a in /lib64/ld-linux-x86-64.so.2)
      ..........................truncated.............
      ```
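      
      For context, the `ConvPackedParamsBase<2>` type named in the error is the TorchScript custom class behind the packed weights used by the quantized conv ops, and `qconv_unpack.cpp` implements the op that unpacks it. A minimal Python sketch of that round trip (not part of this PR), assuming a build with the FBGEMM or QNNPACK quantized engine; the shapes and qparams are arbitrary illustrations:
      
      ```python
      import torch
      
      # Prepack a per-tensor-quantized conv weight; the result is a
      # torch.ScriptObject backed by ConvPackedParamsBase<2>, the type the
      # schema-inference error above complains about.
      w = torch.quantize_per_tensor(
          torch.rand(8, 3, 3, 3), scale=0.1, zero_point=0, dtype=torch.qint8
      )
      packed = torch.ops.quantized.conv2d_prepack(
          w, None, [1, 1], [0, 0], [1, 1], 1  # bias, stride, padding, dilation, groups
      )
      
      # qconv_unpack.cpp implements the reverse direction: recovering the
      # original weight (and optional bias) from the packed representation.
      unpacked_w, bias = torch.ops.quantized.conv2d_unpack(packed)
      ```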
      
      Reviewed By: malfet
      
      Differential Revision: D36862332
      
      Pulled By: dzdang
      
      fbshipit-source-id: 598c36656b4e71f906d940e7ff19ecf82d43031d
      
      * empty commit
      
      * empty commit
      
      * empty commit
      Co-authored-by: dzdang <dzdang@umich.edu>
      Co-authored-by: Vasilis Vryniotis <datumbox@users.noreply.github.com>
  2. 22 Nov, 2021 1 commit
  3. 26 Aug, 2021 2 commits
  4. 24 May, 2021 1 commit
  5. 13 Apr, 2021 1 commit
  6. 08 Apr, 2021 1 commit
    • Add Quantized version of RoIAlign (#3624) · ad9cc62a
      Nicolas Hug authored
      * WIP
      
      * clang
      
      * docs
      
      * extracted out common utils
      
      * Use better quantization function and pass tensors as parameters
      
      * proper dequantization
      
      * Some tests
      
      * Dequantization optimization, seems to gain a few ms
      
      * clang-format
      
      * again
      
      * more correct test. Had to remove optimization although it almost works
      
      * Also test aligned=True
      
      * remove useless part
      
      * more docs and comments
      
      * Put back optimization with more robust test
      
      * Added check for index upper bound
      
      * avoid possible overflow
      
      * Move common function into common.h
      
      * oops
      
      * scale=1, zero_point=0 makes more sense
      
      * Force batch size of 1 to prevent any indexing bug
      
      * format
      
      * format again
      
      * updated docstring
      
      * put back description comment for pre_calc_bilinear_interpolate
      
      * revert most changes to docstring as it's taken care of in another PR
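      
      A minimal usage sketch of the quantized path added here, loosely following how the tests exercise it; the qparams are arbitrary, and quantizing the `rois` tensor alongside the input is an assumption made so the whole call stays on the quantized CPU path:
      
      ```python
      import torch
      from torchvision.ops import roi_align
      
      # Per-tensor-quantized NCHW feature map with batch size 1.
      x = torch.rand(1, 3, 32, 32)
      qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
      
      # Boxes in (batch_index, x1, y1, x2, y2) format, quantized with the same qparams.
      rois = torch.tensor([[0.0, 2.0, 2.0, 20.0, 20.0]])
      qrois = torch.quantize_per_tensor(rois, scale=0.1, zero_point=0, dtype=torch.quint8)
      
      qout = roi_align(qx, qrois, output_size=(5, 5), spatial_scale=1.0,
                       sampling_ratio=2, aligned=True)
      out = qout.dequantize()  # compare against roi_align(x, rois, ...) in fp32
      ```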
  7. 30 Mar, 2021 1 commit
    • Add quantized version of nms (#3601) · f74bfab6
      Nicolas Hug authored
      * Add quantized version of nms
      
      * Added tests
      
      * Compute areas only once
      
      * remove calls to dequantize_val
      
      * fix return type for empty tensor
      
      * flake8
      
      * remove use of scale as it gets cancelled out
      
      * simpler int conversion in tests
      
      * explicitly set ovr to double
      
      * add tests for more values of scale and zero_point
      
      * comment about underflow
      
      * remove unnecessary accessor
      
      * properly convert to float for division
      
      * Add comments about underflow
      
      * explicitly cast coordinates to float to allow vectorization
      
      * clang
      
      * clang again
      
      * hopefully OK now
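      
      A short sketch of the quantized call, plus the scale-cancellation argument behind "remove use of scale as it gets cancelled out"; quantizing the scores alongside the boxes and the specific qparams are assumptions for illustration:
      
      ```python
      import torch
      from torchvision.ops import nms
      
      boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                            [1.0, 1.0, 11.0, 11.0],
                            [20.0, 20.0, 25.0, 25.0]])
      scores = torch.tensor([0.9, 0.8, 0.7])
      
      # Arbitrary qparams; both tensors quantized so the call dispatches to the
      # quantized CPU kernel.
      qboxes = torch.quantize_per_tensor(boxes, scale=0.1, zero_point=0, dtype=torch.quint8)
      qscores = torch.quantize_per_tensor(scores, scale=0.1, zero_point=0, dtype=torch.quint8)
      keep = nms(qboxes, qscores, iou_threshold=0.5)
      
      def iou(a, b):
          # Plain IoU of two [x1, y1, x2, y2] boxes.
          ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
          ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
          inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
          area_a = (a[2] - a[0]) * (a[3] - a[1])
          area_b = (b[2] - b[0]) * (b[3] - b[1])
          return inter / (area_a + area_b - inter)
      
      # Why the scale can be dropped inside the kernel: with x = scale * (q - zp),
      # intersection and union areas are both multiplied by scale**2, so the IoU
      # computed on the integer representation equals the IoU on the dequantized
      # coordinates.
      q = qboxes.int_repr().float()  # zero_point is 0 here, so (q - zp) == q
      assert abs(iou(boxes[0], boxes[1]) - iou(q[0], q[1])) < 1e-6
      ```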