"docs/en/quantization.md" did not exist on "63bd5916136d7233849ccbb811ed274b827f2a91"
  1. 07 Feb, 2022 1 commit
  2. 02 Mar, 2021 1 commit
    •
      [Fix] fix a bug that may cause compilation failure of dynamic voxelization when using GPUs with compute capability lower than 6.x (#326) · 69047ea2
      zhanggefan authored
      
      * fix a bug that may cause compilation failure of dynamic voxelization when using GPUs with compute capability lower than 6.x;
      fix imperfect kernel code that could unintentionally discard valid points when the input point count exceeds 50000 * 512 (nearly impossible in practice).
      
      * Modified scatter_points_cuda.cu to ensure backward compatibility with PyTorch 1.5 on CUDA 9.0
      
      * fix the DynamicScatter gradient-check failure by explicitly marking non-floating-point tensors as non-differentiable.
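The gradcheck fix above relies on PyTorch's `ctx.mark_non_differentiable`. A minimal sketch of the pattern, using a toy per-column max reduction in place of the real CUDA scatter op (the class and variable names here are illustrative, not the project's actual code):

```python
import torch


class ScatterMaxSketch(torch.autograd.Function):
    """Toy reduction whose forward also returns an integer index tensor.

    Marking that tensor non-differentiable tells autograd (and gradcheck)
    not to try to differentiate through it.
    """

    @staticmethod
    def forward(ctx, feats):
        # toy "reduction": per-column max plus the winning row indices
        out, argmax = feats.max(dim=0)
        ctx.save_for_backward(argmax)
        ctx.num_rows = feats.shape[0]
        # integer tensors carry no gradient; say so explicitly
        ctx.mark_non_differentiable(argmax)
        return out, argmax

    @staticmethod
    def backward(ctx, grad_out, grad_argmax):
        # backward still receives a slot for the non-differentiable
        # output (grad_argmax), but it can simply be ignored
        (argmax,) = ctx.saved_tensors
        grad_in = grad_out.new_zeros(ctx.num_rows, grad_out.numel())
        # route the output gradient back to the winning rows only
        grad_in[argmax, torch.arange(grad_out.numel())] = grad_out
        return grad_in


x = torch.tensor([[1.0, 2.0],
                  [4.0, 3.0]], requires_grad=True)
out, idx = ScatterMaxSketch.apply(x)
out.sum().backward()
```

Without the `mark_non_differentiable` call, `torch.autograd.gradcheck` would attempt finite differences on the integer index output and fail.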
  3. 25 Feb, 2021 1 commit
    •
      A faster & more memory-efficient implementation of DynamicScatter (#318) · 93597a53
      zhanggefan authored
      
      
      * a faster & more memory-efficient implementation of DynamicScatter
      
      * fix format issues and add pytest skip logic for tests on machines without CUDA support
      
      * some trivial changes:
      
      decrease the number of kernel threads per block to 512, to enable inference on GPUs with compute capability lower than 2.0
      
      change the backpropagation behavior of max-reduction: when multiple points share the same maximum feature value, only the first of them (the one with the lowest row index) propagates the output gradient back. Before this change, all points with the same maximum feature value propagated the output gradient back. This makes max-reduction consistent with torch.max. The change may cause a gradcheck failure in test_dynamic_scatter.py; please do not worry about it, because torch.max fails the gradcheck too.
      
      * fix typo
      Co-authored-by: zhanggefan <1152009@tongji.edu.cn>
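The tie-breaking rule this commit adopts can be seen directly in PyTorch itself. A small illustration (not the project's CUDA kernel): `torch.max` over a dimension sends the gradient only to the first element holding the maximum, so tied duplicates receive zero gradient.

```python
import torch

x = torch.tensor([[1.0, 5.0],
                  [3.0, 5.0],   # ties with row 0 in column 1
                  [3.0, 2.0]],  # ties with row 1 in column 0
                 requires_grad=True)

# indices pick the first maximal row per column (on CPU)
values, indices = x.max(dim=0)
values.sum().backward()

# only x[1, 0] and x[0, 1] receive gradient; the tied
# duplicates x[2, 0] and x[1, 1] get zero
```

This is also why the commit notes that the gradcheck failure is expected: finite differences perturb each tied element, but the analytic gradient credits only the first one.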
  4. 14 Apr, 2020 1 commit