- 07 Feb, 2022 1 commit
ChaimZhu authored
* add pre-commit to check readme
* update
* fix typos
- 13 Oct, 2021 1 commit
Tai-Wang authored
- 17 Sep, 2021 1 commit
zhanggefan authored
- 07 Sep, 2021 1 commit
zhanggefan authored
- 05 Aug, 2021 1 commit
WRH authored
* use type long long and dynamic memory allocation
* use int64_t instead of long long
- 14 Apr, 2021 1 commit
gillbam authored
- 07 Apr, 2021 1 commit
zhanggefan authored
* fix 'invalid configuration argument' error triggered by empty point input; test cases covering similar situations are added to test_dynamic_scatter.py as well
* trivial change: switch to using torch::unique_dim to generate the reduce mapping instead of calculating it from scratch
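The reduce-mapping idea behind that commit can be sketched in plain Python (a torch-free illustration, not the actual implementation — the real op uses torch::unique_dim on the voxel-coordinate tensor): take the unique voxel coordinates plus an inverse index that maps each point to its voxel, then reduce per-point features per voxel. Note that an empty point input is handled without error, which is the case the commit fixes.

```python
# Torch-free sketch of generating a reduce mapping from unique voxel
# coordinates, instead of computing the point-to-voxel mapping from scratch.

def build_reduce_map(coords):
    """Return (unique_coords, inverse) where inverse[i] is the index of
    coords[i] inside unique_coords (first-seen order), mimicking what a
    unique-with-inverse operation provides."""
    unique, index_of, inverse = [], {}, []
    for c in coords:
        if c not in index_of:
            index_of[c] = len(unique)
            unique.append(c)
        inverse.append(index_of[c])
    return unique, inverse

def scatter_mean(feats, inverse, num_voxels):
    """Mean-reduce per-point features into per-voxel features."""
    sums = [0.0] * num_voxels
    counts = [0] * num_voxels
    for f, v in zip(feats, inverse):
        sums[v] += f
        counts[v] += 1
    return [s / c for s, c in zip(sums, counts)]

coords = [(0, 0), (1, 2), (0, 0), (1, 2), (3, 1)]
feats = [1.0, 2.0, 3.0, 4.0, 5.0]
unique, inverse = build_reduce_map(coords)
print(unique)                                     # [(0, 0), (1, 2), (3, 1)]
print(inverse)                                    # [0, 1, 0, 1, 2]
print(scatter_mean(feats, inverse, len(unique)))  # [2.0, 3.0, 5.0]
print(build_reduce_map([]))                       # ([], []) — empty input is valid
```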
- 04 Apr, 2021 1 commit
Wenwei Zhang authored
* fix pt1.8 issues
- 29 Mar, 2021 1 commit
Wenwei Zhang authored
* fix compilation error in pytorch 1.7
* add pt1.7 build
* Update build.yml
- 02 Mar, 2021 1 commit
zhanggefan authored
[Fix] fix a bug that may cause compilation failure of dynamic voxelization when using GPUs with compute capability lower than 6.x (#326)
* fix a bug that may cause compilation failure of dynamic voxelization when using GPUs with compute capability lower than 6.x; also fix imperfect kernel code that may unintentionally discard valid points when the input point count is larger than 50000 * 512 (nearly impossible though)
* modify scatter_points_cuda.cu to ensure backward compatibility with PyTorch 1.5 on CUDA 9.0
* fix the DynamicScatter gradient-check failure by explicitly marking non-floating-point tensors as non-differentiable
- 25 Feb, 2021 1 commit
zhanggefan authored
* a faster & more memory-efficient implementation of DynamicScatter
* fix format issues and add pytest skip code for tests on machines without CUDA support
* trivial changes: decrease the number of kernel threads per block to 512, to enable inference on GPUs with compute capability lower than 2.0; change the backpropagation behavior of max-reduction so that, when multiple points share the same maximum feature value, only the first point (with the lowest row index) among them propagates the output gradient back. Before this change, all points with the same maximum feature value propagated the output gradient back. The new behavior is consistent with torch.max. This change may cause a gradcheck failure in test_dynamic_scatter.py; please do not worry about it, because torch.max fails the gradcheck too.
* fix typo
Co-authored-by: zhanggefan <1152009@tongji.edu.cn>
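The max-reduction gradient rule described in that commit can be sketched in plain Python (a torch-free illustration of the rule, not the actual CUDA kernel): among points that tie for the maximum feature value, only the first one receives the output gradient, matching torch.max.

```python
# Sketch of max-reduction with first-index tie-breaking in the backward pass.

def max_reduce(values):
    """Forward: return (max_value, argmax) with first-index tie-breaking."""
    best_idx = 0
    for i, v in enumerate(values):
        if v > values[best_idx]:   # strict '>' keeps the first maximum on ties
            best_idx = i
    return values[best_idx], best_idx

def max_reduce_backward(values, grad_out):
    """Backward: route grad_out only to the first argmax position;
    every other input, including later tied maxima, gets zero gradient."""
    _, best_idx = max_reduce(values)
    grad_in = [0.0] * len(values)
    grad_in[best_idx] = grad_out
    return grad_in

vals = [1.0, 7.0, 7.0, 3.0]           # two points tie for the maximum
print(max_reduce(vals))                # (7.0, 1)
print(max_reduce_backward(vals, 1.0))  # [0.0, 1.0, 0.0, 0.0]
```

Before the change described above, the backward pass would instead have assigned the gradient to every tied position (indices 1 and 2 here).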
- 22 Jan, 2021 1 commit
xiliu8006 authored
* fix bug when num_features != 4
* add voxelization unittest
* fix CI without GPU
* use the numpy version to test the CUDA version
Co-authored-by: Guanghui Ren (任广辉) <sundrops.ren@gmail.com>
- 27 Apr, 2020 1 commit
zhangwenwei authored
- 14 Apr, 2020 1 commit
zhangwenwei authored