Remove CPU overheads of torch.cuda.get_device_properties() by caching it (#1722)
* build pybind of sm_arch in TE-PyTorch
* check sm_arch for batch_p2p_comm in CP+P2P
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
* fix device compute capability of pytorch tests
* bug fix
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
* Revert "fix device compute capability of pytorch tests" (this reverts commit 85886eb35dcf57a37ddc98a13d283f7a6d8f8e32)
* revert changes and resolve conflict
* Revert "bug fix" (this reverts commit dd75c64c62e882ee5e3b54591b86f89c349ad3b0)
* manually revert changes
* [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
* cache torch.cuda.get_device_properties

---------

Signed-off-by: Xiaowei Ren <xren@nvidia.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>