- 08 Sep, 2021 1 commit
  - rusty1s authored
- 26 Aug, 2021 2 commits
- 23 Aug, 2021 3 commits
- 18 Aug, 2021 2 commits
  - Matthias Fey authored: Let torch determine correct CUDA architecture
  - rusty1s authored
- 17 Aug, 2021 1 commit
  - Romeo Valentin authored:
    See `pytorch/torch/utils/cpp_extension.py:CUDAExtension`:
    > By default the extension will be compiled to run on all archs of the cards visible during
    > the building process of the extension, plus PTX. If down the road a new card is installed
    > the extension may need to be recompiled. If a visible card has a compute capability (CC)
    > that's newer than the newest version for which your nvcc can build fully-compiled binaries,
    > PyTorch will make nvcc fall back to building kernels with the newest version of PTX your
    > nvcc does support (see below for details on PTX).
    >
    > You can override the default behavior using `TORCH_CUDA_ARCH_LIST` to explicitly specify
    > which CCs you want the extension to support:
    >
    > TORCH_CUDA_ARCH_LIST="6.1 8.6" python build_my_extension.py
    > TORCH_CUDA_ARCH_LIST="5.2 6.0 6.1 7.0 7.5 8.0 8.6+PTX" python build_my_extension.py
    >
    > The +PTX option causes extension kernel binaries to include PTX instructions for the
    > specified CC. PTX is an intermediate representation that allows kernels to runtime-compile
    > for any CC >= the specified CC (for example, 8.6+PTX generates PTX that can runtime-compile
    > for any GPU with CC >= 8.6). This improves your binary's forward compatibility. However,
    > relying on older PTX to provide forward compat by runtime-compiling for newer CCs can
    > modestly reduce performance on those newer CCs. If you know exact CC(s) of the GPUs you
    > want to target, you're always better off specifying them individually. For example, if you
    > want your extension to run on 8.0 and 8.6, "8.0+PTX" would work functionally because it
    > includes PTX that can runtime-compile for 8.6, but "8.0 8.6" would be better.
    >
    > Note that while it's possible to include all supported archs, the more archs get included
    > the slower the building process will be, as it will build a separate kernel image for
    > each arch.
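    Since the default build targets whatever CCs torch detects on the visible cards, a quick
    way to check what those would be is to query each device directly. A minimal sketch,
    illustrative only and not part of this commit:

    ```python
    import torch

    # List the compute capability (CC) of each visible GPU, i.e. the archs
    # CUDAExtension would target when TORCH_CUDA_ARCH_LIST is unset.
    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            major, minor = torch.cuda.get_device_capability(i)
            print(f"cuda:{i} -> CC {major}.{minor}")
    else:
        print("No visible CUDA device; nvcc falls back to a default arch list.")
    ```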
- 16 Aug, 2021 1 commit
  - rusty1s authored
- 15 Aug, 2021 1 commit
  - rusty1s authored
- 10 Aug, 2021 3 commits
- 29 Jul, 2021 7 commits
- 28 Jul, 2021 2 commits
- 22 Jul, 2021 1 commit
  - rusty1s authored
- 21 Jul, 2021 1 commit
  - rusty1s authored
- 20 Jul, 2021 2 commits
- 14 Jul, 2021 5 commits
- 13 Jul, 2021 3 commits
- 12 Jul, 2021 1 commit
  - Matthias Fey authored: [WIP] Implement HGSampling algorithm from the Heterogeneous Graph Transformer paper
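    For context on the commit above: HGSampling (Hu et al., 2020, Algorithm 1) keeps a
    per-node-type budget of candidate neighbors and samples nodes with probability
    proportional to the squared normalized budget. A condensed sketch of that idea follows;
    the adjacency format, function name, and signature are hypothetical and are not this
    repository's API:

    ```python
    import random
    from collections import defaultdict

    def hg_sampling(adj, seeds, n_per_type, n_layers):
        """Condensed sketch of HGSampling (Hu et al., 2020, Alg. 1).

        `adj[(ntype, node)]` is assumed to map to a list of
        (neighbor_type, neighbor) pairs; `seeds` is an iterable of
        (ntype, node) pairs.
        """
        sampled = set(seeds)          # sampled (ntype, node) pairs
        budget = defaultdict(float)   # (ntype, node) -> accumulated weight

        def add_to_budget(src):
            # Each un-sampled neighbor receives the source's normalized
            # degree (1 / deg), so high-degree sources spread their weight.
            nbrs = adj.get(src, [])
            for nbr in nbrs:
                if nbr not in sampled:
                    budget[nbr] += 1.0 / max(len(nbrs), 1)

        for s in seeds:
            add_to_budget(s)

        for _ in range(n_layers):
            # Group current candidates by node type (snapshot per layer).
            by_type = defaultdict(list)
            for key, w in budget.items():
                by_type[key[0]].append((key, w))
            for ntype, cand in by_type.items():
                # Sample proportionally to the squared budget, which lowers
                # the variance of the importance-sampling estimate.
                total = sum(w * w for _, w in cand)
                keys = [k for k, _ in cand]
                probs = [w * w / total for _, w in cand]
                k = min(n_per_type, len(keys))
                picks = set(random.choices(keys, weights=probs, k=k))
                for p in picks:
                    sampled.add(p)
                    budget.pop(p, None)   # sampled nodes leave the budget
                    add_to_budget(p)      # their neighbors become candidates
        return sampled
    ```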
- 03 Jul, 2021 1 commit
  - rusty1s authored
- 02 Jul, 2021 2 commits
- 27 Jun, 2021 1 commit
  - Chantat Eksombatchai authored