1. 15 Sep, 2021 2 commits
  2. 13 Sep, 2021 1 commit
  3. 10 Sep, 2021 1 commit
  4. 09 Sep, 2021 10 commits
  5. 08 Sep, 2021 4 commits
  6. 07 Sep, 2021 5 commits
  7. 24 Aug, 2021 1 commit
  8. 18 Aug, 2021 2 commits
    • add TORCH_CUDA_ARCH_LIST to building wheels · 504da5eb
      rusty1s authored
    • Let torch determine correct cuda architecture (#231) · 76d9d051
      RomeoV authored
      
      
      * Let torch determine correct cuda architecture
      
      See `pytorch/torch/utils/cpp_extension.py:CUDAExtension`:
      >   By default the extension will be compiled to run on all archs of the cards visible during the
      >   building process of the extension, plus PTX. If down the road a new card is installed the
      >   extension may need to be recompiled. If a visible card has a compute capability (CC) that's
      >   newer than the newest version for which your nvcc can build fully-compiled binaries, Pytorch
      >   will make nvcc fall back to building kernels with the newest version of PTX your nvcc does
      >   support (see below for details on PTX).
      
      >   You can override the default behavior using `TORCH_CUDA_ARCH_LIST` to explicitly specify which
      >   CCs you want the extension to support:
      
      >   TORCH_CUDA_ARCH_LIST="6.1 8.6" python build_my_extension.py
      >   TORCH_CUDA_ARCH_LIST="5.2 6.0 6.1 7.0 7.5 8.0 8.6+PTX" python build_my_extension.py
      
      >   The +PTX option causes extension kernel binaries to include PTX instructions for the specified
      >   CC. PTX is an intermediate representation that allows kernels to runtime-compile for any CC >=
      >   the specified CC (for example, 8.6+PTX generates PTX that can runtime-compile for any GPU with
      >   CC >= 8.6). This improves your binary's forward compatibility. However, relying on older PTX to
      >   provide forward compat by runtime-compiling for newer CCs can modestly reduce performance on
      >   those newer CCs. If you know exact CC(s) of the GPUs you want to target, you're always better
      >   off specifying them individually. For example, if you want your extension to run on 8.0 and 8.6,
      >   "8.0+PTX" would work functionally because it includes PTX that can runtime-compile for 8.6, but
      >   "8.0 8.6" would be better.
      
      >   Note that while it's possible to include all supported archs, the more archs get included the
      >   slower the building process will be, as it will build a separate kernel image for each arch.
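      The arch-list syntax quoted above expands into per-architecture nvcc `-gencode` flags. The following is a simplified sketch of that mapping, not PyTorch's actual implementation, written to make the SASS-vs-PTX distinction concrete:

      ```python
      def arch_list_to_nvcc_flags(arch_list: str) -> list[str]:
          """Translate a TORCH_CUDA_ARCH_LIST-style string, e.g. "8.0 8.6+PTX",
          into nvcc -gencode flags (simplified sketch)."""
          flags = []
          for arch in arch_list.split():
              has_ptx = arch.endswith("+PTX")
              num = arch.removesuffix("+PTX").replace(".", "")
              # Fully compiled binary (SASS) for this compute capability.
              flags.append(f"-gencode=arch=compute_{num},code=sm_{num}")
              if has_ptx:
                  # Also embed PTX so GPUs with CC >= this can JIT at runtime.
                  flags.append(f"-gencode=arch=compute_{num},code=compute_{num}")
          return flags

      print(arch_list_to_nvcc_flags("8.0 8.6+PTX"))
      # → ['-gencode=arch=compute_80,code=sm_80',
      #    '-gencode=arch=compute_86,code=sm_86',
      #    '-gencode=arch=compute_86,code=compute_86']
      ```

      This shows why "8.0 8.6" beats "8.0+PTX" for known targets: the former emits native SASS for both CCs, while the latter relies on runtime compilation of the 8.0 PTX on 8.6 hardware.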
      
      * add TORCH_CUDA_ARCH_LIST to building wheels
      Co-authored-by: rusty1s <matthias.fey@tu-dortmund.de>
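      In wheel-building CI, the change these commits describe amounts to exporting the variable before the build. A sketch, with an illustrative arch list rather than the one the repo actually pins:

      ```shell
      # Pin the CUDA architectures the extension is compiled for
      # (values here are illustrative, not the repo's actual list).
      export TORCH_CUDA_ARCH_LIST="6.0 7.0 7.5 8.0+PTX"
      echo "$TORCH_CUDA_ARCH_LIST"
      # A wheel build would then pick this up, e.g.:
      #   python setup.py bdist_wheel
      ```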
  9. 11 Aug, 2021 1 commit
  10. 05 Aug, 2021 1 commit
  11. 29 Jul, 2021 9 commits
  12. 28 Jul, 2021 3 commits