"tests/vscode:/vscode.git/clone" did not exist on "d1fab39e59a98ddce2490b0ef85633bb33cb0301"
  1. 30 Jun, 2022 1 commit
  2. 22 Apr, 2022 1 commit
  3. 11 Mar, 2022 1 commit
  4. 03 Feb, 2022 1 commit
    • mac arm64(m1) (#200) · d987d295
      Amit Aflalo authored
      * mac arm64(m1)
      
      * linting
      
      * Compile for mac arm64
      
      * removing .DS_Store
  5. 01 Feb, 2022 1 commit
    • Export symbols (#198) · 3bf43eb0
      Daniel Falbel authored
      * Mark exported symbols with `SPARSE_API` so they are available in the DLL on Windows (a sketch of such a macro follows this entry).
      
      * Export symbols by default in the Python library.
      
      * Sync headers with implementation.
      
      * Include the `SPARSE_API` macro.
      
      * Fix linting issue.
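      The `SPARSE_API` macro itself is not reproduced in this log. As a rough
      sketch, export macros of this kind conventionally look like the
      following; the `torchsparse_EXPORTS` guard and the `cuda_version`
      declaration are illustrative assumptions, not code taken from the
      repository:

          // Sketch of a conventional Windows/ELF export macro; the real
          // SPARSE_API definition in the repository may differ.
          #include <cstdint>

          #ifdef _WIN32
          #  ifdef torchsparse_EXPORTS   // defined while building the DLL itself
          #    define SPARSE_API __declspec(dllexport)
          #  else
          #    define SPARSE_API __declspec(dllimport)
          #  endif
          #else
          #  define SPARSE_API __attribute__((visibility("default")))
          #endif

          // Annotate every symbol that consumers of the shared library
          // should be able to resolve.
          SPARSE_API std::int64_t cuda_version();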
  6. 30 Jan, 2022 1 commit
    • Add options to conditionally include Python (#196) · 8cc819d5
      Daniel Falbel authored
      * Add `WITH_PYTHON` to conditionally link to Python.
      
      * Only include `Python.h` when `WITH_PYTHON` is set.
      
      * Avoid including `extension.h` as it includes `Python.h`.
      
      * Better way to include `getpid()`.
      
      * Define `WITH_PYTHON` when building with setup.py.
      
      * Only include `PyInit` when building with Python (see the sketch after this entry).
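      A minimal sketch of the conditional-inclusion pattern these commits
      describe, under the assumption that setup.py defines WITH_PYTHON; the
      module name `_sparse` is a hypothetical placeholder:

          // Only pull in the Python headers when building as a Python
          // extension module; plain libtorch builds leave WITH_PYTHON unset.
          #ifdef WITH_PYTHON
          #include <Python.h>

          // The PyInit entry point likewise only exists for Python builds.
          PyMODINIT_FUNC PyInit__sparse(void) {
            static struct PyModuleDef module_def = {
                PyModuleDef_HEAD_INIT,
                "_sparse",  // hypothetical module name
                nullptr,    // no module docstring
                -1,         // no per-module state
                nullptr};   // no methods table; operators register via torch
            return PyModule_Create(&module_def);
          }
          #endif  // WITH_PYTHON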
  7. 26 Jan, 2022 1 commit
    • Skip unnecessary assertions and enable non-blocking data transfers (#195) · fe8c3ce3
      Nick Stathas authored
      * Uses the `trust_data` invariant to skip unnecessary blocking assertions during construction of `SparseStorage` objects.
      * Refactors the dtype and device transfer APIs to align with `torch.Tensor` while maintaining backward compatibility.
      * No longer constructs dummy tensors when changing dtype or device (see the sketch after this entry).
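      For context, a non-blocking device transfer in plain libtorch (not code
      from this repository) looks roughly like the sketch below; the copy can
      only overlap with host work when the source tensor lives in pinned
      memory:

          #include <torch/torch.h>

          int main() {
            if (!torch::cuda::is_available()) return 0;

            // Pinned (page-locked) host memory is required for a truly
            // asynchronous host-to-device copy.
            auto cpu = torch::randn(
                {1024, 1024}, torch::TensorOptions().pinned_memory(true));

            // non_blocking=true lets the copy overlap with later host work.
            auto gpu = cpu.to(torch::kCUDA, /*non_blocking=*/true);

            // Synchronize before relying on the result.
            torch::cuda::synchronize();
            return 0;
          }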
  8. 13 Nov, 2021 1 commit
  9. 08 Sep, 2021 1 commit
  10. 17 Aug, 2021 1 commit
    • Let torch determine correct cuda architecture · 407f53e2
      Romeo Valentin authored
      See `torch/utils/cpp_extension.py:CUDAExtension`:
      >   By default the extension will be compiled to run on all archs of the cards visible during the
      >   building process of the extension, plus PTX. If down the road a new card is installed the
      >   extension may need to be recompiled. If a visible card has a compute capability (CC) that's
      >   newer than the newest version for which your nvcc can build fully-compiled binaries, Pytorch
      >   will make nvcc fall back to building kernels with the newest version of PTX your nvcc does
      >   support (see below for details on PTX).
      
      >   You can override the default behavior using `TORCH_CUDA_ARCH_LIST` to explicitly specify which
      >   CCs you want the extension to support:
      
      >   TORCH_CUDA_ARCH_LIST="6.1 8.6" python build_my_extension.py
      >   TORCH_CUDA_ARCH_LIST="5.2 6.0 6.1 7.0 7.5 8.0 8.6+PTX" python build_my_extension.py
      
      >   The +PTX option causes extension kernel binaries to include PTX instructions for the specified
      >   CC. PTX is an intermediate representation that allows kernels to runtime-compile for any CC >=
      >   the specified CC (for example, 8.6+PTX generates PTX that can runtime-compile for any GPU with
      >   CC >= 8.6). This improves your binary's forward compatibility. However, relying on older PTX to
      >   provide forward compat by runtime-compiling for newer CCs can modestly reduce performance on
      >   those newer CCs. If you know exact CC(s) of the GPUs you want to target, you're always better
      >   off specifying them individually. For example, if you want your extension to run on 8.0 and 8.6,
      >   "8.0+PTX" would work functionally because it includes PTX that can runtime-compile for 8.6, but
      >   "8.0 8.6" would be better.
      
      >   Note that while it's possible to include all supported archs, the more archs get included the
      >   slower the building process will be, as it will build a separate kernel image for each arch.
  11. 28 Jul, 2021 1 commit
  12. 26 Jun, 2021 7 commits
  13. 25 Jun, 2021 1 commit
  14. 17 Jun, 2021 1 commit
  15. 12 May, 2021 1 commit
  16. 05 Mar, 2021 2 commits
  17. 03 Mar, 2021 1 commit
  18. 12 Feb, 2021 1 commit
  19. 12 Nov, 2020 1 commit
  20. 05 Nov, 2020 1 commit
  21. 02 Nov, 2020 1 commit
  22. 28 Jul, 2020 1 commit
  23. 01 Jul, 2020 1 commit
  24. 17 Jun, 2020 1 commit
  25. 15 Jun, 2020 1 commit
  26. 22 May, 2020 1 commit
  27. 11 May, 2020 1 commit
  28. 23 Apr, 2020 1 commit
  29. 12 Apr, 2020 2 commits
  30. 06 Apr, 2020 1 commit
  31. 21 Mar, 2020 1 commit
  32. 24 Feb, 2020 1 commit