- 02 May, 2025 1 commit
Matthew Douglas authored
* Add aarch64 cpu tests and CUDA build to nightly workflow
* aarch64: limit CUDA targets to sm75, sm80, sm90, sm100
* Update build cpu script
* fix
* Update auditwheel for aarch64
- 29 Apr, 2025 1 commit
Matthew Douglas authored
* Run unit tests on GH Actions
* fix
* fix
* trigger workflow
* Update
* Update
* Update
* Run tests nightly
* Disable paged optimizer test on Windows
* Skip unit tests on Windows for CUDA 12.x (driver on runner is too old)
- 22 Apr, 2025 1 commit
Matthew Douglas authored
* Stop building for CUDA toolkit < 11.8
* Simplify
* Drop sm70 from cu128 build targets to align with pytorch
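The build-target gating described in these commits (no builds for toolkits older than 11.8, sm70 dropped from the cu128 build, and the reduced sm75/sm80/sm90/sm100 set for aarch64) can be sketched as a small selection function. This is an illustration only: the version cutoffs, the x86-64 architecture list, and the function name are assumptions for this sketch, not the repository's actual build-script logic.

```python
# Illustrative sketch only: per-toolkit CUDA build-target selection.
# Cutoffs and arch lists are assumptions, not the real build script.

def cuda_targets(toolkit, aarch64=False):
    """Pick compute capabilities to build for a given CUDA toolkit version."""
    if toolkit < (11, 8):
        # Builds for CUDA toolkit < 11.8 were stopped entirely.
        raise ValueError("CUDA toolkit < 11.8 is no longer supported")
    if aarch64:
        # aarch64 nightly builds target a reduced set of capabilities.
        return ["75", "80", "90", "100"]
    targets = ["70", "75", "80", "86", "89", "90"]
    if toolkit >= (12, 8):
        targets.remove("70")   # drop sm70 from cu128 to align with pytorch
        targets.append("100")  # Blackwell (sm100) needs a newer toolkit
    return targets
```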
- 24 Feb, 2025 2 commits
Matthew Douglas authored
Matthew Douglas authored
- 23 Jan, 2025 1 commit
Matthew Douglas authored
- 22 Jan, 2025 1 commit
Johnny authored
* initial support blackwell
* Update CHANGELOG.md (Co-authored-by: Matthew Douglas <38992547+matthewdouglas@users.noreply.github.com>)
* Update CMakeLists.txt
* Update CHANGELOG.md
* fix build-cuda.sh
* fix build-cuda.sh
* fix cuda 12.7 build-cuda.sh
* Update build-cuda.sh
* Update cuda from 12.6.2 to 12.6.3
* Update .github/workflows/python-package.yml (Co-authored-by: Matthew Douglas <38992547+matthewdouglas@users.noreply.github.com>)
* Update install_cuda.py (Co-authored-by: Matthew Douglas <38992547+matthewdouglas@users.noreply.github.com>)
* Update install_cuda.sh (Co-authored-by: Matthew Douglas <38992547+matthewdouglas@users.noreply.github.com>)
* Update .github/scripts/build-cuda.sh
* Update install_cuda.sh

Co-authored-by: Matthew Douglas <38992547+matthewdouglas@users.noreply.github.com>
- 05 Dec, 2024 1 commit
Matthew Douglas authored
* Start of int8 refactor: remove col32/col_ampere/col_turing transforms in new igemmlt implementation
* Fix unintended change
* New naive mm_dequant kernel for row-major; cleanup
* fix
* int8 refactor: initial sparse decomp, cleanup
* Int8 refactoring: remove separate NO_CUBLASLT build; more cleanup
* int8: inference optimizations, some cleanup
* int8: more tests passing, cleanup
* int8 - more cleanup, most tests passing
* int8: specify CUDA stream for int8 ops
* perf: reduce overhead from getting cudaStream ptr
* Mark some functions for deprecation.
* int8 sparse decomp: small perf improvement
* update setup.py
* Update bitsandbytes/autograd/_functions.py (Co-authored-by: Aarni Koskela <akx@iki.fi>)
* Update bitsandbytes/functional.py (Co-authored-by: Aarni Koskela <akx@iki.fi>)
* Update bitsandbytes/functional.py (Co-authored-by: Aarni Koskela <akx@iki.fi>)
* Update bitsandbytes/research/autograd/_functions.py (Co-authored-by: Aarni Koskela <akx@iki.fi>)
* int8 - perf improvement for sparse decomposition inference; deprecate get_tensor_stream() in favor of new private fn
* int8 cleanup
* Ignore ruff rule ISC001 (incompatible with formatter)
* add comment
* int8 more cleanup
* Update bitsandbytes/functional.py (Co-authored-by: Aarni Koskela <akx@iki.fi>)
* int8: rename / deprecate old fn signatures
* Update bitsandbytes/functional.py (Co-authored-by: Aarni Koskela <akx@iki.fi>)
* type annotation
* format update
* Update bitsandbytes/research/autograd/_functions.py (Co-authored-by: Aarni Koskela <akx@iki.fi>)
* cleanup
* Add comment to explain division optimization
* more cleanup
* Update bitsandbytes/functional.py (Co-authored-by: Aarni Koskela <akx@iki.fi>)
* Update bitsandbytes/functional.py (Co-authored-by: Aarni Koskela <akx@iki.fi>)
* Update bitsandbytes/functional.py (Co-authored-by: Aarni Koskela <akx@iki.fi>)
* cleanup
* Type annotations, cleanup
* remove unused kernels; improved type annotations
* small perf optimization for single-GPU systems
* small perf optimization for single-GPU systems
* update docstrings
* Improve docs and tests
* Update docstring
* Update test
* add benchmarking script
* test cleanup: add deprecated marker, move benchmarks out
* Add int8 dequant function; misc improvements
* int8 matmul fallback for inner dims not divisible by 4
* improve register usage of kInt8VectorQuant - especially for A100/H100
* disable fail-fast for package build
* maxwell compat
* ptxas verbose
* docs update
* doc update
* backward fix
* Bugfix sparse decomp
* Int8 fix for PEFT OLoRA init
* Fix test for deprecated spmm_coo
* test improvement
* doc update
* typo
* doc cleanup
* docs
* add inference benchmark script
* Add benchmarks, doc update

Co-authored-by: Aarni Koskela <akx@iki.fi>
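At the heart of this refactor (the igemmlt path, the row-major mm_dequant kernel, and the added int8 dequant function) is row-wise absmax int8 quantization: each row is scaled so its largest absolute value maps to 127, rounded to int8, and dequantized by multiplying back with the stored per-row scale. Below is a plain-Python sketch of that scheme for illustration only; it is not bitsandbytes code, and the helper names are made up here.

```python
# Plain-Python sketch of row-wise absmax int8 quantization.
# Illustrative only; not bitsandbytes code.

def quantize_rowwise(rows):
    """Quantize each row to int8 values with a per-row absmax scale."""
    q_rows, scales = [], []
    for row in rows:
        absmax = max(abs(x) for x in row) or 1.0  # guard all-zero rows
        scale = absmax / 127.0
        q_rows.append([max(-127, min(127, round(x / scale))) for x in row])
        scales.append(scale)
    return q_rows, scales

def dequantize_rowwise(q_rows, scales):
    """Recover approximate float values: q * per-row scale."""
    return [[q * s for q in row] for row, s in zip(q_rows, scales)]
```

The rounding error per element is bounded by half the row scale, which is why outlier values in a row hurt precision for everything else in it; that is the problem the sparse decomposition path mentioned above addresses by handling outlier columns separately.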
- 11 Mar, 2024 1 commit
Aarni Koskela authored