- 25 Jan, 2024 1 commit
-
Miles Cranmer authored
* Fix `max_memory` example on README
  - The new `max_memory` syntax expects a dictionary
  - This change also accounts for multiple devices
* Fix model name in `from_pretrained` on README
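A minimal sketch of the corrected dictionary form, assuming a typical bitsandbytes README setup (the model name and memory budgets here are illustrative placeholders, not the README's actual values):

```python
import torch
from transformers import AutoModelForCausalLM

# `max_memory` expects a dictionary mapping each device to its budget:
# keys are GPU indices (plus the string "cpu"), values are size strings.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",          # illustrative model name
    device_map="auto",
    load_in_8bit=True,            # bitsandbytes 8-bit loading
    max_memory={0: "10GiB", 1: "10GiB", "cpu": "30GiB"},  # multiple devices
)
```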
-
- 24 Jan, 2024 2 commits
-
Aarni Koskela authored
* Skip any test that implicitly uses CUDA when running on a non-CUDA box
* Add a `requires_cuda` fixture
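A minimal sketch of such a fixture, assuming pytest and torch (the repo's actual implementation may differ):

```python
import pytest
import torch

@pytest.fixture
def requires_cuda() -> bool:
    # Skip the requesting test on a CPU-only machine instead of failing it.
    if not torch.cuda.is_available():
        pytest.skip("this test requires a CUDA-capable device")
    return True
```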
-
Animesh Kumar authored
-
- 23 Jan, 2024 4 commits
-
Charles Coulombe authored
* Added install requirements to setup
* Update setup.py
Co-authored-by: Aarni Koskela <akx@iki.fi>
-
Kaiser Pister authored
Remove redundant key
-
Korytov Pavel authored
-
Zoey authored
Bump CUDA 12.2.0 to 12.2.1, fix setup support for CUDA 12.1 (#703), sort compute-capability sets to select the max
* Add support for CUDA 12.1
* Update README to include the CUDA 12.1 version
* Add support for >= 12.x
* Temporary version of bitsandbytes PR 527: sort compute-capability sets to select the max
* Modify PR 506 to support C++20
* Add CUDA 12.2
Co-authored-by: Jeongseok Kang <jskang@lablup.com>
Co-authored-by: PriNova <info@prinova.de>
Co-authored-by: PriNova <31413214+PriNova@users.noreply.github.com>
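The compute-capability fix amounts to ordering capability strings numerically rather than lexically; a hedged sketch of the idea (variable names are illustrative, not the repo's):

```python
# Pick the highest compute capability among the detected GPUs.
# Lexical comparison happens to work for "7.5" vs "8.0" but breaks for
# double-digit majors (e.g. "10.0" < "9.0" as strings), so compare tuples.
ccs = {"7.5", "8.6", "8.0"}  # hypothetical detected capabilities
max_cc = max(ccs, key=lambda cc: tuple(map(int, cc.split("."))))
print(max_cc)  # "8.6"
```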
-
- 17 Jan, 2024 1 commit
-
Benjamin Warner authored
This PR adds initial FSDP support for training QLoRA models. It enables basic FSDP and CPU-offload support; low-memory training via the FSDP `sync_module_states` option is unsupported. This PR builds off of #840, commit 8278fca, and BNB FSDP by @TimDettmers and @Titus-von-Koeller. An example of using this PR to finetune QLoRA models with FSDP can be found in the demo repo AnswerDotAI/fsdp_qlora.
* Minimal changes for fp32 4-bit storage, from BNB commit 8278fca
* `Params4bit` with selectable storage dtype
* Possible fix for double-quantizing linear weight & quant storage dtype
* Minor fixes in `Params4bit` for PEFT tests
* Remove redundant
* Add float16
* Update test
* Remove float16 quant cast, as there are fp32, bf16, & fp16 quant kernels
Co-authored-by: Kerem Turgutlu <keremturgutlu@gmail.com>
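A hedged sketch of what the selectable storage dtype looks like in use, assuming the `quant_storage` keyword exposed on `Linear4bit` in later bitsandbytes releases (this exact commit's argument names may differ):

```python
import torch
import bitsandbytes as bnb

# Store the packed 4-bit weights in bf16 containers so FSDP can flat-shard
# them uniformly alongside the model's other bf16 parameters.
layer = bnb.nn.Linear4bit(
    1024, 1024,
    compute_dtype=torch.bfloat16,
    quant_type="nf4",
    quant_storage=torch.bfloat16,  # selectable storage dtype (assumed keyword)
)
```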
-
- 15 Jan, 2024 1 commit
-
Titus von Koeller authored
-
- 12 Jan, 2024 3 commits
-
Younes Belkada authored
Delete .github/workflows/delete_doc_commment.yml
-
Younes Belkada authored
-
Titus authored
-
- 08 Jan, 2024 2 commits
-
Ian Butler authored
-
Tim Dettmers authored
-
- 07 Jan, 2024 1 commit
-
Tim Dettmers authored
fix array index out of bounds in kgetColRowStats
-
- 02 Jan, 2024 5 commits
-
Tim Dettmers authored
Fix typo "quanitze"
-
Tim Dettmers authored
Added scipy to requirements.txt
-
Tim Dettmers authored
Make sure bitsandbytes handles permission errors in the right order
-
Tim Dettmers authored
Add version attribute as per Python convention
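A sketch of the convention the commit refers to (the version string is a placeholder, not an actual release number):

```python
# bitsandbytes/__init__.py (sketch)
__version__ = "0.0.0"  # placeholder value; the real string tracks the release

# which enables the conventional check:
# >>> import bitsandbytes; bitsandbytes.__version__
```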
-
Tim Dettmers authored
Fix parameter name in error message
-
- 01 Jan, 2024 13 commits
-
Tim Dettmers authored
Update README.md
-
Tim Dettmers authored
Update README.md
-
Tim Dettmers authored
Fixed missing `Embedding` export
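A sketch of the kind of re-export such a fix adds, assuming the module layout under `bitsandbytes.nn` (illustrative, not the exact diff):

```python
# bitsandbytes/nn/__init__.py (sketch): re-export so that
# `from bitsandbytes.nn import Embedding` works again.
from .modules import Embedding, StableEmbedding
```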
-
Tim Dettmers authored
Fix typo in test_optim.py
-
Tim Dettmers authored
improve `make clean` target
-
Tim Dettmers authored
doc: Fix typo in how_to_use_nonpytorch_cuda.md
-
Tim Dettmers authored
-
Tim Dettmers authored
Robustness fix: don't break in case of directories without read permission
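A hedged sketch of the pattern, illustrating the guard rather than the repo's actual code (the helper name and the filename filter are hypothetical):

```python
import os

def find_cuda_libs(search_dirs):  # hypothetical helper
    """Collect candidate CUDA library paths, skipping unreadable directories."""
    found = []
    for d in search_dirs:
        try:
            entries = os.listdir(d)
        except (PermissionError, FileNotFoundError):
            # Unreadable or vanished directory: skip it instead of crashing.
            continue
        found.extend(os.path.join(d, e) for e in entries if "cudart" in e)
    return found
```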
-
Tim Dettmers authored
Add env var related to Google systems to ignored list
-
Tim Dettmers authored
Small fix to getting started code in README
-
Tim Dettmers authored
Fixed wget link for installing CUDA
-
Tim Dettmers authored
-
Tim Dettmers authored
FIX missing closing quote in README example
-
- 19 Dec, 2023 3 commits
-
Younes Belkada authored
-
Younes Belkada authored
-
Younes Belkada authored
-
- 11 Dec, 2023 4 commits
-
Sebastian Raschka authored
-
Sebastian Raschka authored
-
Titus von Koeller authored
-
Titus von Koeller authored
-