"doc/vscode:/vscode.git/clone" did not exist on "13382f88abd20238330f8cb2a473fff4abca3a89"
- 13 Mar, 2024 1 commit
  Ruff authored
- 06 Mar, 2024 1 commit
  Aarni Koskela authored
- 05 Mar, 2024 1 commit
  rdyro authored
- 21 Feb, 2024 1 commit
  Marc Sun authored
  * fix deepcopy and copy
  * add tests
  * remove line
  * ruff fix
  * ruff
  * Update tests/test_linear4bit.py
    Co-authored-by: Aarni Koskela <akx@iki.fi>
  * add missing state
  * ruff format
  * ignore formatting commit for git blame
  * Params4bit should be initialized as frozen by default
  * add test for serialization round-tripping
  * add comparison capability for QuantState
  * add back accidentally removed line
  ---------
  Co-authored-by: Aarni Koskela <akx@iki.fi>
  Co-authored-by: Titus von Koeller <9048635+Titus-von-Koeller@users.noreply.github.com>
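A minimal sketch of the behaviour this commit exercises, assuming the public `bitsandbytes.nn.Linear4bit` API, a CUDA device, and plain PyTorch serialization; it is an illustration, not the code from `tests/test_linear4bit.py`:

```python
import copy
import io

import torch
import bitsandbytes as bnb

# Quantization happens when the layer is moved to the GPU.
layer = bnb.nn.Linear4bit(64, 64, quant_type="nf4", compute_dtype=torch.float16).cuda()
x = torch.randn(4, 64, dtype=torch.float16, device="cuda")

# deepcopy/copy previously lost the quantization state; after the fix the
# copy should carry an equivalent QuantState and produce the same outputs.
layer_copy = copy.deepcopy(layer)
assert torch.equal(layer.weight.data, layer_copy.weight.data)
assert torch.allclose(layer(x), layer_copy(x))

# Serialization round-trip: the packed 4-bit weight survives save/load.
buf = io.BytesIO()
torch.save(layer.state_dict(), buf)
buf.seek(0)
state = torch.load(buf, weights_only=True)
assert torch.equal(state["weight"].cpu(), layer.weight.data.cpu())
```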
- 01 Feb, 2024 1 commit
  Aarni Koskela authored
  * test_nvidia_transform: fix variable reference (`out_order` is the global parametrization list, not the test fixture argument)
  * Make `parametrize` usage more idiomatic
  * Use a more deterministic helper for `dim*` determination
  * Convert NO_CUBLASLT errors into skips too
  * Mark slow and benchmark tests as such (allows `-k "not benchmark"`)
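A hypothetical pytest snippet illustrating the patterns listed above: idiomatic `parametrize` with readable ids, turning an expected unsupported configuration into a skip, and keeping benchmark tests selectable via `-k "not benchmark"`. It is not the actual bitsandbytes test code:

```python
import pytest
import torch

@pytest.mark.parametrize("dim", [32, 256, 1024], ids=lambda d: f"dim={d}")
@pytest.mark.parametrize("out_order", ["row", "col32"], ids=lambda o: f"order={o}")
def test_transform_roundtrip(dim, out_order):
    # Stand-in for the NO_CUBLASLT case: skip instead of erroring out.
    if not torch.cuda.is_available():
        pytest.skip("no CUDA device, transform kernels unavailable")
    x = torch.randn(dim, dim, device="cuda")
    assert x.shape == (dim, dim)

def test_matmul_benchmark():
    # The name contains "benchmark", so `pytest -k "not benchmark"` skips it.
    a = torch.randn(512, 512)
    (a @ a).sum()
```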
- 30 Jan, 2024 1 commit
  Aarni Koskela authored
  * Adjust Ruff configuration
  * do not autofix always
  * be less strict around tests and benchmarks
  * adjust ignores for now
  * Ruff: autofix I and F401
  * Apply ruff autofixes
  * Fix RUF013 complaint
  * Fix mutable default in replace_linear
  * Don't use bare except
  * Wrap bitsandbytes.__main__ entrypoint in function; fix "sensible" typo
  * Fix ruff B008 (function call in arguments)
  * Add ruff noqas as suitable
  * Fix RUF005 (splat instead of concatenating)
  * Fix B018 (useless expression)
  * Add pre-commit configuration + GitHub Actions lint workflow
  * Fix unused `e` in bitsandbytes/__main__.py
  * fix merge conflict resolution error
  * run pre-commit hook
  ---------
  Co-authored-by: Titus <9048635+Titus-von-Koeller@users.noreply.github.com>
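Generic before/after patterns for the lint rules named above (mutable defaults, B008, RUF005, bare `except`); the function names and signatures below are illustrative, not the actual bitsandbytes code:

```python
from typing import Optional

import torch

# Mutable default (the replace_linear fix): a list built in the signature is
# shared across calls, so create the default inside the function instead.
def replace_modules(model: torch.nn.Module, skip_modules: Optional[list] = None):
    if skip_modules is None:
        skip_modules = ["lm_head"]
    return model, skip_modules

# B008: a function call in a default argument runs once at definition time,
# not on every call; resolve it in the body instead.
def move_to(x: torch.Tensor, device: Optional[torch.device] = None) -> torch.Tensor:
    device = device if device is not None else torch.device("cpu")
    return x.to(device)

# RUF005: unpack instead of concatenating.
base_flags = ["-O2"]
all_flags = [*base_flags, "-lcudart"]  # rather than base_flags + ["-lcudart"]

# No bare `except:`; name the exception being handled.
try:
    import triton  # noqa: F401
except ImportError:
    triton = None
```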
- 24 Jan, 2024 1 commit
  Aarni Koskela authored
  * skip any test that implicitly uses CUDA when it runs on a non-CUDA box
  * add a `requires_cuda` fixture
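A sketch of what such a fixture can look like in a plain pytest + torch setup; the fixture shipped in the repository may be implemented differently:

```python
# conftest.py
import pytest
import torch

@pytest.fixture
def requires_cuda() -> bool:
    # Skipping here means every test that requests the fixture is skipped
    # automatically on a CPU-only machine.
    if not torch.cuda.is_available():
        pytest.skip("this test requires a CUDA device")
    return True

# Usage in a test module:
def test_quantization_on_gpu(requires_cuda):
    assert torch.cuda.is_available()
```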
- 17 Jan, 2024 1 commit
  Benjamin Warner authored
  This PR adds initial FSDP support for training QLoRA models. It enables basic FSDP and CPU offload support; low-memory training via FSDP's sync_module_states option is not yet supported. This PR builds off of #840 commit 8278fca and BNB FSDP by @TimDettmers and @Titus-von-Koeller. An example of using this PR to finetune QLoRA models with FSDP can be found in the demo repo: AnswerDotAi/fsdp_qlora.
  * Minimal changes for fp32 4bit storage from BNB commit 8278fca
  * Params4bit with selectable storage dtype
  * possible fix for double quantizing linear weight & quant storage dtype
  * minor fixes in Params4bit for peft tests
  * remove redundant
  * add float16
  * update test
  * Remove float16 quant cast as there are fp32, bf16, & fp16 quant kernels
  ---------
  Co-authored-by: Kerem Turgutlu <keremturgutlu@gmail.com>
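A hedged sketch of the "selectable storage dtype" idea from the bullet list above, assuming the `quant_storage` argument on `bitsandbytes.nn.Linear4bit` matches the PR wording and a CUDA device is available; verify the argument name against the release you actually use:

```python
import torch
import bitsandbytes as bnb

# Storing the packed 4-bit weights in the same dtype as the rest of the model
# (e.g. bfloat16) lets FSDP flat-shard quantized and regular parameters together.
qlinear = bnb.nn.Linear4bit(
    4096,
    4096,
    bias=False,
    quant_type="nf4",
    compute_dtype=torch.bfloat16,
    quant_storage=torch.bfloat16,  # default storage dtype is torch.uint8
).cuda()  # quantization happens when the layer moves to the GPU

x = torch.randn(2, 4096, dtype=torch.bfloat16, device="cuda")
print(qlinear(x).dtype, qlinear.weight.dtype)  # compute dtype vs. storage dtype
```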
- 10 Nov, 2023 1 commit
  Ruslan Svirschevski authored
- 09 Nov, 2023 1 commit
  Ruslan Svirschevski authored
- 08 Nov, 2023 1 commit
  Ruslan Svirschevski authored
- 02 Nov, 2023 3 commits
  Ruslan Svirschevski authored
  Ruslan Svirschevski authored
  Ruslan Svirschevski authored