1. 01 Feb, 2024 1 commit
  2. 31 Jan, 2024 1 commit
  3. 30 Jan, 2024 2 commits
    • Aarni Koskela · 29a637bc
    • Ruff fixes (#984) · 706ec24d
      Aarni Koskela authored
      
      
      * Adjust Ruff configuration
      
      * do not always autofix
      * be less strict around tests and benchmarks
      * adjust ignores for now
      
      * Ruff: autofix I and F401
      
      * Apply ruff autofixes
      
      * Fix RUF013 complaint
      
      * Fix mutable default in replace_linear
      
      * Don't use bare except
      
      * Wrap bitsandbytes.__main__ entrypoint in function; fix "sensible" typo
      
      * Fix ruff B008 (function call in arguments)
      
      * Add ruff noqas as suitable
      
      * Fix RUF005 (splat instead of concatenating)
      
      * Fix B018 (useless expression)
      
      * Add pre-commit configuration + GitHub Actions lint workflow
      
      * Fix unused `e` in bitsandbytes/__main__.py
      
      * fix merge conflict resolution error
      
      * run pre-commit hook
      
      ---------
      Co-authored-by: Titus <9048635+Titus-von-Koeller@users.noreply.github.com>
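      A hypothetical before/after sketch of the lint categories named in this commit message; the function and parameter names below are illustrative, not taken from the bitsandbytes sources:

      ```python
      # Hypothetical snippets illustrating the kinds of lint fixes listed above.

      # Mutable default argument (as in replace_linear): build the list inside the function.
      def replace_linear(model, skip_modules=None):
          if skip_modules is None:      # was: skip_modules=["lm_head"] in the signature
              skip_modules = ["lm_head"]
          return model, skip_modules

      # RUF005: unpack an iterable instead of concatenating lists.
      def prepend_batch_dim(shape):
          return [1, *shape]            # was: [1] + list(shape)

      # Bare except (E722): catch a concrete exception type instead.
      def try_import_optional(name):
          try:
              return __import__(name)
          except ImportError:           # was: except:
              return None
      ```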
  4. 29 Jan, 2024 4 commits
  5. 28 Jan, 2024 1 commit
  6. 26 Jan, 2024 1 commit
  7. 25 Jan, 2024 1 commit
    • Fix `max_memory` example on README (#944) · 94c7f2c5
      Miles Cranmer authored
      * Fix `max_memory` example on README
      
      - The new `max_memory` syntax expects a dictionary
      - This change also accounts for multiple devices
      
      * Fix model name in `from_pretrained` on README
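      The dictionary form of `max_memory` that this commit documents can be sketched roughly as below; the model name and the memory limits are placeholders, not values taken from the commit:

      ```python
      from transformers import AutoModelForCausalLM

      # max_memory is a dict keyed by device id (and optionally "cpu"),
      # so a limit can be set per GPU when loading across multiple devices.
      max_memory_mapping = {0: "600MB", 1: "1GB"}  # placeholder limits

      model_4bit = AutoModelForCausalLM.from_pretrained(
          "bigscience/bloom-3b",   # placeholder model name
          device_map="auto",
          load_in_4bit=True,
          max_memory=max_memory_mapping,
      )
      ```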
  8. 24 Jan, 2024 2 commits
  9. 23 Jan, 2024 4 commits
  10. 17 Jan, 2024 1 commit
    • Initial FSDP Support for QLoRA Finetuning (#970) · dcfb6f81
      Benjamin Warner authored
      
      
      This PR adds initial FSDP support for training QLoRA models. It enables basic FSDP and CPU offload support; low-memory training via the FSDP sync_module_states option remains unsupported.
      
      This PR builds on #840 (commit 8278fca) and the BNB FSDP work by @TimDettmers and @Titus-von-Koeller.
      
      An example of using this PR to finetune QLoRA models with FSDP can be found in the demo repo: AnswerDotAi/fsdp_qlora.
      
      * Minimal changes for fp32 4bit storage from BNB commit 8278fca
      
      * Params4bit with selectable storage dtype
      
      * possible fix for double quantizing linear weight & quant storage dtype
      
      * minor fixes in Params4bit for peft tests
      
      * remove redundant
      
      * add float16
      
      * update test
      
      * Remove float16 quant cast as there are fp32, bf16, & fp16 quant kernels
      
      ---------
      Co-authored-by: Kerem Turgutlu <keremturgutlu@gmail.com>
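      A rough sketch of what the selectable storage dtype looks like from the user side, assuming the `quant_storage` keyword this PR adds to `Linear4bit`/`Params4bit`; the layer sizes and dtypes below are arbitrary:

      ```python
      import torch
      import bitsandbytes as bnb

      # Storing the packed 4-bit weights in a float dtype (rather than uint8) lets FSDP
      # flat-shard them alongside the model's other floating-point parameters.
      layer = bnb.nn.Linear4bit(
          4096, 4096,
          bias=False,
          compute_dtype=torch.bfloat16,
          quant_type="nf4",
          quant_storage=torch.bfloat16,  # assumed keyword introduced by this PR; default is uint8
      )
      ```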
  11. 15 Jan, 2024 1 commit
  12. 12 Jan, 2024 3 commits
  13. 08 Jan, 2024 2 commits
  14. 07 Jan, 2024 1 commit
  15. 02 Jan, 2024 5 commits
  16. 01 Jan, 2024 10 commits