1. 20 Sep, 2024 2 commits
  2. 14 Aug, 2024 1 commit
  3. 06 Aug, 2024 1 commit
  4. 15 Jul, 2024 1 commit
  5. 30 May, 2024 1 commit
    •
      FIX Make Int8Params deepcopy-able · ed99b3c1
      Benjamin Bossan authored
      This requires implementing the __deepcopy__ method in Int8Params.
      Moreover, the Linear8bitLt constructor assigned instance attributes
      to the class itself; that is now fixed as well.
      
      Please review carefully that this does not impact existing code.
      
      Tests that I ran:
      
      - pytest tests/test_linear8bitlt.py
      - in PEFT: python -m pytest -m "single_gpu_tests and bitsandbytes" tests/test_gpu_examples.py
      - in PEFT: python -m pytest -m "single_gpu_tests and bitsandbytes" tests/test_common_gpu.py
      - in transformers: RUN_SLOW=1 python -m pytest tests/quantization/bnb -x
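      
      A minimal sketch of the deepcopy pattern at issue, using a hypothetical
      QuantParam class rather than the real Int8Params: a torch.nn.Parameter
      subclass that carries extra quantization state has to rebuild itself in
      __deepcopy__ so that state survives copy.deepcopy.
      
        # Hypothetical sketch; not the actual bitsandbytes implementation.
        import copy
        import torch

        class QuantParam(torch.nn.Parameter):
            def __new__(cls, data, requires_grad=False, quant_state=None):
                obj = super().__new__(cls, data, requires_grad=requires_grad)
                # Extra state that a default Parameter deepcopy would drop.
                obj.quant_state = quant_state
                return obj

            def __deepcopy__(self, memo):
                # Rebuild the subclass explicitly so the extra attribute is copied too.
                new = type(self)(
                    copy.deepcopy(self.data, memo),
                    requires_grad=self.requires_grad,
                    quant_state=copy.deepcopy(self.quant_state, memo),
                )
                memo[id(self)] = new
                return new

        p = QuantParam(torch.zeros(2, 2), quant_state={"scale": 1.0})
        q = copy.deepcopy(p)
        assert q.quant_state == p.quant_state and q.quant_state is not p.quant_state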
  6. 29 May, 2024 1 commit
  7. 02 Apr, 2024 1 commit
  8. 29 Mar, 2024 1 commit
  9. 13 Mar, 2024 2 commits
  10. 11 Mar, 2024 2 commits
  11. 06 Mar, 2024 1 commit
  12. 05 Mar, 2024 1 commit
  13. 21 Feb, 2024 3 commits
  14. 05 Feb, 2024 1 commit
  15. 01 Feb, 2024 3 commits
  16. 30 Jan, 2024 1 commit
    •
      Ruff fixes (#984) · 706ec24d
      Aarni Koskela authored
      
      
      * Adjust Ruff configuration
      
      * do not autofix always
      * be less strict around tests and benchmarks
      * adjust ignores for now
      
      * Ruff: autofix I and F401
      
      * Apply ruff autofixes
      
      * Fix RUF013 complaint
      
      * Fix mutable default in replace_linear
      
      * Don't use bare except
      
      * Wrap bitsandbytes.__main__ entrypoint in function; fix "sensible" typo
      
      * Fix ruff B008 (function call in arguments)
      
      * Add ruff noqas as suitable
      
      * Fix RUF005 (splat instead of concatenating)
      
      * Fix B018 (useless expression)
      
      * Add pre-commit configuration + GitHub Actions lint workflow
      
      * Fix unused `e` in bitsandbytes/__main__.py
      
      * fix merge conflict resolution error
      
      * run pre-commit hook
      
      ---------
      Co-authored-by: Titus <9048635+Titus-von-Koeller@users.noreply.github.com>
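      
      The mutable-default, bare-except, and RUF005 items above correspond to
      patterns like the following before/after sketch (illustrative only, not
      the actual bitsandbytes diffs; names such as replace_linear_fixed are
      made up for the example).
      
        # B006: a mutable default argument is shared across calls.
        def replace_linear_buggy(model, skip=[]):
            skip.append(model)                    # mutates the shared default list
            return skip

        def replace_linear_fixed(model, skip=None):
            skip = [] if skip is None else skip   # fresh list per call
            skip.append(model)
            return skip

        # E722: catch a specific exception instead of a bare `except:`.
        try:
            import triton  # noqa: F401
        except ImportError:
            triton = None

        # RUF005: unpack with a splat instead of concatenating lists.
        base_args = ["--quiet"]
        args_before = base_args + ["--verbose"]
        args_after = [*base_args, "--verbose"]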
  17. 24 Jan, 2024 1 commit
  18. 17 Jan, 2024 1 commit
    •
      Initial FSDP Support for QLoRA Finetuning (#970) · dcfb6f81
      Benjamin Warner authored
      
      
      This PR adds initial FSDP support for training QLoRA models. It enables basic FSDP and CPU offload support; low-memory training via FSDP's sync_module_states option is not yet supported.
      
      This PR builds on #840 (commit 8278fca) and the BNB FSDP work by @TimDettmers and @Titus-von-Koeller.
      
      An example of using this PR to finetune QLoRA models with FSDP can be found in the demo repo: AnswerDotAi/fsdp_qlora.
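      
      A hedged sketch of the storage-dtype idea (the quant_storage keyword name
      and the model wiring below are assumptions based on this description, not
      a verbatim excerpt from the PR): pack the 4-bit weights into the same
      dtype as the trainable parameters so FSDP can flat-shard everything
      uniformly.
      
        import torch
        import bitsandbytes as bnb
        from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

        qlinear = bnb.nn.Linear4bit(
            4096, 4096,
            compute_dtype=torch.bfloat16,   # dtype used for the dequantized matmuls
            quant_type="nf4",
            quant_storage=torch.bfloat16,   # store the packed 4-bit weights as bf16
        )                                   # so FSDP can shard them with bf16 LoRA params

        # model = ...  # a module mixing Linear4bit layers and trainable LoRA adapters
        # sharded = FSDP(model, use_orig_params=True)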
      
      * Minimal changes for fp32 4bit storage from BNB commit 8278fca
      
      * Params4bit with selectable storage dtype
      
      * possible fix for double quantizing linear weight & quant storage dtype
      
      * minor fixes in Params4bit for peft tests
      
      * remove redundant
      
      * add float16
      
      * update test
      
      * Remove float16 quant cast as there are fp32, bf16, & fp16 quant kernels
      
      ---------
      Co-authored-by: Kerem Turgutlu <keremturgutlu@gmail.com>
  19. 08 Jan, 2024 1 commit
  20. 03 Dec, 2023 1 commit
  21. 10 Nov, 2023 1 commit
  22. 09 Nov, 2023 1 commit
  23. 08 Nov, 2023 1 commit
  24. 02 Nov, 2023 5 commits
  25. 04 Aug, 2023 1 commit
  26. 22 Jul, 2023 1 commit
  27. 19 Jul, 2023 1 commit
  28. 17 Jul, 2023 1 commit
  29. 14 Jul, 2023 1 commit