1. 24 Jan, 2024 2 commits
  2. 23 Jan, 2024 4 commits
  3. 17 Jan, 2024 1 commit
      Initial FSDP Support for QLoRA Finetuning (#970) · dcfb6f81
      Benjamin Warner authored
      
      
This PR adds initial FSDP support for training QLoRA models. It enables basic FSDP and CPU offload support; low-memory training via FSDP's sync_module_states option is not yet supported.
      
This PR builds on #840 (commit 8278fca) and BNB FSDP by @TimDettmers and @Titus-von-Koeller.
      
      An example of using this PR to finetune QLoRA models with FSDP can be found in the demo repo: AnswerDotAi/fsdp_qlora.
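      To make the change concrete, here is a minimal sketch of the new storage-dtype selection (the `quant_storage` keyword follows this PR's Linear4bit/Params4bit changes; the layer size and dtypes are illustrative, not from the PR). FSDP flattens every parameter in a wrapped module into a single buffer of one dtype, so storing the packed 4-bit weights as bf16 lets quantized layers shard alongside ordinary bf16 parameters:

      ```python
      import torch
      import bitsandbytes as bnb

      # A 4-bit NF4 linear layer whose packed quantized weights are *stored*
      # as bfloat16 rather than the default uint8. Matching the storage dtype
      # to the rest of the model is what allows FSDP's flat-parameter sharding
      # to wrap quantized and unquantized layers together.
      layer = bnb.nn.Linear4bit(
          4096, 4096,
          bias=False,
          compute_dtype=torch.bfloat16,  # dtype used for the dequantized matmul
          quant_type="nf4",
          quant_storage=torch.bfloat16,  # selectable storage dtype added by this PR
      )
      layer = layer.cuda()  # quantization happens on the move to GPU
      ```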
      
      * Minimal changes for fp32 4bit storage from BNB commit 8278fca
      
      * Params4bit with selectable storage dtype
      
      * possible fix for double quantizing linear weight & quant storage dtype
      
      * minor fixes in Params4bit for peft tests
      
      * remove redundant
      
      * add float16
      
      * update test
      
      * Remove float16 quant cast as there are fp32, bf16, & fp16 quant kernels
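      
      A short sketch of what that last bullet means in practice (assuming a CUDA device; `quantize_4bit`/`dequantize_4bit` are the functional API these kernels back): inputs can be quantized in their native dtype, with no intermediate float16 cast:

      ```python
      import torch
      from bitsandbytes.functional import quantize_4bit, dequantize_4bit

      # Quant kernels exist for fp32, bf16, and fp16, so each dtype round-trips
      # directly without first being cast to float16.
      for dtype in (torch.float32, torch.bfloat16, torch.float16):
          w = torch.randn(4096, 4096, dtype=dtype, device="cuda")
          q, state = quantize_4bit(w, quant_type="nf4")
          w_hat = dequantize_4bit(q, state)
          assert w_hat.dtype == dtype  # dequantizes back to the input dtype
      ```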
      
      ---------
Co-authored-by: Kerem Turgutlu <keremturgutlu@gmail.com>
  4. 15 Jan, 2024 1 commit
  5. 12 Jan, 2024 3 commits
  6. 08 Jan, 2024 2 commits
  7. 07 Jan, 2024 1 commit
  8. 02 Jan, 2024 5 commits
  9. 01 Jan, 2024 13 commits
  10. 19 Dec, 2023 3 commits
  11. 11 Dec, 2023 5 commits