1. 08 May, 2025 1 commit
  2. 05 May, 2025 1 commit
  3. 01 May, 2025 1 commit
  4. 21 Apr, 2025 1 commit
  5. 15 Apr, 2025 1 commit
  6. 09 Apr, 2025 1 commit
  7. 08 Apr, 2025 1 commit
  8. 04 Mar, 2025 1 commit
  9. 24 Feb, 2025 1 commit
  10. 21 Jan, 2025 1 commit
  11. 23 Dec, 2024 1 commit
  12. 08 Nov, 2024 1 commit
  13. 31 Oct, 2024 1 commit
  14. 22 Oct, 2024 1 commit
  15. 14 Sep, 2024 1 commit
  16. 14 Aug, 2024 1 commit
  17. 26 Jul, 2024 1 commit
• [Chore] add `LoraLoaderMixin` to the inits (#8981) · d87fe95f
      Sayak Paul authored
      
      
* introduce LoraBaseMixin to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      * loraloadermixin.
      
      ---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
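The `fuse_lora`/`unfuse_lora` scope clarified in the commit above comes down to folding the low-rank update into the base weight and subtracting it back out. A minimal stdlib-only sketch of that arithmetic (the real mixin operates on torch tensors; the function signatures here are illustrative, not the diffusers API):

```python
def fuse_lora(weight, lora_up, lora_down, scale=1.0):
    """Fold the low-rank delta (lora_up @ lora_down) into the base weight."""
    rank = len(lora_down)
    return [
        [
            weight[i][j]
            + scale * sum(lora_up[i][r] * lora_down[r][j] for r in range(rank))
            for j in range(len(weight[0]))
        ]
        for i in range(len(weight))
    ]

def unfuse_lora(weight, lora_up, lora_down, scale=1.0):
    """Inverse of fuse_lora: subtract the same delta to restore the base weight."""
    return fuse_lora(weight, lora_up, lora_down, scale=-scale)

base = [[1.0, 0.0], [0.0, 1.0]]
up = [[1.0], [2.0]]    # shape (2, rank=1)
down = [[0.5, 0.5]]    # shape (rank=1, 2)
fused = fuse_lora(base, up, down)
restored = unfuse_lora(fused, up, down)
```

Fusing and then unfusing with the same scale recovers the base weight exactly, which is why the commit tracks `num_fused_loras` rather than copying weights around.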
  18. 25 Jul, 2024 2 commits
• Revert "[LoRA] introduce LoraBaseMixin to promote reusability." (#8976) · 62863bb1
      YiYi Xu authored
      Revert "[LoRA] introduce LoraBaseMixin to promote reusability. (#8774)"
      
      This reverts commit 527430d0.
• [LoRA] introduce LoraBaseMixin to promote reusability. (#8774) · 527430d0
      Sayak Paul authored
      
      
* introduce LoraBaseMixin to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      ---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
  19. 24 Jun, 2024 1 commit
  20. 18 Jun, 2024 1 commit
  21. 13 Jun, 2024 1 commit
  22. 29 May, 2024 1 commit
  23. 20 May, 2024 1 commit
  24. 02 Apr, 2024 1 commit
• 7529 do not disable autocast for cuda devices (#7530) · 8e963d1c
      Bagheera authored
      
      
      * 7529 do not disable autocast for cuda devices
      
      * Remove typecasting error check for non-mps platforms, as a correct autocast implementation makes it a non-issue
      
      * add autocast fix to other training examples
      
      * disable native_amp for dreambooth (sdxl)
      
      * disable native_amp for pix2pix (sdxl)
      
      * remove tests from remaining files
      
      * disable native_amp on huggingface accelerator for every training example that uses it
      
      * convert more usages of autocast to nullcontext, make style fixes
      
      * make style fixes
      
      * style.
      
      * Empty-Commit
      
      ---------
Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
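The "convert usages of autocast to nullcontext" change above follows a common pattern: select `torch.autocast` only when mixed precision is actually wanted, and a no-op context otherwise, rather than calling autocast with `enabled=False`. A stdlib-only sketch of that selection (`maybe_autocast` is an illustrative name, not a diffusers helper, and the `torch.autocast` call is left as a comment so the sketch runs without torch):

```python
import contextlib

def maybe_autocast(device_type, enabled):
    """Return an autocast-style context when enabled, else a no-op context.

    In the real training scripts the enabled branch would return
    torch.autocast(device_type); a nullcontext stands in for it here.
    """
    if not enabled:
        # No mixed precision requested: do nothing instead of
        # autocast(enabled=False), which misbehaved on some platforms.
        return contextlib.nullcontext()
    # Real scripts: return torch.autocast(device_type)
    return contextlib.nullcontext()

with maybe_autocast("cuda", enabled=False):
    result = 2 + 2
```

The point of the pattern is that the disabled path never touches the autocast machinery at all, which is what made the MPS typecasting error check unnecessary.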
  25. 28 Mar, 2024 1 commit
  26. 26 Mar, 2024 1 commit
  27. 18 Mar, 2024 1 commit
  28. 13 Mar, 2024 1 commit
  29. 08 Mar, 2024 1 commit
  30. 07 Mar, 2024 2 commits
  31. 04 Mar, 2024 1 commit
• [training scripts] add tags of diffusers-training (#7206) · 8da360aa
      Linoy Tsaban authored
      * add tags for diffusers training
      
      * add tags for diffusers training
      
      * add tags for diffusers training
      
      * add tags for diffusers training
      
      * add tags for diffusers training
      
      * add tags for diffusers training
      
* add dora tags for dreambooth lora scripts
      
      * style
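The tagging commit above appends standard tags to the model-card metadata each training script pushes to the Hub. A rough sketch of how such a tag list might be assembled (`training_tags` and every tag except `diffusers-training` are illustrative, not the script's actual helper):

```python
def training_tags(library, technique=None):
    """Assemble model-card tags for a training run.

    'diffusers-training' mirrors the tag added in this commit; the
    technique tag (e.g. 'lora' or 'dora') mirrors the per-script additions.
    """
    tags = [library, f"{library}-training"]
    if technique:
        tags.append(technique)
    return tags

tags = training_tags("diffusers", technique="lora")
```

Keeping the tags in one helper is what let the follow-up "add dora tags" change touch only the dreambooth lora scripts.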
  32. 03 Mar, 2024 1 commit
• Support EDM-style training in DreamBooth LoRA SDXL script (#7126) · ccb93dca
      Sayak Paul authored
      
      
      * add: dreambooth lora script for Playground v2.5
      
      * fix: kwarg
      
      * address suraj's comments.
      
      * Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * apply suraj's suggestion
      
* incorporate changes in the canonical script.
      
      * tracker naming
      
      * fix: schedule determination
      
      * add: two simple tests
      
      * remove playground script
      
      * note about edm-style training
      
      * address pedro's comments.
      
      * address part of Suraj's comments.
      
      * Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * remove guidance_scale.
      
      * use mse_loss.
      
      * add comments for preconditioning.
      
      * quality
      
      * Update examples/dreambooth/train_dreambooth_lora_sdxl.py
Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * tackle v-pred.
      
      * Empty-Commit
      
      * support edm for sdxl too.
      
      * address suraj's comments.
      
      * Empty-Commit
      
      ---------
Co-authored-by: Suraj Patil <surajp815@gmail.com>
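The preconditioning that the "add comments for preconditioning" bullet refers to is the EDM (Karras et al.) formulation, where the denoiser's input, output, and skip connection are scaled by sigma-dependent coefficients. A sketch of those coefficients (the `sigma_data=0.5` default is illustrative; check the script for the exact value it uses):

```python
import math

def edm_precondition(sigma, sigma_data=0.5):
    """EDM-style preconditioning coefficients for noise level `sigma`."""
    denom = sigma**2 + sigma_data**2
    c_skip = sigma_data**2 / denom          # weight of the noisy input
    c_out = sigma * sigma_data / math.sqrt(denom)  # weight of the model output
    c_in = 1.0 / math.sqrt(denom)           # input scaling
    return c_skip, c_out, c_in

# At sigma == 0 the model output is ignored (c_out == 0) and the
# input passes straight through (c_skip == 1).
c_skip, c_out, c_in = edm_precondition(0.0)
```

These coefficients are why the script can use a plain `mse_loss` on the preconditioned prediction, as the commit's "use mse_loss" bullet notes.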
  33. 01 Mar, 2024 1 commit
  34. 28 Feb, 2024 1 commit
  35. 26 Feb, 2024 1 commit
  36. 09 Feb, 2024 2 commits
  37. 08 Feb, 2024 1 commit