1. 26 Jul, 2024 1 commit
    • [Chore] add `LoraLoaderMixin` to the inits (#8981) · d87fe95f
      Sayak Paul authored
      
      
      * introduce LoraBaseMixin to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      * loraloadermixin.
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
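For context, these loader mixins back the pipeline-level LoRA API (`load_lora_weights`, `set_adapters`, `fuse_lora`, `unfuse_lora`). A minimal usage sketch; the LoRA repo id and adapter name below are placeholders, not part of this commit:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# These methods come from the LoRA loader mixins the commit reorganizes.
pipe.load_lora_weights("some-user/my-lora", adapter_name="style")  # placeholder repo id
pipe.set_adapters(["style"], adapter_weights=[0.8])

pipe.fuse_lora()    # merge LoRA weights into the base modules for faster inference
image = pipe("a photo of an astronaut riding a horse").images[0]
pipe.unfuse_lora()  # restore the original, unfused weights
```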
  2. 25 Jul, 2024 2 commits
    • Revert "[LoRA] introduce LoraBaseMixin to promote reusability." (#8976) · 62863bb1
      YiYi Xu authored
      Revert "[LoRA] introduce LoraBaseMixin to promote reusability. (#8774)"
      
      This reverts commit 527430d0.
    • [LoRA] introduce LoraBaseMixin to promote reusability. (#8774) · 527430d0
      Sayak Paul authored
      
      
      * introduce LoraBaseMixin to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
  3. 08 Jul, 2024 1 commit
  4. 03 Jul, 2024 2 commits
  5. 25 Jun, 2024 1 commit
  6. 21 Jun, 2024 1 commit
  7. 20 Jun, 2024 1 commit
  8. 18 Jun, 2024 1 commit
    • [LoRA] text encoder: read the ranks for all the attn modules (#8324) · 298ce679
      Gæros authored
      
      
      * [LoRA] text encoder: read the ranks for all the attn modules
      
       * In addition to out_proj, read the ranks of adapters for q_proj, k_proj, and v_proj
      
       * Allow missing adapters (UNet already supports this)
      
      * ruff format loaders.lora
      
      * [LoRA] add tests for partial text encoders LoRAs
      
      * [LoRA] update test_simple_inference_with_partial_text_lora to be deterministic
      
      * [LoRA] comment justifying test_simple_inference_with_partial_text_lora
      
      * style
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
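The change reads the adapter rank for each attention projection when loading a text-encoder LoRA, rather than assuming a single rank. A rough sketch of how ranks can be read off a LoRA state dict; the file name is a placeholder and key layouts differ between formats:

```python
from safetensors.torch import load_file

state_dict = load_file("my_lora.safetensors")  # placeholder file name

ranks = {}
for key, tensor in state_dict.items():
    # Depending on the serialization format, the down-projection keys end in
    # ".down.weight" (diffusers), ".lora_A.weight" (peft) or ".lora_down.weight"
    # (kohya). The rank is the first dimension of that weight.
    if key.endswith((".down.weight", ".lora_A.weight", ".lora_down.weight")):
        module = key.rsplit(".", 2)[0]
        ranks[module] = tensor.shape[0]

print(ranks)
```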
  9. 12 Jun, 2024 1 commit
  10. 29 May, 2024 1 commit
  11. 24 May, 2024 1 commit
  12. 13 May, 2024 1 commit
  13. 09 May, 2024 1 commit
  14. 07 May, 2024 1 commit
  15. 12 Apr, 2024 1 commit
  16. 29 Mar, 2024 2 commits
    • Implements Blockwise lora (#7352) · 03024468
      UmerHA authored
      
      
      * Initial commit
      
      * Implemented block lora
      
      - implemented block lora
      - updated docs
      - added tests
      
      * Finishing up
      
      * Reverted unrelated changes made by make style
      
      * Fixed typo
      
      * Fixed bug + Made text_encoder_2 scalable
      
      * Integrated some review feedback
      
      * Incorporated review feedback
      
      * Fix tests
      
      * Made every module configurable
      
      * Adapted to new lora test structure
      
      * Final cleanup
      
      * Some more final fixes
      
      - Included examples in `using_peft_for_inference.md`
      - Added hint that only attns are scaled
      - Removed NoneTypes
      - Added test to check mismatching lens of adapter names / weights raise error
      
      * Update using_peft_for_inference.md
      
      * Update using_peft_for_inference.md
      
      * Make style, quality, fix-copies
      
      * Updated tutorial; warning if scale/adapter mismatch
      
      * floats are forwarded as-is; changed tutorial scale
      
      * make style, quality, fix-copies
      
      * Fixed typo in tutorial
      
      * Moved some warnings into `lora_loader_utils.py`
      
      * Moved scale/lora mismatch warnings back
      
      * Integrated final review suggestions
      
      * Empty commit to trigger CI
      
      * Reverted empty commit to trigger CI
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
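Blockwise LoRA lets `set_adapters()` take a nested dict of scales instead of a single float, so each text encoder and individual UNet blocks can be weighted separately (per the PR, only the attention modules are scaled). A sketch in the spirit of the examples the PR added to `using_peft_for_inference.md`; repo id, adapter name, and scale values are illustrative:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("some-user/my-lora", adapter_name="pixel")  # placeholder repo id

scales = {
    "text_encoder": 0.5,                 # scale for the first text encoder's LoRA
    "text_encoder_2": 0.5,               # SDXL's second text encoder
    "unet": {
        "down": 0.9,                     # one scale for all down blocks
        "mid": 1.0,
        "up": {
            "block_0": 0.6,              # a single scale for a whole up block...
            "block_1": [0.4, 0.8, 1.0],  # ...or one scale per transformer layer
        },
    },
}
pipe.set_adapters("pixel", scales)
image = pipe("a pixel-art castle").images[0]
```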
    • Memory clean up on all Slow Tests (#7514) · 4d39b748
      Dhruv Nair authored
      
      
      * update
      
      * update
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
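The commit message does not show the diff, but memory cleanup in slow test suites typically follows this pattern (a sketch, not the literal change; the test class name is made up): free Python objects and the CUDA cache between tests so accumulated pipelines do not exhaust GPU memory.

```python
import gc
import unittest

import torch


class SomeSlowPipelineTests(unittest.TestCase):  # illustrative name
    def tearDown(self):
        super().tearDown()
        gc.collect()                 # drop lingering references to pipelines/models
        torch.cuda.empty_cache()     # release cached CUDA allocations
```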
  17. 27 Mar, 2024 1 commit
  18. 26 Mar, 2024 1 commit
    • feat: support DoRA LoRA from community (#7371) · 699dfb08
      Sayak Paul authored
      * feat: support dora loras from community
      
      * safe-guard dora operations under peft version.
      
      * pop use_dora when False
      
      * make dora lora from kohya work.
      
      * fix: kohya conversion utils.
      
      * add a fast test for DoRA compatibility.
      
      * add a nightly test.
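DoRA checkpoints come out of PEFT training runs configured with `use_dora=True`; this commit makes such community checkpoints (including kohya-format ones, via the conversion utils) loadable. A sketch of the training-side config, assuming a recent peft release as the version-guard bullet suggests; rank and target modules are illustrative:

```python
from peft import LoraConfig

dora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    use_dora=True,  # weight-decomposed LoRA (DoRA); needs a recent peft version
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # illustrative attention modules
)

# On the inference side, such a checkpoint goes through the usual entry point, e.g.:
# pipe.load_lora_weights("some-user/my-dora-lora")  # placeholder repo id
```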
  19. 25 Mar, 2024 1 commit
  20. 20 Mar, 2024 1 commit
  21. 19 Mar, 2024 1 commit
  22. 27 Feb, 2024 2 commits
  23. 13 Feb, 2024 1 commit
  24. 09 Feb, 2024 1 commit
  25. 08 Feb, 2024 1 commit
  26. 22 Jan, 2024 1 commit
  27. 05 Jan, 2024 2 commits
  28. 04 Jan, 2024 5 commits
  29. 03 Jan, 2024 2 commits
    • [LoRA deprecation] handle rest of the stuff related to deprecated lora stuff. (#6426) · d7001400
      Sayak Paul authored
      * handle rest of the stuff related to deprecated lora stuff.
      
      * fix: copies
      
      * don't modify the UNet in-place.
      
      * fix: temporal autoencoder.
      
      * manually remove lora layers.
      
      * don't copy unet.
      
      * alright
      
      * remove lora attn processors from unet3d
      
      * fix: unet3d.
      
      * style
      
      * Empty-Commit
    • [LoRA] add: test to check if peft loras are loadable in non-peft envs. (#6400) · 2e4dc3e2
      Sayak Paul authored
      * add: test to check if peft loras are loadable in non-peft envs.
      
      * add torch_device appropriately.
      
      * fix: get_dummy_inputs().
      
      * test logits.
      
      * rename
      
      * debug
      
      * debug
      
      * fix: generator
      
      * new assertion values after fixing the seed.
      
      * shape
      
      * remove print statements and settle this.
      
      * to update values.
      
      * change values when lora config is initialized under a fixed seed.
      
      * update colab link
      
      * update notebook link
      
      * sanity restored by getting the exact same values without peft.
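The property the test pins down is that a LoRA produced with peft installed should still load in an environment without peft. A minimal sketch using diffusers' real availability helper; the repo id is a placeholder:

```python
from diffusers.utils import is_peft_available

print("peft installed:", is_peft_available())

# Either way, the same pipeline entry point is exercised, e.g.:
# pipe.load_lora_weights("some-user/lora-trained-with-peft")  # placeholder repo id
```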
  30. 02 Jan, 2024 1 commit
    • [LoRA] Remove the use of deprecated LoRA functionalities such as `LoRAAttnProcessor` (#6369) · 61f6c547
      Sayak Paul authored
      * start deprecating loraattn.
      
      * fix
      
      * wrap into unet_lora_state_dict
      
      * utilize text_encoder_lora_params
      
      * utilize text_encoder_attn_modules
      
      * debug
      
      * debug
      
      * remove print
      
      * don't use text encoder for test_stable_diffusion_lora
      
      * load the procs.
      
      * set_default_attn_processor
      
      * fix: set_default_attn_processor call.
      
      * fix: lora_components[unet_lora_params]
      
      * checking for 3d.
      
      * 3d.
      
      * more fixes.
      
      * debug
      
      * debug
      
      * debug
      
      * debug
      
      * more debug
      
      * more debug
      
      * more debug
      
      * more debug
      
      * more debug
      
      * more debug
      
      * hack.
      
      * remove comments and prep for a PR.
      
      * appropriate set_lora_weights()
      
      * fix
      
      * fix: test_unload_lora_sd
      
      * fix: test_unload_lora_sd
      
      * use default attention processors.
      
      * debug
      
      * debug nan
      
      * debug nan
      
      * debug nan
      
      * use NaN instead of inf
      
      * remove comments.
      
      * fix: test_text_encoder_lora_state_dict_unchanged
      
      * attention processor default
      
      * default attention processors.
      
      * default
      
      * style
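The deprecation direction here is away from the `LoRAAttnProcessor` classes and toward peft-backed adapters plus state-dict conversion helpers. A hedged sketch of that replacement pattern; the checkpoint id, rank, and target modules are illustrative, not taken from this commit:

```python
from peft import LoraConfig
from peft.utils import get_peft_model_state_dict

from diffusers import UNet2DConditionModel
from diffusers.utils import convert_state_dict_to_diffusers

# Illustrative checkpoint id; any UNet2DConditionModel works the same way.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# Attach a LoRA adapter through peft instead of swapping in LoRAAttnProcessor.
unet.add_adapter(
    LoraConfig(r=4, lora_alpha=4, target_modules=["to_q", "to_k", "to_v", "to_out.0"])
)

# Convert the peft state dict into the diffusers serialization format.
unet_lora_sd = convert_state_dict_to_diffusers(get_peft_model_state_dict(unet))

# Reset to plain attention processors when the adapter is no longer needed.
unet.set_default_attn_processor()
```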