1. 28 Dec, 2023 1 commit
  2. 24 Dec, 2023 1 commit
  3. 18 Dec, 2023 1 commit
  4. 07 Dec, 2023 1 commit
• [`PEFT`] Adapt example scripts to use PEFT (#5388) · c2717317
      Younes Belkada authored
      * adapt example scripts to use PEFT
      
      * Update examples/text_to_image/train_text_to_image_lora.py
      
      * fix
      
      * add for SDXL
      
      * oops
      
      * make sure to install peft
      
      * fix
      
      * fix
      
      * fix dreambooth and lora
      
      * more fixes
      
      * add peft to requirements.txt
      
      * fix
      
      * final fix
      
      * add peft version in requirements
      
      * remove comment
      
      * change variable names
      
      * add few lines in readme
      
      * add to reqs
      
      * style
      
      * fix issues
      
      * fix lora dreambooth xl tests
      
      * init_lora_weights to gaussian and add out proj where missing
      
* amend requirements.

* amend requirements.txt
      
      * add correct peft versions
      
      ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
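A minimal sketch of what "adapt example scripts to use PEFT" looks like in practice, assuming `peft` is installed and a diffusers UNet that exposes `add_adapter`; the rank, alpha, and target module names are illustrative defaults, not values fixed by the PR:

```python
from peft import LoraConfig

# Sketch only: the scripts build a PEFT LoraConfig instead of hand-rolled
# LoRA attention processors. "gaussian" matches the "init_lora_weights to
# gaussian" commit above; r / lora_alpha / target_modules are assumptions.
unet_lora_config = LoraConfig(
    r=4,
    lora_alpha=4,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)

# unet.add_adapter(unet_lora_config)  # attach to a UNet2DConditionModel
```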
  5. 01 Dec, 2023 1 commit
  6. 27 Nov, 2023 1 commit
  7. 21 Nov, 2023 1 commit
• [Lora] Separate logic (#5809) · 13d73d93
      Patrick von Platen authored
* [Lora] Separate logic (×3)

* add comments to explain the code better (×2)
  8. 17 Nov, 2023 1 commit
  9. 14 Nov, 2023 1 commit
• [Refactor] refactor `loaders.py` to make it cleaner and leaner. (#5771) · ded93f79
      Sayak Paul authored
      * refactor loaders.py to make it cleaner and leaner.
      
      * refactor loaders init
      
      * inits.
      
      * textual inversion to the init.
      
      * inits.
      
      * remove certain modules from the main init.
      
      * AttnProcsLayers
      
      * fix imports
      
      * avoid circular import.
      
      * fix circular import pt 2.
      
      * address PR comments
      
      * imports
      
      * fix: imports.
      
      * remove from main init for avoiding circular deps.
      
      * remove spurious deps.
      
      * fix-copies.
      
      * fix imports.
      
* more debug (×2)

* Apply suggestions from code review (×2)
      
      ---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
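The circular-import commits above point at a standard fix: defer the models import to call time, or guard it behind `TYPE_CHECKING`. A minimal sketch, assuming a diffusers version where `LoRALinearLayer` lives in `diffusers.models.lora`:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # seen only by static type checkers, never executed,
    # so it cannot form an import cycle
    from diffusers.models.lora import LoRALinearLayer


def build_lora_layer(in_features: int, out_features: int, rank: int = 4):
    # runtime import deferred until first call, after both modules have
    # finished importing; this breaks loaders -> models -> loaders
    from diffusers.models.lora import LoRALinearLayer

    return LoRALinearLayer(in_features, out_features, rank=rank)
```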
  10. 10 Nov, 2023 1 commit
  11. 06 Nov, 2023 1 commit
  12. 16 Oct, 2023 1 commit
  13. 11 Oct, 2023 1 commit
  14. 08 Oct, 2023 1 commit
  15. 02 Oct, 2023 1 commit
• fix: how training resume logs are printed. (#5117) · d56825e4
      Sayak Paul authored
* fix: how training resume logs are printed.
      
      * propagate changes to text-to-image scripts.
      
      * propagate changes to instructpix2pix.
      
      * propagate changes to dreambooth
      
      * propagate changes to custom diffusion and instructpix2pix
      
      * propagate changes to kandinsky
      
      * propagate changes to textual inv.
      
      * debug
      
      * fix: checkpointing.
      
* debug (×3)
      
* back to square one
      
* debug (×2)
      
      * change condition order.
      
* debug (×4)
      
      * revert to original
      
      * clean
      
      ---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
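The scripts touched by the "propagate changes to ..." commits share one resume flow; the sketch below is a simplified reconstruction (not the exact code) of how the newest `checkpoint-*` directory becomes the step counters that the resume log prints:

```python
import os


def find_resume_state(output_dir: str, num_update_steps_per_epoch: int):
    # pick the checkpoint-<global_step> directory with the highest step
    dirs = [d for d in os.listdir(output_dir) if d.startswith("checkpoint")]
    if not dirs:
        return None, 0, 0  # nothing to resume from: start fresh
    path = max(dirs, key=lambda d: int(d.split("-")[1]))
    global_step = int(path.split("-")[1])
    first_epoch = global_step // num_update_steps_per_epoch
    # optimizer steps already consumed inside the interrupted epoch
    resume_step = global_step % num_update_steps_per_epoch
    return path, first_epoch, resume_step
```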
  16. 14 Sep, 2023 1 commit
  17. 08 Sep, 2023 1 commit
  18. 25 Aug, 2023 1 commit
  19. 17 Aug, 2023 2 commits
  20. 04 Aug, 2023 1 commit
  21. 01 Aug, 2023 1 commit
  22. 28 Jul, 2023 1 commit
• [Feat] Support SDXL Kohya-style LoRA (#4287) · 4a4cdd6b
      Sayak Paul authored
      * sdxl lora changes.
      
      * better name replacement.
      
      * better replacement.
      
* debugging (×5)
      
      * remove print.
      
      * print state dict keys.
      
      * print
      
* distinguish better
      
      * debuggable.
      
* fix: tests
      
      * fix: arg from training script.
      
      * access from class.
      
      * run style
      
      * debug
      
      * save intermediate
      
      * some simplifications for SDXL LoRA
      
      * styling
      
      * unet config is not needed in diffusers format.
      
      * fix: dynamic SGM block mapping for SDXL kohya loras (#4322)
      
      * Use lora compatible layers for linear proj_in/proj_out (#4323)
      
      * improve condition for using the sgm_diffusers mapping
      
      * informative comment.
      
* load compatible keys and embedding layer mapping.
      
      * Get SDXL 1.0 example lora to load
      
      * simplify
      
* specify ranks and hidden sizes.
      
      * better handling of k rank and hidden
      
      * debug
      
      * debug
      
      * debug
      
      * debug
      
      * debug
      
      * fix: alpha keys
      
      * add check for handling LoRAAttnAddedKVProcessor
      
      * sanity comment
      
      * modifications for text encoder SDXL
      
* debugging (×27)
      
* up (×6)
      
* unneeded comments. (×2)

* kwargs for the other attention processors. (×2)
      
* debugging (×4)
      
      * improve
      
* debugging (×2)
      
      * more print
      
      * Fix alphas
      
* debugging (×6)
      
      * clean up
      
      * clean up.
      
      * debugging
      
      * fix: text
      
      ---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Batuhan Taskaya <batuhan@python.org>
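Much of the churn above ("fix: alpha keys", rank and hidden-size handling) revolves around the per-module `alpha` stored in kohya-ss checkpoints. A sketch of just that scaling rule, with illustrative shapes:

```python
from typing import Optional

import torch


def lora_delta(down: torch.Tensor, up: torch.Tensor,
               network_alpha: Optional[float]) -> torch.Tensor:
    # down: (rank, in_features), up: (out_features, rank). kohya-ss files
    # store an alpha per module; the update is scaled by alpha / rank so
    # that changing the rank does not change the update's magnitude.
    rank = down.shape[0]
    scale = network_alpha / rank if network_alpha is not None else 1.0
    return scale * (up @ down)  # (out_features, in_features) weight delta
```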
  23. 27 Jul, 2023 1 commit
  24. 26 Jul, 2023 1 commit
  25. 25 Jul, 2023 1 commit
  26. 18 Jul, 2023 1 commit
  27. 13 Jul, 2023 1 commit
  28. 11 Jul, 2023 1 commit
  29. 09 Jul, 2023 2 commits
• make style · 4a3e5748
      Patrick von Platen authored
• Refactor LoRA (#3778) · c2a28c34
      Will Berman authored
      * refactor to support patching LoRA into T5
      
      instantiate the lora linear layer on the same device as the regular linear layer
      
      get lora rank from state dict
      
      tests
      
      fmt
      
      can create lora layer in float32 even when rest of model is float16
      
      fix loading model hook
      
      remove load_lora_weights_ and T5 dispatching
      
      remove Unet#attn_processors_state_dict
      
      docstrings
      
      * text encoder monkeypatch class method
      
      * fix test
      
      ---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
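Two rules from the message, create the LoRA layer on the same device as the regular linear layer, and allow it to stay in float32 while the rest of the model is float16, in a minimal sketch (an illustrative helper, not the PR's actual class):

```python
import torch
import torch.nn as nn


def make_lora_pair(base: nn.Linear, rank: int = 4) -> nn.Sequential:
    # same device as the wrapped layer, but fixed float32 precision
    device = base.weight.device
    down = nn.Linear(base.in_features, rank, bias=False,
                     device=device, dtype=torch.float32)
    up = nn.Linear(rank, base.out_features, bias=False,
                   device=device, dtype=torch.float32)
    nn.init.normal_(down.weight, std=1.0 / rank)
    nn.init.zeros_(up.weight)  # zero init: output unchanged until trained
    return nn.Sequential(down, up)
```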
  30. 07 Jul, 2023 1 commit
  31. 15 Jun, 2023 2 commits
  32. 08 Jun, 2023 2 commits
  33. 06 Jun, 2023 1 commit
• [LoRA] feat: add lora attention processor for pt 2.0. (#3594) · 8669e831
      Sayak Paul authored
      * feat: add lora attention processor for pt 2.0.
      
      * explicit context manager for SDPA.
      
      * switch to flash attention
      
      * make shapes compatible to work optimally with SDPA.
      
      * fix: circular import problem.
      
      * explicitly specify the flash attention kernel in sdpa
      
      * fall back to efficient attention context manager.
      
      * remove explicit dispatch.
      
      * fix: removed processor.
      
      * fix: remove optional from type annotation.
      
      * feat: make changes regarding LoRAAttnProcessor2_0.
      
      * remove confusing warning.
      
      * formatting.
      
      * relax tolerance for PT 2.0
      
      * fix: loading message.
      
      * remove unnecessary logging.
      
      * add: entry to the docs.
      
      * add: network_alpha argument.
      
      * relax tolerance.
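The SDPA commits map onto two PyTorch 2.0 APIs: `F.scaled_dot_product_attention` and the kernel-selection context manager (first pinned to flash attention, then relaxed to allow efficient attention, finally removed). A runnable sketch with illustrative shapes:

```python
import torch
import torch.nn.functional as F

# illustrative sizes: (batch, heads, seq_len, head_dim)
q = torch.randn(1, 8, 64, 40)
k = torch.randn(1, 8, 64, 40)
v = torch.randn(1, 8, 64, 40)

# the "explicit context manager for SDPA": restrict which kernels may run;
# later commits dropped this and let PyTorch dispatch on its own
with torch.backends.cuda.sdp_kernel(enable_flash=True,
                                    enable_mem_efficient=True,
                                    enable_math=True):
    out = F.scaled_dot_product_attention(q, k, v)
```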
  34. 05 Jun, 2023 2 commits
  35. 02 Jun, 2023 1 commit
• Support Kohya-ss style LoRA file format (in a limited capacity) (#3437) · 8e552bb4
      Takuma Mori authored
      * add _convert_kohya_lora_to_diffusers
      
      * make style
      
      * add scaffold
      
      * match result: unet attention only
      
      * fix monkey-patch for text_encoder
      
      * with CLIPAttention
      
While the terrible images are no longer produced, the results still do not match those from the hook version. This may be due to the network_alpha value not being set.
      
      * add to support network_alpha
      
      * generate diff image
      
      * fix monkey-patch for text_encoder
      
      * add test_text_encoder_lora_monkey_patch()
      
      * verify that it's okay to release the attn_procs
      
      * fix closure version
      
      * add comment
      
      * Revert "fix monkey-patch for text_encoder"
      
      This reverts commit bb9c61e6faecc1935c9c4319c77065837655d616.
      
      * Fix to reuse utility functions
      
      * make LoRAAttnProcessor targets to self_attn
      
      * fix LoRAAttnProcessor target
      
      * make style
      
      * fix split key
      
      * Update src/diffusers/loaders.py
      
      * remove TEXT_ENCODER_TARGET_MODULES loop
      
      * add print memory usage
      
      * remove test_kohya_loras_scaffold.py
      
      * add: doc on LoRA civitai
      
      * remove print statement and refactor in the doc.
      
      * fix state_dict test for kohya-ss style lora
      
      * Apply suggestions from code review
Co-authored-by: Takuma Mori <takuma104@gmail.com>
      
      ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
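A schematic of the text-encoder monkey patch these commits iterate on ("fix closure version", `network_alpha`): each CLIP attention projection's forward is wrapped in a closure that adds the kohya-scaled LoRA update. The helper name and shapes are illustrative:

```python
import torch
import torch.nn as nn


def monkey_patch_linear(base: nn.Linear, down: torch.Tensor,
                        up: torch.Tensor, network_alpha: float) -> None:
    # down: (rank, in_features), up: (out_features, rank)
    scale = network_alpha / down.shape[0]  # alpha / rank, as in kohya-ss
    # bind per layer; the "fix closure version" commit guards against
    # late binding when many layers are patched in a loop
    original_forward = base.forward

    def lora_forward(x: torch.Tensor) -> torch.Tensor:
        return original_forward(x) + scale * (x @ down.t() @ up.t())

    base.forward = lora_forward


layer = nn.Linear(8, 8)
monkey_patch_linear(layer, torch.zeros(4, 8), torch.zeros(8, 4), network_alpha=4.0)
```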