1. 10 Jul, 2025 1 commit
  2. 04 Jul, 2025 1 commit
• FIX set_lora_device when target layers differ (#11844) · 25279175
  Benjamin Bossan authored
      
      
      * FIX set_lora_device when target layers differ
      
      Resolves #11833
      
      Fixes a bug that occurs after calling set_lora_device when multiple LoRA
      adapters are loaded that target different layers.
      
Note: technically, the accompanying test does not require a GPU, because
the bug is triggered even if the parameters are already on the target
device, i.e. loading on CPU and then setting the device to CPU is enough
to reproduce it. However, such a no-op move may be optimized away in the
future, so I decided to test with a GPU.
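For context, a minimal sketch of the scenario the fix targets (the repository IDs and adapter names below are illustrative placeholders, not taken from the issue):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Two adapters that target different sets of layers (placeholder repo IDs).
pipe.load_lora_weights("user/lora-unet-only", adapter_name="unet_only")
pipe.load_lora_weights("user/lora-unet-and-text-encoder", adapter_name="full")

# Before this fix, moving one adapter could fail because not every adapter
# has modules on all of the layers being iterated over.
pipe.set_lora_device(adapter_names=["unet_only"], device="cuda")
```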
      
      * Update docstring to warn about device mismatch
      
      * Extend docstring with an example
      
      * Fix docstring
      
      ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
  3. 27 Jun, 2025 1 commit
  4. 25 Jun, 2025 1 commit
  5. 24 Jun, 2025 2 commits
  6. 19 Jun, 2025 2 commits
  7. 13 Jun, 2025 1 commit
  8. 27 May, 2025 1 commit
  9. 19 May, 2025 1 commit
  10. 06 May, 2025 1 commit
  11. 08 Apr, 2025 1 commit
• [LoRA] Implement hot-swapping of LoRA (#9453) · fb544996
  Benjamin Bossan authored
      * [WIP][LoRA] Implement hot-swapping of LoRA
      
      This PR adds the possibility to hot-swap LoRA adapters. It is WIP.
      
      Description
      
As of now, users can already load multiple LoRA adapters. They can
offload existing adapters or unload (i.e. delete) them. However, they
cannot "hotswap" adapters yet, i.e. substitute the weights of one LoRA
adapter with those of another without creating a separate adapter.

Generally, hot-swapping may not appear super useful, but when the model
is compiled, it is necessary to prevent recompilation. See #9279 for more
context.
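As a rough usage sketch of what this enables (checkpoint names are placeholders; the hotswap argument is the one introduced by this PR):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the first LoRA, then compile the denoiser once.
pipe.load_lora_weights("user/lora-A", adapter_name="default")
pipe.unet = torch.compile(pipe.unet, fullgraph=True)

# Swap in the second LoRA's weights in place: same adapter name, no new
# adapter is created, and (ideally) no recompilation is triggered.
pipe.load_lora_weights("user/lora-B", hotswap=True, adapter_name="default")
```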
      
      Caveats
      
To hot-swap one LoRA adapter for another, the two adapters should target
exactly the same layers, and their "hyper-parameters" should be
identical. For instance, the LoRA alpha has to be the same: given that we
keep the alpha from the first adapter, the LoRA scaling would otherwise
be incorrect for the second adapter.
      
      Theoretically, we could override the scaling dict with the alpha values
      derived from the second adapter's config, but changing the dict will
      trigger a guard for recompilation, defeating the main purpose of the
      feature.
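To make the alpha constraint concrete, the effective LoRA update scales with alpha / r (schematic illustration, not diffusers code):

```python
# A LoRA layer applies roughly: delta_W = (alpha / r) * B @ A
# If we hot-swap only A and B but keep adapter 1's alpha, adapter 2's
# update ends up scaled by the wrong factor.
r = 8
scaling_kept = 16 / r      # alpha from adapter 1
scaling_needed = 32 / r    # alpha that adapter 2 was trained with
assert scaling_kept != scaling_needed  # the swapped-in weights would be mis-scaled
```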
      
      I also found that compilation flags can have an impact on whether this
      works or not. E.g. when passing "reduce-overhead", there will be errors
      of the type:
      
      > input name: arg861_1. data pointer changed from 139647332027392 to
      139647331054592
      
      I don't know enough about compilation to determine whether this is
      problematic or not.
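For reference, the flag in question is the torch.compile mode (sketch; as said, whether the data-pointer error above is actually problematic is left open):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# "reduce-overhead" uses CUDA graphs, which record static input data pointers;
# the "data pointer changed" error quoted above comes from that check.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```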
      
      Current state
      
This is obviously WIP right now, intended to collect feedback and to
discuss which direction to take this. If this PR turns out to be useful,
the hot-swapping functions will be added to PEFT itself and can be
imported here (or a separate copy is kept in diffusers to avoid requiring
a minimum PEFT version for this feature).
      
      Moreover, more tests need to be added to better cover this feature,
      although we don't necessarily need tests for the hot-swapping
      functionality itself, since those tests will be added to PEFT.
      
      Furthermore, as of now, this is only implemented for the unet. Other
      pipeline components have yet to implement this feature.
      
      Finally, it should be properly documented.
      
      I would like to collect feedback on the current state of the PR before
      putting more time into finalizing it.
      
      * Reviewer feedback
      
      * Reviewer feedback, adjust test
      
      * Fix, doc
      
      * Make fix
      
      * Fix for possible g++ error
      
      * Add test for recompilation w/o hotswapping
      
      * Make hotswap work
      
      Requires https://github.com/huggingface/peft/pull/2366
      
      More changes to make hotswapping work. Together with the mentioned PEFT
      PR, the tests pass for me locally.
      
      List of changes:
      
      - docstring for hotswap
      - remove code copied from PEFT, import from PEFT now
      - adjustments to PeftAdapterMixin.load_lora_adapter (unfortunately, some
        state dict renaming was necessary, LMK if there is a better solution)
      - adjustments to UNet2DConditionLoadersMixin._process_lora: LMK if this
        is even necessary or not, I'm unsure what the overall relationship is
        between this and PeftAdapterMixin.load_lora_adapter
      - also in UNet2DConditionLoadersMixin._process_lora, I saw that there is
        no LoRA unloading when loading the adapter fails, so I added it
        there (in line with what happens in PeftAdapterMixin.load_lora_adapter)
      - rewritten tests to avoid shelling out, make the test more precise by
        making sure that the outputs align, parametrize it
- also checked the pipeline code mentioned in this comment:
  https://github.com/huggingface/diffusers/pull/9453#issuecomment-2418508871;
  when running this inside the with
  torch._dynamo.config.patch(error_on_recompile=True) context, there is
  no error, so I think hotswapping is now working with pipelines.
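A sketch of that check (check_no_recompile is a hypothetical helper, not part of the PR):

```python
import torch

def check_no_recompile(pipe, prompt: str):
    # Any recompilation of the compiled denoiser during this call raises
    # instead of passing silently, which is what hot-swapping must avoid.
    with torch._dynamo.config.patch(error_on_recompile=True):
        return pipe(prompt, num_inference_steps=20).images[0]
```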
      
      * Address reviewer feedback:
      
      - Revert deprecated method
      - Fix PEFT doc link to main
      - Don't use private function
      - Clarify magic numbers
      - Add pipeline test
      
      Moreover:
      - Extend docstrings
      - Extend existing test for outputs != 0
      - Extend existing test for wrong adapter name
      
      * Change order of test decorators
      
      parameterized.expand seems to ignore skip decorators if added in last
      place (i.e. innermost decorator).
      
      * Split model and pipeline tests
      
Also increase test coverage by targeting conv2d layers as well (support
for which was added recently in the PEFT PR).
      
      * Reviewer feedback: Move decorator to test classes
      
      ... instead of having them on each test method.
      
      * Apply suggestions from code review
Co-authored-by: hlky <hlky@hlky.ac>
      
      * Reviewer feedback: version check, TODO comment
      
      * Add enable_lora_hotswap method
      
      * Reviewer feedback: check _lora_loadable_modules
      
      * Revert changes in unet.py
      
      * Add possibility to ignore enabled at wrong time
      
      * Fix docstrings
      
      * Log possible PEFT error, test
      
      * Raise helpful error if hotswap not supported
      
      I.e. for the text encoder
      
      * Formatting
      
      * More linter
      
      * More ruff
      
      * Doc-builder complaint
      
      * Update docstring:
      
      - mention no text encoder support yet
      - make it clear that LoRA is meant
      - mention that same adapter name should be passed
      
      * Fix error in docstring
      
      * Update more methods with hotswap argument
      
      - SDXL
      - SD3
      - Flux
      
      No changes were made to load_lora_into_transformer.
      
      * Add hotswap argument to load_lora_into_transformer
      
      For SD3 and Flux. Use shorter docstring for brevity.
      
      * Extend docstrings
      
      * Add version guards to tests
      
      * Formatting
      
      * Fix LoRA loading call to add prefix=None
      
      See:
      https://github.com/huggingface/diffusers/pull/10187#issuecomment-2717571064
      
      
      
      * Run make fix-copies
      
      * Add hot swap documentation to the docs
      
      * Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
  12. 12 Mar, 2025 1 commit
  13. 10 Mar, 2025 1 commit
  14. 19 Feb, 2025 1 commit
• [LoRA] make `set_adapters()` robust on silent failures. (#9618) · 6fe05b9b
  Sayak Paul authored
      * make set_adapters() robust on silent failures.
      
      * fixes to tests
      
      * flaky decorator.
      
      * fix
      
      * flaky to sd3.
      
      * remove warning.
      
      * sort
      
      * quality
      
      * skip test_simple_inference_with_text_denoiser_multi_adapter_block_lora
      
      * skip testing unsupported features.
      
      * raise warning instead of error.
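For reference, a minimal set_adapters() usage sketch (adapter names and checkpoints are placeholders; after this change, problems such as adapter names a component cannot honor surface as a warning rather than failing silently):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.load_lora_weights("user/lora-style", adapter_name="style")
pipe.load_lora_weights("user/lora-detail", adapter_name="detail")

# Activate both adapters with per-adapter weights.
pipe.set_adapters(["style", "detail"], adapter_weights=[0.8, 0.5])
```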
  15. 09 Jan, 2025 1 commit
  16. 02 Nov, 2024 1 commit
  17. 31 Oct, 2024 2 commits
  18. 27 Sep, 2024 1 commit
  19. 13 Sep, 2024 1 commit
  20. 26 Jul, 2024 1 commit
• [Chore] add `LoraLoaderMixin` to the inits (#8981) · d87fe95f
  Sayak Paul authored
      
      
* introduce `LoraBaseMixin` to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      * loraloadermixin.
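Given the fuse_lora / unfuse_lora scope clarified above, a minimal usage sketch (checkpoint name is a placeholder):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.load_lora_weights("user/lora-style", adapter_name="style")

# Merge the LoRA weights into the base weights for faster inference,
# then restore the originals when the adapter is no longer needed.
pipe.fuse_lora(lora_scale=0.7)
# ... run inference ...
pipe.unfuse_lora()
```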
      
      ---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
  21. 25 Jul, 2024 2 commits
• Revert "[LoRA] introduce LoraBaseMixin to promote reusability." (#8976) · 62863bb1
  YiYi Xu authored
      Revert "[LoRA] introduce LoraBaseMixin to promote reusability. (#8774)"
      
      This reverts commit 527430d0.
• [LoRA] introduce LoraBaseMixin to promote reusability. (#8774) · 527430d0
  Sayak Paul authored
      
      
* introduce `LoraBaseMixin` to promote reusability.
      
      * up
      
      * add more tests
      
      * up
      
      * remove comments.
      
      * fix fuse_nan test
      
      * clarify the scope of fuse_lora and unfuse_lora
      
      * remove space
      
      * rewrite fuse_lora a bit.
      
      * feedback
      
      * copy over load_lora_into_text_encoder.
      
      * address dhruv's feedback.
      
      * fix-copies
      
      * fix issubclass.
      
      * num_fused_loras
      
      * fix
      
      * fix
      
      * remove mapping
      
      * up
      
      * fix
      
      * style
      
      * fix-copies
      
      * change to SD3TransformerLoRALoadersMixin
      
      * Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      * handle wuerstchen
      
      * up
      
      * move lora to lora_pipeline.py
      
      * up
      
      * fix-copies
      
      * fix documentation.
      
      * comment set_adapters().
      
      * fix-copies
      
      * fix set_adapters() at the model level.
      
      * fix?
      
      * fix
      
      ---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
  22. 03 Jul, 2024 2 commits