    [LoRA] Implement hot-swapping of LoRA (#9453) · fb544996
    Benjamin Bossan authored
    * [WIP][LoRA] Implement hot-swapping of LoRA
    
    This PR adds the possibility to hot-swap LoRA adapters. It is WIP.
    
    Description
    
    As of now, users can already load multiple LoRA adapters. They can
    offload existing adapters or they can unload them (i.e. delete them).
    However, they cannot "hotswap" adapters yet, i.e. substitute the weights
    from one LoRA adapter with the weights of another, without the need to
    create a separate LoRA adapter.
    
    Generally, hot-swapping may not seem particularly useful, but when the
    model is compiled, it is necessary to prevent recompilation. See #9279
    for more context.
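
    Roughly, the intended usage looks like this (the model and LoRA repo ids
    are placeholders; the hotswap argument is what this PR adds):

        import torch
        from diffusers import DiffusionPipeline

        pipe = DiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
        ).to("cuda")

        # Load the first adapter and compile the unet once.
        pipe.load_lora_weights("user/lora-one", adapter_name="default_0")
        pipe.unet = torch.compile(pipe.unet)
        pipe("a prompt").images[0]

        # Hot-swap: replace the weights of the existing adapter in place instead
        # of loading a second adapter under a new name, so the compiled model
        # does not need to be recompiled.
        pipe.load_lora_weights("user/lora-two", hotswap=True, adapter_name="default_0")
        pipe("a prompt").images[0]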
    
    Caveats
    
    To hot-swap one LoRA adapter for another, the two adapters should target
    exactly the same layers, and the "hyper-parameters" of the two adapters
    should be identical. For instance, the LoRA alpha has to be the same:
    since we keep the alpha from the first adapter, the LoRA scaling would
    otherwise be incorrect for the second adapter.
    
    Theoretically, we could override the scaling dict with the alpha values
    derived from the second adapter's config, but changing the dict will
    trigger a guard for recompilation, defeating the main purpose of the
    feature.
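
    To make the scaling point concrete (the numbers are made up):

        # LoRA applies delta_W = (alpha / r) * B @ A, so the effective scaling
        # is determined by the adapter config.
        r = 8
        alpha_first = 8    # adapter already loaded; its scaling is kept
        alpha_second = 16  # adapter whose weights get swapped in

        scaling_kept = alpha_first / r        # 1.0, what the model keeps using
        scaling_expected = alpha_second / r   # 2.0, what the new weights expect

        # With different alphas, the swapped-in weights are applied at the wrong
        # scale (here off by a factor of 2); updating the scaling dict instead
        # would trigger a recompilation guard.
        assert scaling_expected == 2 * scaling_kept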
    
    I also found that compilation flags can have an impact on whether this
    works or not. E.g. when passing "reduce-overhead", there will be errors
    of the type:
    
    > input name: arg861_1. data pointer changed from 139647332027392 to
    139647331054592
    
    I don't know enough about compilation to determine whether this is
    problematic or not.
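
    For context, "reduce-overhead" is a torch.compile mode that uses CUDA
    graphs; a minimal sketch of the two variants (the module is just a
    stand-in, and the explanation in the comment is my guess):

        import torch
        from torch import nn

        model = nn.Linear(16, 16).to("cuda")

        # Default mode: no CUDA graphs.
        compiled_default = torch.compile(model)

        # "reduce-overhead" records CUDA graphs with fixed data pointers, which
        # presumably is why replacing the weight tensors afterwards trips the
        # "data pointer changed" error quoted above.
        compiled_cudagraphs = torch.compile(model, mode="reduce-overhead")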
    
    Current state
    
    This is obviously WIP right now to collect feedback and discuss which
    direction to take this. If this PR turns out to be useful, the
    hot-swapping functions will be added to PEFT itself and can be imported
    here (or a separate copy is kept in diffusers to avoid requiring a
    minimum PEFT version to use this feature).
    
    Moreover, more tests need to be added to better cover this feature,
    although we don't necessarily need tests for the hot-swapping
    functionality itself, since those tests will be added to PEFT.
    
    Furthermore, as of now, this is only implemented for the unet. Other
    pipeline components have yet to implement this feature.
    
    Finally, it should be properly documented.
    
    I would like to collect feedback on the current state of the PR before
    putting more time into finalizing it.
    
    * Reviewer feedback
    
    * Reviewer feedback, adjust test
    
    * Fix, doc
    
    * Make fix
    
    * Fix for possible g++ error
    
    * Add test for recompilation w/o hotswapping
    
    * Make hotswap work
    
    Requires https://github.com/huggingface/peft/pull/2366
    
    More changes to make hotswapping work. Together with the mentioned PEFT
    PR, the tests pass for me locally.
    
    List of changes:
    
    - docstring for hotswap
    - remove code copied from PEFT, import from PEFT now
    - adjustments to PeftAdapterMixin.load_lora_adapter (unfortunately, some
      state dict renaming was necessary, LMK if there is a better solution)
    - adjustments to UNet2DConditionLoadersMixin._process_lora: LMK if this
      is even necessary or not, I'm unsure what the overall relationship is
      between this and PeftAdapterMixin.load_lora_adapter
    - also in UNet2DConditionLoadersMixin._process_lora, I saw that there is
      no LoRA unloading when loading the adapter fails, so I added it
      there (in line with what happens in PeftAdapterMixin.load_lora_adapter)
    - rewritten tests to avoid shelling out, make the test more precise by
      making sure that the outputs align, parametrize it
    - also checked the pipeline code mentioned in this comment:
      https://github.com/huggingface/diffusers/pull/9453#issuecomment-2418508871;
      when running this inside the with
      torch._dynamo.config.patch(error_on_recompile=True) context, there is
      no error, so I think hotswapping is now working with pipelines (see the
      sketch of this check below)
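
    For reference, the check works like this (toy module instead of the real
    pipeline):

        import torch
        from torch import nn

        model = torch.compile(nn.Linear(4, 4))
        x = torch.randn(2, 4)
        model(x)  # the first call triggers compilation

        # Inside this context, any recompilation raises instead of happening
        # silently, so a clean run means the compiled graph was reused.
        with torch._dynamo.config.patch(error_on_recompile=True):
            model(x)                    # ok: same guards, the graph is reused
            # model(torch.randn(8, 4))  # would raise: new shape forces a recompile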
    
    * Address reviewer feedback:
    
    - Revert deprecated method
    - Fix PEFT doc link to main
    - Don't use private function
    - Clarify magic numbers
    - Add pipeline test
    
    Moreover:
    - Extend docstrings
    - Extend existing test for outputs != 0
    - Extend existing test for wrong adapter name
    
    * Change order of test decorators
    
    parameterized.expand seems to ignore skip decorators if added in last
    place (i.e. innermost decorator).
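
    Concretely, the two orderings look like this (the skip reason is
    illustrative and the decorators in the real tests may differ):

        import unittest
        from parameterized import parameterized

        class HotswapTests(unittest.TestCase):
            # Ordering used after this change: the skip decorator is listed
            # first (outermost) and parameterized.expand last.
            @unittest.skip("requires a newer PEFT version")
            @parameterized.expand([(0,), (1,)])
            def test_skip_respected(self, rank_index):
                ...

            # Ordering that caused the issue: the skip decorator in last place
            # (innermost); per the observation above, the expanded tests then
            # run anyway.
            @parameterized.expand([(0,), (1,)])
            @unittest.skip("requires a newer PEFT version")
            def test_skip_seemingly_ignored(self, rank_index):
                ...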
    
    * Split model and pipeline tests
    
    Also increase test coverage by targeting conv2d layers (support for
    which was added recently in the PEFT PR).
    
    * Reviewer feedback: Move decorators to test classes
    
    ... instead of having them on each test method.
    
    * Apply suggestions from code review
    Co-authored-by: hlky <hlky@hlky.ac>
    
    * Reviewer feedback: version check, TODO comment
    
    * Add enable_lora_hotswap method
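
    Sketch of where the new method fits in (its arguments are not spelled out
    in this commit message, so none are passed here; ids are placeholders):

        import torch
        from diffusers import DiffusionPipeline

        pipe = DiffusionPipeline.from_pretrained("some/model", torch_dtype=torch.float16).to("cuda")

        # Opt in to hot-swapping before loading the first adapter and before
        # compiling, so later swaps can reuse the compiled model.
        pipe.enable_lora_hotswap()

        pipe.load_lora_weights("user/lora-one", adapter_name="default_0")
        pipe.unet = torch.compile(pipe.unet)
        pipe.load_lora_weights("user/lora-two", hotswap=True, adapter_name="default_0")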
    
    * Reviewer feedback: check _lora_loadable_modules
    
    * Revert changes in unet.py
    
    * Add possibility to ignore hotswap being enabled at the wrong time
    
    * Fix docstrings
    
    * Log possible PEFT error, test
    
    * Raise helpful error if hotswap not supported
    
    I.e. for the text encoder
    
    * Formatting
    
    * More linter
    
    * More ruff
    
    * Doc-builder complaint
    
    * Update docstring:
    
    - mention no text encoder support yet
    - make it clear that LoRA is meant
    - mention that same adapter name should be passed
    
    * Fix error in docstring
    
    * Update more methods with hotswap argument
    
    - SDXL
    - SD3
    - Flux
    
    No changes were made to load_lora_into_transformer.
    
    * Add hotswap argument to load_lora_into_transformer
    
    For SD3 and Flux. Use shorter docstring for brevity.
    
    * Extend docstrings
    
    * Add version guards to tests
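
    The guards are along these lines (a generic sketch; the minimum version
    and the helper name are made up, and the real tests may use an existing
    diffusers test utility instead):

        import unittest

        from packaging import version
        from peft import __version__ as peft_version

        # Hypothetical minimum version covering the PEFT-side hot-swapping
        # support from huggingface/peft#2366.
        MIN_PEFT_VERSION = "0.14.0"

        def require_peft_version(min_version):
            """Skip the test (or test class) unless the installed PEFT is new enough."""
            ok = version.parse(peft_version) >= version.parse(min_version)
            return unittest.skipUnless(ok, f"test requires peft>={min_version}")

        @require_peft_version(MIN_PEFT_VERSION)
        class LoraHotswappingTests(unittest.TestCase):
            ...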
    
    * Formatting
    
    * Fix LoRA loading call to add prefix=None
    
    See:
    https://github.com/huggingface/diffusers/pull/10187#issuecomment-2717571064

    * Run make fix-copies
    
    * Add hot swap documentation to the docs
    
    * Apply suggestions from code review
    Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
    
    ---------
    Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
    Co-authored-by: hlky <hlky@hlky.ac>
    Co-authored-by: YiYi Xu <yixu310@gmail.com>
    Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>