1. 25 Sep, 2025 1 commit
  2. 28 Aug, 2025 1 commit
  3. 18 Jun, 2025 1 commit
  4. 11 Jun, 2025 1 commit
  5. 10 Jun, 2025 1 commit
  6. 26 May, 2025 3 commits
  7. 15 Apr, 2025 1 commit
  8. 09 Apr, 2025 1 commit
  9. 08 Apr, 2025 1 commit
      [LoRA] Implement hot-swapping of LoRA (#9453) · fb544996
      Benjamin Bossan authored
      * [WIP][LoRA] Implement hot-swapping of LoRA
      
      This PR adds the possibility to hot-swap LoRA adapters. It is WIP.
      
      Description
      
      As of now, users can already load multiple LoRA adapters. They can
      offload existing adapters or they can unload them (i.e. delete them).
      However, they cannot "hotswap" adapters yet, i.e. substitute the weights
      from one LoRA adapter with the weights of another, without the need to
      create a separate LoRA adapter.
      
      Hot-swapping may not appear especially useful on its own, but when the
      model is compiled, it is necessary to prevent recompilation. See #9279
      for more context.
      
      Caveats
      
      To hot-swap one LoRA adapter for another, the two adapters must target
      exactly the same layers, and their "hyper-parameters" must be identical.
      For instance, the LoRA alpha has to be the same: since we keep the alpha
      from the first adapter, the LoRA scaling would otherwise be incorrect for
      the second adapter.
      
      Theoretically, we could override the scaling dict with the alpha values
      derived from the second adapter's config, but changing the dict will
      trigger a guard for recompilation, defeating the main purpose of the
      feature.
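The scaling concern above can be made concrete with a little arithmetic. This is only a sketch: `lora_scaling` is a stand-in for the `alpha / rank` rule LoRA uses, not code from this PR.

```python
# LoRA scales its low-rank update by alpha / rank. If hot-swapping keeps the
# first adapter's alpha, a second adapter trained with a different alpha ends
# up with the wrong scaling.
def lora_scaling(alpha: float, rank: int) -> float:
    return alpha / rank

first = lora_scaling(alpha=8, rank=8)    # what the compiled model keeps
second = lora_scaling(alpha=16, rank=8)  # what the second adapter actually needs
assert first == 1.0
assert second == 2.0
assert first != second  # swapped-in weights would be scaled by 1.0, not 2.0
```

This is why overriding the scaling dict would be needed for mismatched alphas, and why that override would trip a recompilation guard.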
      
      I also found that compilation flags can have an impact on whether this
      works or not. E.g. when passing "reduce-overhead", there will be errors
      of the type:
      
      > input name: arg861_1. data pointer changed from 139647332027392 to 139647331054592
      
      I don't know enough about compilation to determine whether this is
      problematic or not.
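The data-pointer error hints at why hot-swapping has to copy new weights into the existing tensors rather than replace them. A minimal pure-Python sketch of that in-place idea, with plain lists standing in for tensors and `id()` for the data pointer (illustration only, not the PR's implementation):

```python
class LoraLayer:
    def __init__(self, lora_A, lora_B):
        self.lora_A = list(lora_A)
        self.lora_B = list(lora_B)

    def hotswap(self, lora_A, lora_B):
        # Copy in place (analogous to tensor.copy_()) so the buffer identity --
        # the "data pointer" a compiled graph guards on -- never changes.
        self.lora_A[:] = lora_A
        self.lora_B[:] = lora_B

layer = LoraLayer([1.0, 2.0], [3.0, 4.0])
pointer = id(layer.lora_A)
layer.hotswap([5.0, 6.0], [7.0, 8.0])
assert layer.lora_A == [5.0, 6.0]
assert id(layer.lora_A) == pointer  # same buffer, new weights
```

If the swap instead rebound `self.lora_A` to a fresh object, the identity would change, which is the failure mode the quoted error describes.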
      
      Current state
      
      This is obviously WIP right now to collect feedback and discuss which
      direction to take this. If this PR turns out to be useful, the
      hot-swapping functions will be added to PEFT itself and can be imported
      here (or there is a separate copy in diffusers to avoid the need for a
      min PEFT version to use this feature).
      
      Moreover, more tests need to be added to better cover this feature,
      although we don't necessarily need tests for the hot-swapping
      functionality itself, since those tests will be added to PEFT.
      
      Furthermore, as of now, this is only implemented for the unet. Other
      pipeline components have yet to implement this feature.
      
      Finally, it should be properly documented.
      
      I would like to collect feedback on the current state of the PR before
      putting more time into finalizing it.
      
      * Reviewer feedback
      
      * Reviewer feedback, adjust test
      
      * Fix, doc
      
      * Make fix
      
      * Fix for possible g++ error
      
      * Add test for recompilation w/o hotswapping
      
      * Make hotswap work
      
      Requires https://github.com/huggingface/peft/pull/2366
      
      More changes to make hotswapping work. Together with the mentioned PEFT
      PR, the tests pass for me locally.
      
      List of changes:
      
      - docstring for hotswap
      - remove code copied from PEFT, import from PEFT now
      - adjustments to PeftAdapterMixin.load_lora_adapter (unfortunately, some
        state dict renaming was necessary, LMK if there is a better solution)
      - adjustments to UNet2DConditionLoadersMixin._process_lora: LMK if this
        is even necessary or not, I'm unsure what the overall relationship is
        between this and PeftAdapterMixin.load_lora_adapter
      - also in UNet2DConditionLoadersMixin._process_lora, I saw that there is
        no LoRA unloading when loading the adapter fails, so I added it
        there (in line with what happens in PeftAdapterMixin.load_lora_adapter)
      - rewritten tests to avoid shelling out, make the test more precise by
        making sure that the outputs align, parametrize it
      - also checked the pipeline code mentioned in this comment:
        https://github.com/huggingface/diffusers/pull/9453#issuecomment-2418508871;
      
      
        when running this inside the with
        torch._dynamo.config.patch(error_on_recompile=True) context, there is
        no error, so I think hotswapping is now working with pipelines.
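The `error_on_recompile` check above can be pictured with a toy compile cache. This is a pure-Python analogy of the guard behavior only; it is not how TorchDynamo works internally.

```python
def compile_with_guard(fn, error_on_recompile=False):
    cache = {}

    def wrapper(*args):
        # Specialize on a crude key; seeing a new key after the first call
        # plays the role of a "recompilation".
        key = tuple(type(a).__name__ for a in args)
        if key not in cache:
            if cache and error_on_recompile:
                raise RuntimeError(f"recompile triggered by new input key {key}")
            cache[key] = fn
        return cache[key](*args)

    return wrapper

compiled = compile_with_guard(lambda x: x * 2, error_on_recompile=True)
assert compiled(3) == 6   # first "compilation"
assert compiled(4) == 8   # same key: cache hit, no recompile
raised = False
try:
    compiled("a")         # new key would force a recompile
except RuntimeError:
    raised = True
assert raised
```

Running the pipeline inside such a guard and seeing no error is exactly the evidence that hot-swapping avoids recompilation.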
      
      * Address reviewer feedback:
      
      - Revert deprecated method
      - Fix PEFT doc link to main
      - Don't use private function
      - Clarify magic numbers
      - Add pipeline test
      
      Moreover:
      - Extend docstrings
      - Extend existing test for outputs != 0
      - Extend existing test for wrong adapter name
      
      * Change order of test decorators
      
      parameterized.expand seems to ignore skip decorators if added in last
      place (i.e. innermost decorator).
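The ordering issue reflects how Python applies stacked decorators bottom-up. A quick generic illustration (nothing parameterized-specific):

```python
def tag(name):
    def deco(fn):
        def wrapper():
            return [name] + fn()
        wrapper.__name__ = fn.__name__
        return wrapper
    return deco

@tag("outer")
@tag("inner")   # the innermost decorator is applied first
def f():
    return []

# The outer decorator wraps the result of the inner one:
assert f() == ["outer", "inner"]
```

So a skip decorator placed innermost is consumed before `parameterized.expand` sees the test, which is why the order had to change.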
      
      * Split model and pipeline tests
      
      Also increase test coverage by also targeting conv2d layers (support of
      which was added recently on the PEFT PR).
      
      * Reviewer feedback: Move decorator to test classes
      
      ... instead of having them on each test method.
      
      * Apply suggestions from code review
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Reviewer feedback: version check, TODO comment
      
      * Add enable_lora_hotswap method
      
      * Reviewer feedback: check _lora_loadable_modules
      
      * Revert changes in unet.py
      
      * Add possibility to ignore enabled at wrong time
      
      * Fix docstrings
      
      * Log possible PEFT error, test
      
      * Raise helpful error if hotswap not supported
      
      I.e. for the text encoder
      
      * Formatting
      
      * More linter
      
      * More ruff
      
      * Doc-builder complaint
      
      * Update docstring:
      
      - mention no text encoder support yet
      - make it clear that LoRA is meant
      - mention that same adapter name should be passed
      
      * Fix error in docstring
      
      * Update more methods with hotswap argument
      
      - SDXL
      - SD3
      - Flux
      
      No changes were made to load_lora_into_transformer.
      
      * Add hotswap argument to load_lora_into_transformer
      
      For SD3 and Flux. Use shorter docstring for brevity.
      
      * Extend docstrings
      
      * Add version guards to tests
      
      * Formatting
      
      * Fix LoRA loading call to add prefix=None
      
      See:
      https://github.com/huggingface/diffusers/pull/10187#issuecomment-2717571064
      
      
      
      * Run make fix-copies
      
      * Add hot swap documentation to the docs
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: hlky <hlky@hlky.ac>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
  10. 04 Mar, 2025 1 commit
      [tests] make tests device-agnostic (part 4) (#10508) · 7855ac59
      Fanli Lin authored
      
      
      * initial commit
      
      * fix empty cache
      
      * fix one more
      
      * fix style
      
      * update device functions
      
      * update
      
      * update
      
      * Update src/diffusers/utils/testing_utils.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update src/diffusers/utils/testing_utils.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update src/diffusers/utils/testing_utils.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update tests/pipelines/controlnet/test_controlnet.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update src/diffusers/utils/testing_utils.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update src/diffusers/utils/testing_utils.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update tests/pipelines/controlnet/test_controlnet.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * with gc.collect
      
      * update
      
      * make style
      
      * check_torch_dependencies
      
      * add mps empty cache
      
      * add changes
      
      * bug fix
      
      * enable on xpu
      
      * update more cases
      
      * revert
      
      * revert back
      
      * Update test_stable_diffusion_xl.py
      
      * Update tests/pipelines/stable_diffusion/test_stable_diffusion.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update tests/pipelines/stable_diffusion/test_stable_diffusion.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Apply suggestions from code review
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * add test marker
      
      ---------
      Co-authored-by: hlky <hlky@hlky.ac>
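The device-agnostic pattern in this commit routes per-backend operations through one helper instead of hardcoding `torch.cuda` calls in every test. A sketch of that dispatch shape; the names mirror helpers of this kind in `testing_utils`, but the bodies here are pure-Python stand-ins, not the real implementations:

```python
calls = []

# Stand-in registry; in a real test suite each entry would call
# torch.cuda.empty_cache / torch.xpu.empty_cache / torch.mps.empty_cache.
BACKEND_EMPTY_CACHE = {
    "cuda": lambda: calls.append("cuda.empty_cache"),
    "xpu": lambda: calls.append("xpu.empty_cache"),
    "mps": lambda: calls.append("mps.empty_cache"),
    "cpu": lambda: None,  # nothing to flush on CPU
}

def backend_empty_cache(device: str):
    if device not in BACKEND_EMPTY_CACHE:
        raise ValueError(f"unsupported device: {device}")
    BACKEND_EMPTY_CACHE[device]()

backend_empty_cache("xpu")
assert calls == ["xpu.empty_cache"]
```

Tests then depend only on a `torch_device` string, and enabling a new backend (as done here for XPU) means extending the registry rather than editing every test.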
  11. 21 Jan, 2025 1 commit
      [tests] make tests device-agnostic (part 3) (#10437) · ec37e209
      Fanli Lin authored
      
      
      * initial commit
      
      * fix empty cache
      
      * fix one more
      
      * fix style
      
      * update device functions
      
      * update
      
      * update
      
      * Update src/diffusers/utils/testing_utils.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update src/diffusers/utils/testing_utils.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update src/diffusers/utils/testing_utils.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update tests/pipelines/controlnet/test_controlnet.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update src/diffusers/utils/testing_utils.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update src/diffusers/utils/testing_utils.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update tests/pipelines/controlnet/test_controlnet.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * with gc.collect
      
      * update
      
      * make style
      
      * check_torch_dependencies
      
      * add mps empty cache
      
      * bug fix
      
      * Apply suggestions from code review
      
      ---------
      Co-authored-by: hlky <hlky@hlky.ac>
  12. 14 Jan, 2025 1 commit
      [FEAT] DDUF format (#10037) · fbff43ac
      Marc Sun authored
      
      
      * load and save dduf archive
      
      * style
      
      * switch to zip uncompressed
      
      * updates
      
      * Update src/diffusers/pipelines/pipeline_utils.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update src/diffusers/pipelines/pipeline_utils.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * first draft
      
      * remove print
      
      * switch to dduf_file for consistency
      
      * switch to huggingface hub api
      
      * fix log
      
      * add a basic test
      
      * Update src/diffusers/configuration_utils.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update src/diffusers/pipelines/pipeline_utils.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update src/diffusers/pipelines/pipeline_utils.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * fix
      
      * fix variant
      
      * change saving logic
      
      * DDUF - Load transformers components manually (#10171)
      
      * update hfh version
      
      * Load transformers components manually
      
      * load encoder from_pretrained with state_dict
      
      * working version with transformers and tokenizer !
      
      * add generation_config case
      
      * fix tests
      
      * remove saving for now
      
      * typing
      
      * need next version from transformers
      
      * Update src/diffusers/configuration_utils.py
      Co-authored-by: Lucain <lucain@huggingface.co>
      
      * check path correctly
      
      * Apply suggestions from code review
      Co-authored-by: Lucain <lucain@huggingface.co>
      
      * update
      
      * typing
      
      * remove check for subfolder
      
      * quality
      
      * revert setup changes
      
      * oops
      
      * more readable condition
      
      * add loading from the hub test
      
      * add basic docs.
      
      * Apply suggestions from code review
      Co-authored-by: Lucain <lucain@huggingface.co>
      
      * add example
      
      * add
      
      * make functions private
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * minor.
      
      * fixes
      
      * fix
      
      * change the precedence of parameterized.
      
      * error out when custom pipeline is passed with dduf_file.
      
      * updates
      
      * fix
      
      * updates
      
      * fixes
      
      * updates
      
      * fix xfail condition.
      
      * fix xfail
      
      * fixes
      
      * sharded checkpoint compat
      
      * add test for sharded checkpoint
      
      * add suggestions
      
      * Update src/diffusers/models/model_loading_utils.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * from suggestions
      
      * add class attributes to flag dduf tests
      
      * last one
      
      * fix logic
      
      * remove comment
      
      * revert changes
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Lucain <lucain@huggingface.co>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
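The "switch to zip uncompressed" step above points at what DDUF is at the container level: a Zip archive whose entries are stored rather than deflated, so files can be read directly out of the archive. A stdlib sketch of writing and checking such an archive (the member names are illustrative):

```python
import io
import zipfile

buf = io.BytesIO()
# ZIP_STORED writes entries uncompressed, as the DDUF format requires.
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_STORED) as zf:
    zf.writestr("model_index.json", "{}")
    zf.writestr("unet/config.json", "{}")

with zipfile.ZipFile(buf) as zf:
    assert "model_index.json" in zf.namelist()
    # Every entry is stored uncompressed:
    assert all(i.compress_type == zipfile.ZIP_STORED for i in zf.infolist())
```

In practice the loading goes through the huggingface_hub DDUF helpers rather than raw `zipfile`, but the on-disk shape is this: one flat archive holding the usual pipeline folder layout.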
  13. 19 Dec, 2024 1 commit
  14. 19 Oct, 2024 1 commit
  15. 28 Sep, 2024 1 commit
      [Core] fix variant-identification. (#9253) · 11542431
      Sayak Paul authored
      
      
      * fix variant-identification.
      
      * fix variant
      
      * fix sharded variant checkpoint loading.
      
      * Apply suggestions from code review
      
      * fixes.
      
      * more fixes.
      
      * remove print.
      
      * fixes
      
      * fixes
      
      * comments
      
      * fixes
      
      * apply suggestions.
      
      * hub_utils.py
      
      * fix test
      
      * updates
      
      * fixes
      
      * fixes
      
      * Apply suggestions from code review
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * updates.
      
      * remove patch file.
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
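A "variant" here is the tag embedded in a weight filename (for example `fp16`). A simplified sketch of that filename rule; the real helper also has to cope with sharded checkpoint names, which is exactly what this commit fixes:

```python
from typing import Optional

def add_variant(weights_name: str, variant: Optional[str] = None) -> str:
    # Insert the variant just before the file extension:
    # diffusion_pytorch_model.safetensors -> diffusion_pytorch_model.fp16.safetensors
    if variant is None:
        return weights_name
    parts = weights_name.split(".")
    return ".".join(parts[:-1] + [variant, parts[-1]])

assert add_variant("diffusion_pytorch_model.safetensors", "fp16") == (
    "diffusion_pytorch_model.fp16.safetensors"
)
assert add_variant("diffusion_pytorch_model.bin") == "diffusion_pytorch_model.bin"
```

Variant identification is then the inverse problem: given the files in a repo, deciding which ones belong to the requested variant and which are the non-variant defaults.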
  16. 19 Aug, 2024 1 commit
  17. 24 Jul, 2024 1 commit
  18. 26 Jun, 2024 1 commit
  19. 07 Jun, 2024 1 commit
      [Core] support saving and loading of sharded checkpoints (#7830) · 7d887118
      Sayak Paul authored
      
      
      * feat: support saving a model in sharded checkpoints.
      
      * feat: make loading of sharded checkpoints work.
      
      * add tests
      
      * cleanse the loading logic a bit more.
      
      * more resilience while loading from the Hub.
      
      * parallelize shard downloads by using snapshot_download().
      
      * default to a shard size.
      
      * more fix
      
      * Empty-Commit
      
      * debug
      
      * fix
      
      * quality
      
      * more debugging
      
      * fix more
      
      * initial comments from Benjamin
      
      * move certain methods to loading_utils
      
      * add test to check if the correct number of shards are present.
      
      * add a test to check if loading of sharded checkpoints from the Hub is okay
      
      * clarify the unit when passed as an int.
      
      * use hf_hub for sharding.
      
      * remove unnecessary code
      
      * remove unnecessary function
      
      * lucain's comments.
      
      * fixes
      
      * address high-level comments.
      
      * fix test
      
      * subfolder shenanigans.
      
      * Update src/diffusers/utils/hub_utils.py
      Co-authored-by: Lucain <lucainp@gmail.com>
      
      * Apply suggestions from code review
      Co-authored-by: Lucain <lucainp@gmail.com>
      
      * remove _huggingface_hub_version as not needed.
      
      * address more feedback.
      
      * add a test for local_files_only=True.
      
      * need hf hub to be at least 0.23.2
      
      * style
      
      * final comment.
      
      * clean up subfolder.
      
      * deal with suffixes in code.
      
      * _add_variant default.
      
      * use weights_name_pattern
      
      * remove add_suffix_keyword
      
      * clean up downloading of sharded ckpts.
      
      * don't return something special when using index.json
      
      * fix more
      
      * don't use bare except
      
      * remove comments and catch the errors better
      
      * fix a couple of things when using is_file()
      
      * empty
      
      ---------
      Co-authored-by: Lucain <lucainp@gmail.com>
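The two moving parts of sharded checkpoints are packing parameters into shards under a size limit and recording a `weight_map` like the one in an `index.json`. A simplified sketch of that idea; the packing strategy and the filename pattern below are illustrative, not the library's exact code:

```python
def shard_state_dict(param_sizes: dict, max_shard_size: int):
    # Greedily pack parameters into shards no larger than max_shard_size
    # (a single oversized parameter still gets its own shard).
    shards, current, current_size = [], {}, 0
    for name, size in param_sizes.items():
        if current and current_size + size > max_shard_size:
            shards.append(current)
            current, current_size = {}, 0
        current[name] = size
        current_size += size
    if current:
        shards.append(current)
    total = len(shards)
    # index.json-style mapping: parameter name -> shard file holding it.
    weight_map = {
        name: f"diffusion_pytorch_model-{i + 1:05d}-of-{total:05d}.safetensors"
        for i, shard in enumerate(shards)
        for name in shard
    }
    return shards, weight_map

shards, weight_map = shard_state_dict({"a": 3, "b": 3, "c": 5}, max_shard_size=5)
assert len(shards) == 3
assert weight_map["c"] == "diffusion_pytorch_model-00003-of-00003.safetensors"
```

Loading then reads the index first, fetches each referenced shard (which is what the `snapshot_download()` parallelization above speeds up), and merges the tensors back into one state dict.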
  20. 29 Mar, 2024 1 commit
  21. 26 Mar, 2024 1 commit
  22. 25 Mar, 2024 1 commit
  23. 14 Mar, 2024 1 commit
      [`Tests`] Update a deprecated parameter in test files and fix several typos (#7277) · 5d848ec0
      M. Tolga Cangöz authored
      * Add properties and `IPAdapterTesterMixin` tests for `StableDiffusionPanoramaPipeline`
      
      * Fix variable name typo and update comments
      
      * Update deprecated `output_type="numpy"` to "np" in test files
      
      * Discard changes to src/diffusers/pipelines/stable_diffusion_panorama/pipeline_stable_diffusion_panorama.py
      
      * Update test_stable_diffusion_panorama.py
      
      * Update numbers in README.md
      
      * Update get_guidance_scale_embedding method to use timesteps instead of w
      
      * Update number of checkpoints in README.md
      
      * Add type hints and fix var name
      
      * Fix PyTorch's convention for inplace functions
      
      * Fix a typo
      
      * Revert "Fix PyTorch's convention for inplace functions"
      
      This reverts commit 74350cf65b2c9aa77f08bec7937d7a8b13edb509.
      
      * Fix typos
      
      * Indent
      
      * Refactor get_guidance_scale_embedding method in LEditsPPPipelineStableDiffusionXL class
  24. 08 Feb, 2024 2 commits
  25. 27 Nov, 2023 1 commit
  26. 21 Nov, 2023 1 commit
  27. 10 Nov, 2023 1 commit
  28. 26 Oct, 2023 1 commit
  29. 26 Sep, 2023 1 commit
  30. 25 Sep, 2023 2 commits
  31. 11 Sep, 2023 2 commits
      Make sure Flax pipelines can be loaded into PyTorch (#4971) · 6bbee104
      Patrick von Platen authored
      * Make sure Flax pipelines can be loaded into PyTorch
      
      * add test
      
      * Update src/diffusers/pipelines/pipeline_utils.py
      Lazy Import for Diffusers (#4829) · b6e0b016
      Dhruv Nair authored
      
      
      * initial commit
      
      * move modules to import struct
      
      * add dummy objects and _LazyModule
      
      * add lazy import to schedulers
      
      * clean up unused imports
      
      * lazy import on models module
      
      * lazy import for schedulers module
      
      * add lazy import to pipelines module
      
      * lazy import altdiffusion
      
      * lazy import audio diffusion
      
      * lazy import audioldm
      
      * lazy import consistency model
      
      * lazy import controlnet
      
      * lazy import dance diffusion ddim ddpm
      
      * lazy import deepfloyd
      
      * lazy import kandinsky
      
      * lazy imports
      
      * lazy import semantic diffusion
      
      * lazy imports
      
      * lazy import stable diffusion
      
      * move sd output to its own module
      
      * clean up
      
      * lazy import t2iadapter
      
      * lazy import unclip
      
      * lazy import versatile and vq diffusion
      
      * lazy import vq diffusion
      
      * helper to fetch objects from modules
      
      * lazy import sdxl
      
      * lazy import txt2vid
      
      * lazy import stochastic karras
      
      * fix model imports
      
      * fix bug
      
      * lazy import
      
      * clean up
      
      * clean up
      
      * fixes for tests
      
      * fixes for tests
      
      * clean up
      
      * remove import of torch_utils from utils module
      
      * clean up
      
      * clean up
      
      * fix mistake import statement
      
      * dedicated modules for exporting and loading
      
      * remove testing utils from utils module
      
      * fixes from  merge conflicts
      
      * Update src/diffusers/pipelines/kandinsky2_2/__init__.py
      
      * fix docs
      
      * fix alt diffusion copied from
      
      * fix check dummies
      
      * fix more docs
      
      * remove accelerate import from utils module
      
      * add type checking
      
      * make style
      
      * fix check dummies
      
      * remove torch import from xformers check
      
      * clean up error message
      
      * fixes after upstream merges
      
      * dummy objects fix
      
      * fix tests
      
      * remove unused module import
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
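The `_LazyModule` mechanism these commits wire up can be sketched in a few lines: declare an import structure up front, but only import a submodule when one of its attributes is first accessed. A simplified stand-alone version (stdlib modules stand in for diffusers submodules; the real class also handles dummy objects and type checking):

```python
import importlib
import types

class LazyModule(types.ModuleType):
    # Simplified sketch of the lazy-import idea behind _LazyModule.
    def __init__(self, name, import_structure):
        super().__init__(name)
        # {submodule: [attribute, ...]} inverted to attribute -> submodule
        self._attr_to_module = {
            attr: module
            for module, attrs in import_structure.items()
            for attr in attrs
        }

    def __getattr__(self, name):
        if name not in self._attr_to_module:
            raise AttributeError(f"module {self.__name__} has no attribute {name}")
        value = getattr(importlib.import_module(self._attr_to_module[name]), name)
        setattr(self, name, value)  # cache: later lookups skip __getattr__
        return value

pkg = LazyModule("fake_pkg", {"math": ["sqrt"], "json": ["dumps"]})
assert pkg.sqrt(9.0) == 3.0          # math is only imported here
assert pkg.dumps([1, 2]) == "[1, 2]"
```

Because nothing heavy is imported until first use, `import diffusers` stays fast even as the pipelines module grows, which is the point of the long list of per-pipeline commits above.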
  32. 24 Aug, 2023 1 commit
  33. 17 Aug, 2023 2 commits
  34. 11 Aug, 2023 1 commit