- 14 Apr, 2025 1 commit
-
Beinsezii authored
Update community_projects.md https://github.com/huggingface/diffusers/discussions/11158#discussioncomment-12681691
-
- 13 Apr, 2025 2 commits
-
Ishan Modi authored
* added controlnet for sana transformer
* improve code quality
* addressed PR comments
* bug fixes
* added test cases
* update
* added dummy objects
* addressed PR comments
* update
* Forcing update
* add to docs
* code quality
* addressed PR comments
* addressed PR comments
* update
* addressed PR comments
* added proper styling
* update
* Revert "added proper styling" (this reverts commit 344ee8a7014ada095b295034ef84341f03b0e359)
* manually ordered
* Apply suggestions from code review
---------
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
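For reference, a minimal sketch of what loading the new Sana ControlNet might look like. The class names follow diffusers' usual ControlNet naming convention, and the repo ids and the `control_image` argument name are assumptions, not taken from the commit:

```python
import torch
from diffusers import SanaControlNetModel, SanaControlNetPipeline
from diffusers.utils import load_image

# Hypothetical repo ids; the commit does not name the actual checkpoints.
controlnet = SanaControlNetModel.from_pretrained(
    "ishan24/Sana_600M_1024px_ControlNet_diffusers", torch_dtype=torch.float16
)
pipe = SanaControlNetPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_600M_1024px_diffusers",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

control_image = load_image("control.png")  # placeholder path
image = pipe(
    prompt="a futuristic cityscape at dusk",
    control_image=control_image,  # argument name is an assumption
).images[0]
```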
-
Adrien B authored
Fix typo
-
- 11 Apr, 2025 1 commit
-
hlky authored
* HiDream Image
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
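A minimal sketch of running the new pipeline. The repo ids and the Llama text-encoder wiring are assumptions based on the HiDream-I1 release, not taken from the commit:

```python
import torch
from transformers import LlamaForCausalLM, PreTrainedTokenizerFast
from diffusers import HiDreamImagePipeline

# HiDream-I1 uses a Llama text encoder alongside the usual CLIP/T5 stack.
tokenizer_4 = PreTrainedTokenizerFast.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
text_encoder_4 = LlamaForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    output_hidden_states=True,
    torch_dtype=torch.bfloat16,
)

pipe = HiDreamImagePipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Full",
    tokenizer_4=tokenizer_4,
    text_encoder_4=text_encoder_4,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe('a cat holding a sign that says "HiDream"', num_inference_steps=50).images[0]
```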
-
- 09 Apr, 2025 3 commits
-
hlky authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
minor updates to dtype map docs.
-
Sayak Paul authored
-
- 08 Apr, 2025 3 commits
-
Sayak Paul authored
* implement record_stream for better performance.
* fix
* style.
* merge #11097
* Update src/diffusers/hooks/group_offloading.py (Co-authored-by: Aryan <aryan@huggingface.co>)
* fixes
* docstring.
* remaining todos in low_cpu_mem_usage
* tests
* updates to docs.
---------
Co-authored-by: Aryan <aryan@huggingface.co>
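As a usage sketch, `record_stream` and `low_cpu_mem_usage` plug into the existing group-offloading hook; the pipeline choice and the values below are illustrative, not prescriptive:

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.hooks import apply_group_offloading

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)

# Stream transformer weights between CPU and GPU during the forward pass;
# record_stream trades a little extra memory for faster onloading.
apply_group_offloading(
    pipe.transformer,
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="leaf_level",
    use_stream=True,
    record_stream=True,
    low_cpu_mem_usage=True,
)
```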
-
Benjamin Bossan authored
* [WIP][LoRA] Implement hot-swapping of LoRA

This PR adds the possibility to hot-swap LoRA adapters. It is WIP.

Description
As of now, users can already load multiple LoRA adapters. They can offload existing adapters or unload them (i.e. delete them). However, they cannot yet "hotswap" adapters, i.e. substitute the weights of one LoRA adapter with the weights of another without creating a separate adapter. Generally, hot-swapping may not appear super useful, but when the model is compiled, it is necessary to prevent recompilation. See #9279 for more context.

Caveats
To hot-swap one LoRA adapter for another, the two adapters should target exactly the same layers, and the "hyper-parameters" of the two adapters should be identical. For instance, the LoRA alpha has to be the same: given that we keep the alpha from the first adapter, the LoRA scaling would otherwise be incorrect for the second adapter. Theoretically, we could override the scaling dict with the alpha values derived from the second adapter's config, but changing the dict would trigger a guard for recompilation, defeating the main purpose of the feature. I also found that compilation flags can have an impact on whether this works or not. E.g. when passing "reduce-overhead", there will be errors of the type:

> input name: arg861_1. data pointer changed from 139647332027392 to 139647331054592

I don't know enough about compilation to determine whether this is problematic or not.

Current state
This is obviously WIP right now, to collect feedback and discuss which direction to take this. If this PR turns out to be useful, the hot-swapping functions will be added to PEFT itself and can be imported here (or kept as a separate copy in diffusers to avoid requiring a minimum PEFT version for this feature). Moreover, more tests need to be added to better cover this feature, although we don't necessarily need tests for the hot-swapping functionality itself, since those will be added to PEFT. Furthermore, as of now this is only implemented for the unet; other pipeline components have yet to implement the feature. Finally, it should be properly documented. I would like to collect feedback on the current state of the PR before putting more time into finalizing it.

* Reviewer feedback
* Reviewer feedback, adjust test
* Fix, doc
* Make fix
* Fix for possible g++ error
* Add test for recompilation w/o hotswapping
* Make hotswap work (requires https://github.com/huggingface/peft/pull/2366; together with the mentioned PEFT PR, the tests pass locally). List of changes:
  - docstring for hotswap
  - remove code copied from PEFT, import from PEFT now
  - adjustments to PeftAdapterMixin.load_lora_adapter (unfortunately, some state dict renaming was necessary; LMK if there is a better solution)
  - adjustments to UNet2DConditionLoadersMixin._process_lora (LMK if this is even necessary; I'm unsure of the overall relationship between this and PeftAdapterMixin.load_lora_adapter)
  - also in UNet2DConditionLoadersMixin._process_lora, there was no LoRA unloading when loading the adapter fails, so I added it (in line with what happens in PeftAdapterMixin.load_lora_adapter)
  - rewritten tests to avoid shelling out, made the test more precise by checking that the outputs align, and parametrized it
  - also checked the pipeline code mentioned in https://github.com/huggingface/diffusers/pull/9453#issuecomment-2418508871; when run inside the `with torch._dynamo.config.patch(error_on_recompile=True)` context there is no error, so hotswapping now works with pipelines
* Address reviewer feedback: revert deprecated method; fix PEFT doc link to main; don't use private function; clarify magic numbers; add pipeline test. Moreover: extend docstrings; extend existing test for outputs != 0; extend existing test for wrong adapter name
* Change order of test decorators (parameterized.expand seems to ignore skip decorators if added in last place, i.e. as the innermost decorator)
* Split model and pipeline tests; also increase test coverage by targeting conv2d layers (support for which was added recently in the PEFT PR)
* Reviewer feedback: move decorator to test classes instead of each test method
* Apply suggestions from code review (Co-authored-by: hlky <hlky@hlky.ac>)
* Reviewer feedback: version check, TODO comment
* Add enable_lora_hotswap method
* Reviewer feedback: check _lora_loadable_modules
* Revert changes in unet.py
* Add possibility to ignore enabled at wrong time
* Fix docstrings
* Log possible PEFT error, test
* Raise helpful error if hotswap is not supported (i.e. for the text encoder)
* Formatting
* More linter
* More ruff
* Doc-builder complaint
* Update docstring: mention no text encoder support yet; make it clear that LoRA is meant; mention that the same adapter name should be passed
* Fix error in docstring
* Update more methods with hotswap argument (SDXL, SD3, Flux); no changes were made to load_lora_into_transformer
* Add hotswap argument to load_lora_into_transformer for SD3 and Flux; use a shorter docstring for brevity
* Extend docstrings
* Add version guards to tests
* Formatting
* Fix LoRA loading call to add prefix=None (see https://github.com/huggingface/diffusers/pull/10187#issuecomment-2717571064)
* Run make fix-copies
* Add hot swap documentation to the docs
* Apply suggestions from code review (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
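Putting the pieces of the commit message together, a minimal sketch of the hot-swapping workflow (paths are placeholders; `enable_lora_hotswap`, the `hotswap` argument, and the same-adapter-name requirement all come from the PR description above):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Prepare LoRA layers so adapters of rank <= target_rank can be swapped in
# later without triggering recompilation.
pipe.enable_lora_hotswap(target_rank=64)

pipe.load_lora_weights("<repo-or-path-of-lora-A>", adapter_name="default")  # placeholder
pipe.unet = torch.compile(pipe.unet)
image = pipe("a photo of an astronaut").images[0]

# Swap the second adapter in under the SAME adapter name; the compiled unet
# is reused without recompilation.
pipe.load_lora_weights("<repo-or-path-of-lora-B>", adapter_name="default", hotswap=True)
image = pipe("a photo of an astronaut").images[0]
```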
-
Steven Liu authored
mps
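The doc presumably covers running diffusers on Apple silicon via PyTorch's `mps` backend; a minimal sketch of the basic usage (model id is illustrative, and attention slicing is the mitigation commonly recommended on this backend for machines with limited unified memory):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("mps")  # PyTorch's Metal backend on Apple silicon

# Recommended on mps machines with limited unified memory.
pipe.enable_attention_slicing()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```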
-
- 04 Apr, 2025 1 commit
-
Tolga Cangöz authored
* Refactor `LTXConditionPipeline` to add text-only conditioning
* style
* up
* Refactor `LTXConditionPipeline` to streamline condition handling and improve clarity
* Improve condition checks
* Simplify latents handling based on conditioning type
* Refactor rope_interpolation_scale preparation for clarity and efficiency
* Update LTXConditionPipeline docstring to clarify supported input types
* Add LTX Video 0.9.5 model to documentation
* Clarify documentation to indicate support for text-only conditioning without passing `conditions`
* refactor: comment out unused parameters in LTXConditionPipeline
* fix: restore previously commented parameters in LTXConditionPipeline
* fix: remove unused parameters from LTXConditionPipeline
* refactor: remove unnecessary lines in LTXConditionPipeline
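A minimal sketch of the new text-only path, assuming the LTX Video 0.9.5 checkpoint id implied by the message:

```python
import torch
from diffusers import LTXConditionPipeline
from diffusers.utils import export_to_video

pipe = LTXConditionPipeline.from_pretrained(
    "Lightricks/LTX-Video-0.9.5", torch_dtype=torch.bfloat16
).to("cuda")

# Text-only conditioning: no `conditions` argument is passed.
video = pipe(
    prompt="A serene lake at sunrise, mist drifting over the water",
    num_frames=97,
    height=480,
    width=704,
).frames[0]
export_to_video(video, "output.mp4", fps=24)
```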
-
- 02 Apr, 2025 1 commit
-
hlky authored
-
- 01 Apr, 2025 1 commit
-
Dhruv Nair authored
* update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update
-
- 31 Mar, 2025 1 commit
-
Mark authored
-
- 28 Mar, 2025 1 commit
-
Dhruv Nair authored
* update * update
-
- 24 Mar, 2025 3 commits
-
Aryan authored
* update
* Update docs/source/en/optimization/memory.md
* Apply suggestions from code review (Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>)
* apply review suggestions
* update
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
Jun Yeop Na authored
* Remove typo from the Korean ControlNet training doc
* Remove more paragraphs to stay in sync with the English document
-
Aryan authored
* update * update * update * add tests * update docs * raise value error * warning for true cfg and guidance scale * fix test
-
- 21 Mar, 2025 2 commits
-
YiYi Xu authored
* add sana-sprint
---------
Co-authored-by: Junsong Chen <cjs1020440147@icloud.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
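A minimal sketch of the new pipeline; the checkpoint id is an assumption based on the SANA-Sprint release:

```python
import torch
from diffusers import SanaSprintPipeline

pipe = SanaSprintPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers",
    torch_dtype=torch.bfloat16,
).to("cuda")

# SANA-Sprint is distilled for few-step generation.
image = pipe(
    prompt="a tiny astronaut hatching from an egg on the moon",
    num_inference_steps=2,
).images[0]
```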
-
Aryan authored
* init
* update
* update
* update
* make style
* update
* fix
* make it work with guidance distilled models
* update
* make fix-copies
* add tests
* update
* apply_faster_cache -> apply_fastercache
* fix
* reorder
* update
* refactor
* update docs
* add fastercache to CacheMixin
* update tests
* Apply suggestions from code review
* make style
* try to fix partial import error
* Apply style fixes
* raise warning
* update
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
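Since the PR wires FasterCache into `CacheMixin`, usage should reduce to `enable_cache` with a `FasterCacheConfig`; a sketch with illustrative (not tuned) values:

```python
import torch
from diffusers import CogVideoXPipeline, FasterCacheConfig

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
).to("cuda")

config = FasterCacheConfig(
    spatial_attention_block_skip_range=2,
    spatial_attention_timestep_skip_range=(-1, 681),
    current_timestep_callback=lambda: pipe.current_timestep,
    attention_weight_callback=lambda _: 0.3,  # illustrative weight
)
pipe.transformer.enable_cache(config)
```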
-
- 18 Mar, 2025 1 commit
-
Aryan authored
* update
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
-
- 13 Mar, 2025 1 commit
-
hlky authored
* Rename Lumina(2)Text2ImgPipeline -> Lumina(2)Pipeline
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
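In code, the rename amounts to the following (the checkpoint id is an assumption, and the old names presumably keep working through a deprecation alias for a while):

```python
# Formerly LuminaText2ImgPipeline / Lumina2Text2ImgPipeline.
from diffusers import LuminaPipeline, Lumina2Pipeline

pipe = Lumina2Pipeline.from_pretrained("Alpha-VLLM/Lumina-Image-2.0")
```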
-
- 12 Mar, 2025 1 commit
-
hlky authored
* [Hybrid Inference 🍯🐝] Add VAE encode
* _toctree: add vae encode
* Add endpoints, tests
* vae_encode docs
* vae encode benchmarks
* api reference
* changelog
* Update docs/source/en/hybrid_inference/overview.md (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
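A sketch of the encode half added here, mirroring the existing `remote_decode` API (the endpoint URL, path, and scaling factor are placeholders; see the Hybrid Inference docs for live endpoints):

```python
from diffusers.utils import load_image
from diffusers.utils.remote_utils import remote_encode

image = load_image("init.png")  # placeholder path

latent = remote_encode(
    endpoint="https://<vae-encode-endpoint>/",  # placeholder endpoint
    image=image,
    scaling_factor=0.18215,  # model-dependent; SD1.5-style value shown
)
```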
-
- 11 Mar, 2025 2 commits
-
Sayak Paul authored
* support Wan I2V LoRAs from the world. * remove copied from. * updates * add lora.
-
Dhruv Nair authored
* update * update * update * update * update * update * update * update * update
-
- 10 Mar, 2025 1 commit
-
Dhruv Nair authored
* update * update * update * update * update * update * update * update * update * update * update * update * Update docs/source/en/quantization/quanto.md (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>) * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * Update src/diffusers/quantizers/quanto/utils.py (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>) * update * update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
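A minimal sketch of the Quanto quantization backend this PR adds (assumes `optimum-quanto` is installed; the `weights_dtype` argument name is an assumption about the config surface):

```python
import torch
from diffusers import FluxTransformer2DModel, QuantoConfig

# Requires `pip install optimum-quanto`.
quant_config = QuantoConfig(weights_dtype="int8")

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```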
-
- 07 Mar, 2025 2 commits
-
Dhruv Nair authored
* update * update * update * update * update * update * update
-
Aryan authored
* update * update * update * add tests * update * add model tests * update docs * update * update example * fix defaults * update
-
- 04 Mar, 2025 1 commit
-
Sayak Paul authored
* Update evaluation.md * Update docs/source/en/conceptual/evaluation.md Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> --------- Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com>
-
- 03 Mar, 2025 2 commits
-
Parag Ekbote authored
* Add example of IP-Adapter callback. * Add image links from HF Hub.
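The idea of the example: use `callback_on_step_end` to switch the IP-Adapter off partway through denoising. A sketch (the cutoff step, scale, and image path are illustrative):

```python
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin"
)
pipe.set_ip_adapter_scale(0.6)

def cutoff_callback(pipeline, step, timestep, callback_kwargs):
    # Stop image-prompt guidance after step 10; text guidance continues.
    if step == 10:
        pipeline.set_ip_adapter_scale(0.0)
    return callback_kwargs

image = pipe(
    prompt="a polar bear in a snowstorm",
    ip_adapter_image=load_image("style.png"),  # placeholder path
    callback_on_step_end=cutoff_callback,
).images[0]
```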
-
Bubbliiiing authored
* Update EasyAnimate V5.1
* Add docs, add tests, and fix comment problems in transformer3d and vae
* delete comments and remove useless import
* delete process
* Update EXAMPLE_DOC_STRING
* rename transformer file
* make fix-copies
* make style
* refactor pt. 1
* update toctree.yml
* add model tests
* Update layer_norm for norm_added_q and norm_added_k in Attention
* Fix processor problem
* refactor vae
* Fix problem in comments
* refactor tiling; remove einops dependency
* fix docs path
* make fix-copies
* Update src/diffusers/pipelines/easyanimate/pipeline_easyanimate_control.py
* update _toctree.yml
* fix test
* update
* update
* update
* make fix-copies
* fix tests
---------
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
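A minimal sketch of text-to-video with the updated pipeline; the checkpoint id is an assumption based on the EasyAnimate V5.1 release:

```python
import torch
from diffusers import EasyAnimatePipeline
from diffusers.utils import export_to_video

pipe = EasyAnimatePipeline.from_pretrained(
    "alibaba-pai/EasyAnimateV5.1-12b-zh-diffusers",  # hypothetical repo id
    torch_dtype=torch.bfloat16,
).to("cuda")

video = pipe("A cat walks on the grass, realistic style", num_frames=49).frames[0]
export_to_video(video, "easyanimate.mp4", fps=8)
```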
-
- 02 Mar, 2025 2 commits
-
hlky authored
* Add `remote_decode` to `remote_utils`
* test dependency
* test dependency
* dependency
* dependency
* dependency
* docstrings
* changes
* make style
* apply
* revert, add new options
* Apply style fixes
* deprecate base64, headers not needed
* address comments
* add license header
* init test_remote_decode
* more
* more test
* more test
* skeleton for xl, flux
* more test
* flux test
* flux packed
* no scaling
* -save
* hunyuanvideo test
* Apply style fixes
* init docs
* Update src/diffusers/utils/remote_utils.py (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* comments
* Apply style fixes
* comments
* hybrid_inference/vae_decode
* fix
* tip?
* tip
* api reference autodoc
* install tip
---------
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
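A minimal sketch of `remote_decode` (the endpoint URL is a placeholder; the latent shape matches an SD1.5-style VAE):

```python
import torch
from diffusers.utils.remote_utils import remote_decode

latent = torch.randn(1, 4, 64, 64, dtype=torch.float16)  # stand-in latent

image = remote_decode(
    endpoint="https://<vae-decode-endpoint>/",  # placeholder endpoint
    tensor=latent,
    scaling_factor=0.18215,
)
image.save("decoded.png")
```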
-
YiYi Xu authored
* Add wanx pipeline, model and example
* wanx_merged_v1
* change WanX into Wan
* fix i2v fp32 oom error (Link: https://code.alibaba-inc.com/open_wanx2/diffusers/codereview/20607813)
* support t2v load fp32 ckpt
* add example
* final merge v1
* Update autoencoder_kl_wan.py
* up
* update middle, test up_block
* up up
* one less nn.Sequential
* up more
* up
* more
* [refactor] [wip] Wan transformer/pipeline (#10926)
* update
* update
* refactor rope
* refactor pipeline
* make fix-copies
* add transformer test
* update
* update
* make style
* update tests
* tests
* conversion script
* conversion script
* update
* docs
* remove unused code
* fix _toctree.yml
* update dtype
* fix test
* fix tests: scale
* up
* more
* Apply suggestions from code review
* Apply suggestions from code review
* style
* Update scripts/convert_wan_to_diffusers.py
* update docs
* fix
---------
Co-authored-by: Yitong Huang <huangyitong.hyt@alibaba-inc.com>
Co-authored-by: 亚森 <wangjiayu.wjy@alibaba-inc.com>
Co-authored-by: Aryan <aryan@huggingface.co>
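A minimal sketch of Wan text-to-video. The checkpoint id is an assumption based on the Wan 2.1 release; loading the VAE in float32 follows the fp32 fixes mentioned above:

```python
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

vae = AutoencoderKLWan.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.float32
)
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", vae=vae, torch_dtype=torch.bfloat16
).to("cuda")

video = pipe(prompt="A cat walks on the grass, realistic style", num_frames=81).frames[0]
export_to_video(video, "wan.mp4", fps=16)
```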
-
- 26 Feb, 2025 1 commit
-
Anton Obukhov authored
* minor documentation fixes of the depth and normals pipelines
* update license headers
* update model checkpoints in examples; fix missing prediction_type in register_to_config in the normals pipeline
* add initial marigold intrinsics pipeline; update comments about num_inference_steps and ensemble_size; minor fixes in comments of the marigold normals and depth pipelines
* update uncertainty visualization to work with intrinsics
* integrate iid
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
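A very rough sketch of the new intrinsics pipeline; both the class name and the checkpoint id are assumptions based on the marigold-iid naming in the message:

```python
import torch
from diffusers import MarigoldIntrinsicsPipeline
from diffusers.utils import load_image

pipe = MarigoldIntrinsicsPipeline.from_pretrained(
    "prs-eth/marigold-iid-appearance-v1-1",  # hypothetical repo id
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("photo.png")  # placeholder path
intrinsics = pipe(image, num_inference_steps=4, ensemble_size=1)
```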
-
- 24 Feb, 2025 4 commits
-
Dhruv Nair authored
update
-
Aryan authored
update
-
Steven Liu authored
* flux group-offload * feedback
-
Steven Liu authored
* sd_embed * feedback
-
- 22 Feb, 2025 1 commit
-
Steven Liu authored
* lora
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 21 Feb, 2025 1 commit
-
SahilCarterr authored
Fix docs
-