- 15 Jul, 2025 1 commit
Aryan authored
* update
* make style
- 11 Jul, 2025 1 commit
Aryan authored
* update
* update
* update
* pin accelerate version
* add comment explanations
* update docstring
* make style
* non_blocking does not matter for dtype cast
* _empty_cache -> clear_cache
* update
* Update src/diffusers/models/model_loading_utils.py
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* Update src/diffusers/models/model_loading_utils.py
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
- 10 Jul, 2025 2 commits
Sayak Paul authored
fix: disabling hooks when loading loras.

YiYi Xu authored
adding modular diffusers as experimental feature
---------
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 04 Jul, 2025 2 commits
Aryan authored
* fix
* actually, better fix
* empty commit; trigger tests again
* mark wanvace test as flaky
Benjamin Bossan authored
* FIX set_lora_device when target layers differ
  Resolves #11833
  Fixes a bug that occurs after calling set_lora_device when multiple LoRA adapters are loaded that target different layers.
  Note: Technically, the accompanying test does not require a GPU because the bug is triggered even if the parameters are already on the corresponding device, i.e. loading on CPU and then changing the device to CPU is sufficient to cause the bug. However, this may be optimized away in the future, so I decided to test with GPU.
* Update docstring to warn about device mismatch
* Extend docstring with an example
* Fix docstring
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
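For context, the failure mode is easiest to reproduce with two adapters that target different layers; a minimal sketch of the scenario (repo ids and adapter names are hypothetical, not from the commit):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
# Two LoRAs that target different sets of layers.
pipe.load_lora_weights("user/style-lora", adapter_name="style")
pipe.load_lora_weights("user/detail-lora", adapter_name="detail")

# Moving only one adapter used to fail on modules that the other adapter
# does not target; the updated docstring also warns that adapters left on
# mismatched devices can cause device errors at inference time.
pipe.set_lora_device(["style"], "cuda")
```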
- 02 Jul, 2025 2 commits
Linoy Tsaban authored
* initial commit
* initial commit
* initial commit
* fix import
* fix prefix
* remove print
* Apply style fixes
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>

Ju Hoon Park authored
* add `WanVACETransformer3DModel` in `SINGLE_FILE_LOADABLE_CLASSES`
* add rename keys for `VACE`
* fix typo
  Sincere thanks to @nitinmukesh 🙇‍♂️
* support for `1.3B VACE` model
  Sincere thanks to @nitinmukesh again 🙇‍♂️
* update
* update
* Apply style fixes
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 01 Jul, 2025 1 commit
Aryan authored
* update
* update
* update docs
- 30 Jun, 2025 2 commits
Benjamin Bossan authored
* ENH Improve speed of expanding LoRA scales
  Resolves #11816
  The following call proved to be a bottleneck when setting a lot of LoRA adapters in diffusers:
  https://github.com/huggingface/diffusers/blob/cdaf84a708eadf17d731657f4be3fa39d09a12c0/src/diffusers/loaders/peft.py#L482
  This is because we would repeatedly call unet.state_dict(), even though in the standard case, it is not necessary:
  https://github.com/huggingface/diffusers/blob/cdaf84a708eadf17d731657f4be3fa39d09a12c0/src/diffusers/loaders/unet_loader_utils.py#L55
  This PR fixes this by deferring this call, so that it is only run when it's necessary, not earlier.
* Small fix
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
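The fix is an instance of a general pattern: defer an expensive computation until a branch actually needs it. A rough sketch of the idea (names are illustrative, not the actual diffusers internals):

```python
from functools import lru_cache

def expand_lora_scales(unet, scales):
    # Sketch of the deferral: unet.state_dict() is wrapped so that it is
    # computed at most once, and only if some branch below asks for it.
    @lru_cache(maxsize=1)
    def state_dict_keys():
        return tuple(unet.state_dict().keys())

    expanded = {}
    for name, scale in scales.items():
        if isinstance(scale, dict):
            # Non-standard case: per-block scales need the full key list.
            for key in state_dict_keys():
                if key.startswith(name):
                    expanded[key] = scale.get(key, 1.0)
        else:
            # Standard case: no state_dict() call happens at all.
            expanded[name] = scale
    return expanded
```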
Sayak Paul authored
* feat: use exclude modules to loraconfig.
* version-guard.
* tests and version guard.
* remove print.
* describe the test
* more detailed warning message + shift to debug
* update
* update
* update
* remove test
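For illustration, the version-guard pattern referenced above might look like this (the exact peft version that introduced `exclude_modules` is an assumption here):

```python
import logging

import peft
from packaging import version

lora_kwargs = {"r": 16, "target_modules": ["to_q", "to_k", "to_v"]}

# `exclude_modules` is only understood by newer peft releases, so guard on
# the installed version rather than crashing in LoraConfig.__init__.
if version.parse(peft.__version__) >= version.parse("0.14.0"):
    lora_kwargs["exclude_modules"] = ["proj_out"]
else:
    # Mirrors the "shift to debug" item above: note the drop quietly.
    logging.getLogger(__name__).debug(
        "peft %s lacks `exclude_modules`; ignoring it.", peft.__version__
    )

config = peft.LoraConfig(**lora_kwargs)
```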
- 28 Jun, 2025 1 commit
Sayak Paul authored
* fix: lora unloading behaviour
* fix
* update
- 27 Jun, 2025 1 commit
Aryan authored
* update
* add test
* address review comments
* update
* fixes
* change decorator order to fix tests
* try fix
* fight tests
- 25 Jun, 2025 1 commit
Sayak Paul authored
- 24 Jun, 2025 2 commits
YiYi Xu authored
up

Sayak Paul authored
* minor cleanups in the lora docs.
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* format docs
* fix copies
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
- 19 Jun, 2025 2 commits
Sayak Paul authored
* factor out stuff from load_lora_adapter().
* simplifying text encoder lora loading.
* fix peft.py
* fix logging locations.
* formatting
* fix
* update
* update
* update

Aryan authored
update
- 17 Jun, 2025 1 commit
Aryan authored
update
- 16 Jun, 2025 1 commit
Sayak Paul authored
* fix flux lora loader when return_metadata is true for non-diffusers
* remove annotation
- 14 Jun, 2025 1 commit
Edna authored
* working state from hameerabbasi and iddl
* working state from hameerabbasi and iddl (transformer)
* working state (normalization)
* working state (embeddings)
* add chroma loader
* add chroma to mappings
* add chroma to transformer init
* take out variant stuff
* get decently far in changing variant stuff
* add chroma init
* make chroma output class
* add chroma transformer to dummy tp
* add chroma to init
* add chroma to init
* fix single file
* update
* update
* add chroma to auto pipeline
* add chroma to pipeline init
* change to chroma transformer
* take out variant from blocks
* swap embedder location
* remove prompt_2
* work on swapping text encoders
* remove mask function
* don't modify mask (for now)
* wrap attn mask
* no attn mask (can't get it to work)
* remove pooled prompt embeds
* change to my own unpooled embedder
* fix load
* take pooled projections out of transformer
* ensure correct dtype for chroma embeddings
* update
* use dn6 attn mask + fix true_cfg_scale
* use chroma pipeline output
* use DN6 embeddings
* remove guidance
* remove guidance embed (pipeline)
* remove guidance from embeddings
* don't return length
* don't change dtype
* remove unused stuff, fix up docs
* add chroma autodoc
* add .md (oops)
* initial chroma docs
* undo don't change dtype
* undo arxiv change, unsure why that happened
* fix hf papers regression in more places
* Update docs/source/en/api/pipelines/chroma.md
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* do_cfg -> self.do_classifier_free_guidance
* Update docs/source/en/api/models/chroma_transformer.md
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Update chroma.md
* Move chroma layers into transformer
* Remove pruned AdaLayerNorms
* Add chroma fast tests
* (untested) batch cond and uncond
* Add # Copied from for shift
* Update # Copied from statements
* update norm imports
* Revert cond + uncond batching
* Add transformer tests
* move chroma test (oops)
* chroma init
* fix chroma pipeline fast tests
* Update src/diffusers/models/transformers/transformer_chroma.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Move Approximator and Embeddings
* Fix auto pipeline + make style, quality
* make style
* Apply style fixes
* switch to new input ids
* fix # Copied from error
* remove # Copied from on protected members
* try to fix import
* fix import
* make fix-copies
* revert style fix
* update chroma transformer params
* update chroma transformer approximator init params
* update to pad tokens
* fix batch inference
* Make more pipeline tests work
* Make most transformer tests work
* fix docs
* make style, make quality
* skip batch tests
* fix test skipping
* fix test skipping again
* fix for tests
* Fix all pipeline tests
* update
* push local changes, fix docs
* add encoder test, remove pooled dim
* default proj dim
* fix tests
* fix equal size list input
* update
* push local changes, fix docs
* add encoder test, remove pooled dim
* default proj dim
* fix tests
* fix equal size list input
* Revert "fix equal size list input"
  This reverts commit 3fe4ad67d58d83715bc238f8654f5e90bfc5653c.
* update
* update
* update
* update
* update
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
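The net result is a Flux-style pipeline with the guidance embedder pruned and the pooled prompt embeddings removed, as the bullets above describe. Basic usage looks roughly like this (the repo id is illustrative, not necessarily the canonical checkpoint):

```python
import torch
from diffusers import ChromaPipeline

# Repo id is an assumption for illustration. Chroma drops the guidance
# embed and pooled text embeddings that Flux uses, so the call surface is
# a plain prompt/negative-prompt pair.
pipe = ChromaPipeline.from_pretrained("lodestones/Chroma", torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = pipe(
    prompt="a photo of a cat wearing a tiny wizard hat",
    negative_prompt="blurry, low quality",
    num_inference_steps=28,
).images[0]
image.save("chroma.png")
```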
- 13 Jun, 2025 2 commits
Sayak Paul authored
* feat: parse metadata from lora state dicts.
* tests
* fix tests
* key renaming
* fix
* smol update
* smol updates
* load metadata.
* automatically save metadata in save_lora_adapter.
* propagate changes.
* changes
* add test to models too.
* tighter tests.
* updates
* fixes
* rename tests.
* sorted.
* Update src/diffusers/loaders/lora_base.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* review suggestions.
* removeprefix.
* propagate changes.
* fix-copies
* sd
* docs.
* fixes
* get review ready.
* one more test to catch error.
* change to a different approach.
* fix-copies.
* todo
* sd3
* update
* revert changes in get_peft_kwargs.
* update
* fixes
* fixes
* simplify _load_sft_state_dict_metadata
* update
* style fix
* update
* update
* update
* empty commit
* _pack_dict_with_prefix
* update
* TODO 1.
* todo: 2.
* todo: 3.
* update
* update
* Apply suggestions from code review
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* reraise.
* move argument.
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
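The feature builds on safetensors' string-to-string metadata header. A minimal sketch of round-tripping a LoRA config through it (file name and metadata keys are illustrative, not the exact ones diffusers writes):

```python
import json

import torch
import safetensors.torch
from safetensors import safe_open

state_dict = {"lora_A.weight": torch.zeros(4, 4)}

# Save: metadata values must be strings, hence the json.dumps.
metadata = {"lora_config": json.dumps({"r": 16, "lora_alpha": 32})}
safetensors.torch.save_file(state_dict, "adapter.safetensors", metadata=metadata)

# Load: the config can be read back without materializing any tensors.
with safe_open("adapter.safetensors", framework="pt") as f:
    config = json.loads(f.metadata()["lora_config"])
print(config)  # {'r': 16, 'lora_alpha': 32}
```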
Aryan authored
* update
* make style
* Update src/diffusers/loaders/lora_conversion_utils.py
* add note explaining threshold
- 11 Jun, 2025 1 commit
Sayak Paul authored
support Flux Control LoRA with bnb 8bit.
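For context, this covers setups along these lines (the repo ids are real Flux checkpoints, but the snippet is an illustrative sketch, not taken from the commit):

```python
import torch
from diffusers import BitsAndBytesConfig, FluxControlPipeline, FluxTransformer2DModel

# Quantize the base transformer to 8-bit with bitsandbytes.
quant_config = BitsAndBytesConfig(load_in_8bit=True)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", transformer=transformer, torch_dtype=torch.bfloat16
)

# Control LoRAs expand the transformer's input channels; this commit makes
# that expansion work when the base weights are bnb 8-bit.
pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora")
```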
- 06 Jun, 2025 1 commit
Aryan authored
* initial support
* make fix-copies
* fix no split modules
* add conversion script
* refactor
* add pipeline test
* refactor
* fix bug with mask
* fix for reference images
* remove print
* update docs
* update slices
* update
* update
* update example
- 30 May, 2025 1 commit
co63oc authored
* Fix typos in strings and comments
Signed-off-by: co63oc <co63oc@users.noreply.github.com>
* Update src/diffusers/hooks/hooks.py
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Update src/diffusers/hooks/hooks.py
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Update layerwise_casting.py
* Apply style fixes
* update
---------
Signed-off-by: co63oc <co63oc@users.noreply.github.com>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 27 May, 2025 1 commit
Sayak Paul authored
* improve lora fusion tests
* more improvements.
* remove comment
* update
* relax tolerance.
* num_fused_loras as a property
Co-authored-by: BenjaminBossan <benjamin.bossan@gmail.com>
* updates
* update
* fix
* fix
Co-authored-by: BenjaminBossan <benjamin.bossan@gmail.com>
* Update src/diffusers/loaders/lora_base.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
---------
Co-authored-by: BenjaminBossan <benjamin.bossan@gmail.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
- 22 May, 2025 1 commit
Sayak Paul authored
* fix peft delete adapters for flux.
* add test
* empty commit
- 20 May, 2025 1 commit
Linoy Tsaban authored
* testing
* testing
* testing
* testing
* testing
* i2v
* i2v
* device fix
* testing
* fix
* fix
* fix
* fix
* fix
* Apply style fixes
* empty commit
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 19 May, 2025 4 commits
Dhruv Nair authored
update

Sayak Paul authored
* start supporting kijai wan lora.
* diff_b keys.
* Apply suggestions from code review
Co-authored-by: Aryan <aryan@huggingface.co>
* merge ready
---------
Co-authored-by: Aryan <aryan@huggingface.co>

Linoy Tsaban authored
* support non diffusers loras for ltxv
* Update src/diffusers/loaders/lora_conversion_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update src/diffusers/loaders/lora_pipeline.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Apply style fixes
* empty commit
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>

apolinário authored
- 15 May, 2025 1 commit
Dhruv Nair authored
* update
* update
* update
* update
* update
* update
* update
- 13 May, 2025 2 commits
Linoy Tsaban authored
init

johannaSommer authored
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
- 09 May, 2025 1 commit
Sayak Paul authored
* support non-diffusers hidream loras
* make fix-copies
- 06 May, 2025 2 commits
Valeriy Selitskiy authored
[lora_conversion] Enhance key handling for OneTrainer components in LoRA conversion utility (#11441) (#11487)
* [lora_conversion] Enhance key handling for OneTrainer components in LoRA conversion utility (#11441)
* Update src/diffusers/loaders/lora_conversion_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Sayak Paul authored
* use removeprefix to preserve sanity.
* f-string.
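The sanity argument: `str.removeprefix` (Python 3.9+) removes exactly one literal prefix, while the commonly misused `str.lstrip` strips a character set and can eat into the key. A toy example with an assumed key name:

```python
key = "lora_te.text_model.encoder.weight"

# lstrip treats its argument as a set of characters, so it also strips the
# leading "te" of "text_model":
print(key.lstrip("lora_te."))        # xt_model.encoder.weight
# removeprefix removes the literal prefix, exactly once:
print(key.removeprefix("lora_te."))  # text_model.encoder.weight
```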
- 01 May, 2025 1 commit
co63oc authored
* Fix typos in docs and comments
* Apply style fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>