- 02 Jul, 2025 2 commits
-
-
Linoy Tsaban authored
* initial commit * initial commit * initial commit * fix import * fix prefix * remove print * Apply style fixes --------- Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
Ju Hoon Park authored
* add `WanVACETransformer3DModel` in `SINGLE_FILE_LOADABLE_CLASSES` * add rename keys for `VACE` * fix typo. Sincere thanks to @nitinmukesh 🙇‍♂️ * support for `1.3B VACE` model. Sincere thanks to @nitinmukesh again 🙇‍♂️ * update * update * Apply style fixes --------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
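The commit above registers the Wan VACE transformer in `SINGLE_FILE_LOADABLE_CLASSES`, which is what enables `from_single_file` loading for VACE checkpoints. A minimal sketch of that entry point; the checkpoint path and dtype are illustrative assumptions, not values taken from the commit:

```python
import torch
from diffusers import WanVACETransformer3DModel

# Hypothetical single-file checkpoint path; substitute a real VACE checkpoint.
ckpt_path = "path/to/Wan2.1-VACE-1.3B.safetensors"

# `from_single_file` is the loader gated by SINGLE_FILE_LOADABLE_CLASSES;
# the commit adds the key-rename mapping so the original checkpoint keys resolve.
transformer = WanVACETransformer3DModel.from_single_file(
    ckpt_path,
    torch_dtype=torch.bfloat16,
)
```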
-
- 01 Jul, 2025 1 commit
-
-
Aryan authored
* update * update * update docs
-
- 30 Jun, 2025 2 commits
-
-
Benjamin Bossan authored
* ENH Improve speed of expanding LoRA scales
  Resolves #11816
  The following call proved to be a bottleneck when setting a lot of LoRA adapters in diffusers:
  https://github.com/huggingface/diffusers/blob/cdaf84a708eadf17d731657f4be3fa39d09a12c0/src/diffusers/loaders/peft.py#L482
  This is because we would repeatedly call unet.state_dict(), even though in the standard case it is not necessary:
  https://github.com/huggingface/diffusers/blob/cdaf84a708eadf17d731657f4be3fa39d09a12c0/src/diffusers/loaders/unet_loader_utils.py#L55
  This PR fixes this by deferring this call, so that it is only run when it's necessary, not earlier.
* Small fix
--------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
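The fix described above defers the expensive `unet.state_dict()` call so it only runs when a scale entry actually needs key lookup. A rough sketch of the deferral pattern, using hypothetical helper names rather than the actual diffusers internals:

```python
from functools import lru_cache

def expand_lora_scales(unet, scales: dict):
    # Defer the expensive state_dict() call: it is materialized at most once,
    # and only if some entry actually requires key lookup.
    @lru_cache(maxsize=1)
    def get_state_dict_keys():
        return list(unet.state_dict().keys())

    expanded = {}
    for name, scale in scales.items():
        if isinstance(scale, (int, float)):
            # Common fast path: a plain scalar never touches the state dict.
            expanded[name] = scale
        else:
            # Only structured scales trigger the deferred lookup.
            keys = get_state_dict_keys()
            expanded[name] = {k: scale for k in keys if k.startswith(name)}
    return expanded
```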
-
Sayak Paul authored
* feat: use exclude modules to loraconfig. * version-guard. * tests and version guard. * remove print. * describe the test * more detailed warning message + shift to debug * update * update * update * remove test
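The commit above forwards exclusion patterns to PEFT's `LoraConfig` behind a version guard. A hedged sketch of that pattern; the minimum version string and the choice of excluded modules are assumptions for illustration:

```python
from packaging import version
import peft
from peft import LoraConfig

def build_lora_config(target_modules, exclude_modules=None, rank=16):
    kwargs = {"r": rank, "target_modules": target_modules}
    # `exclude_modules` only exists in newer PEFT releases, so only pass it
    # when the installed version supports it (version bound is an assumption).
    if exclude_modules and version.parse(peft.__version__) >= version.parse("0.14.0"):
        kwargs["exclude_modules"] = exclude_modules
    return LoraConfig(**kwargs)
```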
-
- 28 Jun, 2025 1 commit
-
-
Sayak Paul authored
* fix: lora unloading behaviour * fix * update
-
- 27 Jun, 2025 1 commit
-
-
Aryan authored
* update * add test * address review comments * update * fixes * change decorator order to fix tests * try fix * fight tests
-
- 25 Jun, 2025 1 commit
-
-
Sayak Paul authored
-
- 24 Jun, 2025 2 commits
-
-
YiYi Xu authored
up
-
Sayak Paul authored
* minor cleanups in the lora docs. * Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * format docs * fix copies --------- Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
-
- 19 Jun, 2025 2 commits
-
-
Sayak Paul authored
* factor out stuff from load_lora_adapter(). * simplifying text encoder lora loading. * fix peft.py * fix logging locations. * formatting * fix * update * update * update
-
Aryan authored
update
-
- 17 Jun, 2025 1 commit
-
-
Aryan authored
update
-
- 16 Jun, 2025 1 commit
-
-
Sayak Paul authored
* fix flux lora loader when return_metadata is true for non-diffusers * remove annotation
-
- 14 Jun, 2025 1 commit
-
-
Edna authored
* working state from hameerabbasi and iddl * working state from hameerabbasi and iddl (transformer) * working state (normalization) * working state (embeddings) * add chroma loader * add chroma to mappings * add chroma to transformer init * take out variant stuff * get decently far in changing variant stuff * add chroma init * make chroma output class * add chroma transformer to dummy tp * add chroma to init * add chroma to init * fix single file * update * update * add chroma to auto pipeline * add chroma to pipeline init * change to chroma transformer * take out variant from blocks * swap embedder location * remove prompt_2 * work on swapping text encoders * remove mask function * don't modify mask (for now) * wrap attn mask * no attn mask (can't get it to work) * remove pooled prompt embeds * change to my own unpooled embedder * fix load * take pooled projections out of transformer * ensure correct dtype for chroma embeddings * update * use dn6 attn mask + fix true_cfg_scale * use chroma pipeline output * use DN6 embeddings * remove guidance * remove guidance embed (pipeline) * remove guidance from embeddings * don't return length * don't change dtype * remove unused stuff, fix up docs * add chroma autodoc * add .md (oops) * initial chroma docs * undo don't change dtype * undo arxiv change, unsure why that happened * fix hf papers regression in more places * Update docs/source/en/api/pipelines/chroma.md Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> * do_cfg -> self.do_classifier_free_guidance * Update docs/source/en/api/models/chroma_transformer.md Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> * Update chroma.md * Move chroma layers into transformer * Remove pruned AdaLayerNorms * Add chroma fast tests * (untested) batch cond and uncond * Add # Copied from for shift * Update # Copied from statements * update norm imports * Revert cond + uncond batching * Add transformer tests * move chroma test (oops) * chroma init * fix chroma pipeline fast tests * Update src/diffusers/models/transformers/transformer_chroma.py Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> * Move Approximator and Embeddings * Fix auto pipeline + make style, quality * make style * Apply style fixes * switch to new input ids * fix # Copied from error * remove # Copied from on protected members * try to fix import * fix import * make fix-copies * revert style fix * update chroma transformer params * update chroma transformer approximator init params * update to pad tokens * fix batch inference * Make more pipeline tests work * Make most transformer tests work * fix docs * make style, make quality * skip batch tests * fix test skipping * fix test skipping again * fix for tests * Fix all pipeline tests * update * push local changes, fix docs * add encoder test, remove pooled dim * default proj dim * fix tests * fix equal size list input * update * push local changes, fix docs * add encoder test, remove pooled dim * default proj dim * fix tests * fix equal size list input * Revert "fix equal size list input" This reverts commit 3fe4ad67d58d83715bc238f8654f5e90bfc5653c. * update * update * update * update * update --------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
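The commit above adds the Chroma pipeline and its transformer to diffusers. A minimal usage sketch; the checkpoint id, prompt, and step count are placeholders, not values taken from the commit:

```python
import torch
from diffusers import ChromaPipeline

# Placeholder repo id; substitute the actual Chroma weights repository.
pipe = ChromaPipeline.from_pretrained("<chroma-checkpoint>", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Chroma drops the distilled guidance embedding and pooled prompt embeds,
# so the call is prompt-only plus the usual sampling knobs.
image = pipe(
    prompt="a photo of an astronaut riding a horse",
    num_inference_steps=28,
).images[0]
image.save("chroma.png")
```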
-
- 13 Jun, 2025 2 commits
-
-
Sayak Paul authored
* feat: parse metadata from lora state dicts. * tests * fix tests * key renaming * fix * smol update * smol updates * load metadata. * automatically save metadata in save_lora_adapter. * propagate changes. * changes * add test to models too. * tighter tests. * updates * fixes * rename tests. * sorted. * Update src/diffusers/loaders/lora_base.py Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * review suggestions. * removeprefix. * propagate changes. * fix-copies * sd * docs. * fixes * get review ready. * one more test to catch error. * change to a different approach. * fix-copies. * todo * sd3 * update * revert changes in get_peft_kwargs. * update * fixes * fixes * simplify _load_sft_state_dict_metadata * update * style fix * update * update * update * empty commit * _pack_dict_with_prefix * update * TODO 1. * todo: 2. * todo: 3. * update * update * Apply suggestions from code review Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * reraise. * move argument. --------- Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com> Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
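The metadata commit above stores the LoRA config alongside the weights and reads it back on load (the `_pack_dict_with_prefix` step mentioned in the log). A rough sketch of the underlying mechanism via the safetensors metadata field; the metadata key and helper names here are assumptions, not the exact ones used in diffusers:

```python
import json
from safetensors import safe_open
from safetensors.torch import save_file

def save_lora_with_metadata(state_dict, lora_config: dict, path: str):
    # safetensors metadata values must be strings, so serialize the LoRA config
    # to JSON under an assumed key (diffusers uses its own prefixed key).
    metadata = {"lora_metadata": json.dumps(lora_config)}
    save_file(state_dict, path, metadata=metadata)

def load_lora_metadata(path: str) -> dict:
    # Read the header metadata back without loading any tensors.
    with safe_open(path, framework="pt") as f:
        raw = f.metadata() or {}
    return json.loads(raw["lora_metadata"]) if "lora_metadata" in raw else {}
```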
-
Aryan authored
* update * make style * Update src/diffusers/loaders/lora_conversion_utils.py * add note explaining threshold
-
- 11 Jun, 2025 1 commit
-
-
Sayak Paul authored
support Flux Control LoRA with bnb 8bit.
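A minimal sketch of what the one-line commit above enables: loading a Flux Control LoRA on top of a transformer quantized with bitsandbytes 8-bit. The repo ids are assumptions for illustration:

```python
import torch
from diffusers import BitsAndBytesConfig, FluxControlPipeline, FluxTransformer2DModel

quant_config = BitsAndBytesConfig(load_in_8bit=True)

# Quantize the transformer with bnb 8-bit, then load the Control LoRA over it.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
# Control LoRA repo id is an assumption; substitute the checkpoint you use.
pipe.load_lora_weights("black-forest-labs/FLUX.1-Canny-dev-lora")
```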
-
- 06 Jun, 2025 1 commit
-
-
Aryan authored
* initial support * make fix-copies * fix no split modules * add conversion script * refactor * add pipeline test * refactor * fix bug with mask * fix for reference images * remove print * update docs * update slices * update * update * update example
-
- 30 May, 2025 1 commit
-
-
co63oc authored
* Fix typos in strings and comments Signed-off-by: co63oc <co63oc@users.noreply.github.com> * Update src/diffusers/hooks/hooks.py Co-authored-by: Aryan <contact.aryanvs@gmail.com> * Update src/diffusers/hooks/hooks.py Co-authored-by: Aryan <contact.aryanvs@gmail.com> * Update layerwise_casting.py * Apply style fixes * update --------- Signed-off-by: co63oc <co63oc@users.noreply.github.com> Co-authored-by: Aryan <contact.aryanvs@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 27 May, 2025 1 commit
-
-
Sayak Paul authored
* improve lora fusion tests * more improvements. * remove comment * update * relax tolerance. * num_fused_loras as a property Co-authored-by: BenjaminBossan <benjamin.bossan@gmail.com> * updates * update * fix * fix Co-authored-by: BenjaminBossan <benjamin.bossan@gmail.com> * Update src/diffusers/loaders/lora_base.py Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com> --------- Co-authored-by: BenjaminBossan <benjamin.bossan@gmail.com> Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
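The commit above turns `num_fused_loras` into a property tracked by the fusion machinery. A small hedged sketch of the fusion flow those tests exercise; the model and LoRA ids are placeholders:

```python
from diffusers import DiffusionPipeline

# Placeholder ids; the loaded pipeline must support LoRA (e.g. SDXL or Flux).
pipe = DiffusionPipeline.from_pretrained("<base-model>")
pipe.load_lora_weights("<lora-repo>", adapter_name="style")

# Fuse the adapter weights into the base modules, then read the counter that
# the commit exposes as a property instead of a manually tracked integer.
pipe.fuse_lora()
print(pipe.num_fused_loras)  # expected: 1 after a single fusion

pipe.unfuse_lora()
```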
-
- 22 May, 2025 1 commit
-
-
Sayak Paul authored
* fix peft delete adapters for flux. * add test * empty commit
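The fix above targets `delete_adapters` for Flux LoRAs. A minimal sketch of the call it repairs, with placeholder ids:

```python
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("<flux-checkpoint>")  # placeholder id
pipe.load_lora_weights("<lora-repo>", adapter_name="detail")  # placeholder LoRA

# Remove the named adapter from every component that received LoRA layers;
# this is the code path the commit fixes for Flux.
pipe.delete_adapters("detail")
```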
-
- 20 May, 2025 1 commit
-
-
Linoy Tsaban authored
* testing * testing * testing * testing * testing * i2v * i2v * device fix * testing * fix * fix * fix * fix * fix * Apply style fixes * empty commit --------- Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 19 May, 2025 4 commits
-
-
Dhruv Nair authored
update
-
Sayak Paul authored
* start supporting kijai wan lora. * diff_b keys. * Apply suggestions from code review Co-authored-by: Aryan <aryan@huggingface.co> * merge ready --------- Co-authored-by: Aryan <aryan@huggingface.co>
-
Linoy Tsaban authored
* support non-diffusers loras for ltxv * Update src/diffusers/loaders/lora_conversion_utils.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * Update src/diffusers/loaders/lora_pipeline.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * Apply style fixes * empty commit --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
apolinário authored
-
- 15 May, 2025 1 commit
-
-
Dhruv Nair authored
* update * update * update * update * update * update * update
-
- 13 May, 2025 2 commits
-
-
Linoy Tsaban authored
init
-
johannaSommer authored
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
- 09 May, 2025 1 commit
-
-
Sayak Paul authored
* support non-diffusers hidream loras * make fix-copies
-
- 06 May, 2025 2 commits
-
-
Valeriy Selitskiy authored
[lora_conversion] Enhance key handling for OneTrainer components in LORA conversion utility (#11441) (#11487) * [lora_conversion] Enhance key handling for OneTrainer components in LORA conversion utility (#11441) * Update src/diffusers/loaders/lora_conversion_utils.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* use removeprefix to preserve sanity. * f-string.
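The commit above swaps manual slicing for `str.removeprefix` (Python 3.9+). A tiny illustration of why that is safer; the key names are made up:

```python
prefix = "transformer."
key = "transformer.single_blocks.0.attn.to_q.lora_A.weight"

# Manual slicing drops the first len(prefix) characters unconditionally;
# removeprefix only strips the prefix when it is actually present.
sliced = key[len(prefix):]          # ok here, prefix is present
stripped = key.removeprefix(prefix)  # same result, but safe in general

other = "text_encoder.layer.0.weight"
print(other[len(prefix):])         # garbles a key that never had the prefix
print(other.removeprefix(prefix))  # unchanged, as intended
```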
-
- 01 May, 2025 1 commit
-
-
co63oc authored
* Fix typos in docs and comments * Apply style fixes --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 28 Apr, 2025 1 commit
-
-
Yao Matrix authored
* enable gguf test cases on XPU Signed-off-by: YAO Matrix <matrix.yao@intel.com> * make SD35LargeGGUFSingleFileTests::test_pipeline_inference pass Signed-off-by: root <root@a4bf01945cfe.jf.intel.com> * make FluxControlLoRAGGUFTests::test_lora_loading pass Signed-off-by: Yao Matrix <matrix.yao@intel.com> * polish code Signed-off-by: Yao Matrix <matrix.yao@intel.com> * Apply style fixes --------- Signed-off-by: YAO Matrix <matrix.yao@intel.com> Signed-off-by: root <root@a4bf01945cfe.jf.intel.com> Signed-off-by: Yao Matrix <matrix.yao@intel.com> Co-authored-by: root <root@a4bf01945cfe.jf.intel.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 23 Apr, 2025 1 commit
-
-
Teriks authored
* Kolors additional pipelines, community contrib --------- Co-authored-by: Teriks <Teriks@users.noreply.github.com> Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
- 22 Apr, 2025 1 commit
-
-
Linoy Tsaban authored
* initial commit * initial commit * initial commit * initial commit * initial commit * initial commit * Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com> * move prompt embeds, pooled embeds outside * Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by: hlky <hlky@hlky.ac> * Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by: hlky <hlky@hlky.ac> * fix import * fix import and tokenizer 4, text encoder 4 loading * te * prompt embeds * fix naming * shapes * initial commit to add HiDreamImageLoraLoaderMixin * fix init * add tests * loader * fix model input * add code example to readme * fix default max length of text encoders * prints * nullify training cond in unpatchify for temp fix to incompatible shaping of transformer output during training * smol fix * unpatchify * unpatchify * fix validation * flip pred and loss * fix shift!!! * revert unpatchify changes (for now) * smol fix * Apply style fixes * workaround moe training * workaround moe training * remove prints * to reduce some memory, keep vae in `weight_dtype` same as we have for flux (as it's the same vae) https://github.com/huggingface/diffusers/blob/bbd0c161b55ba2234304f1e6325832dd69c60565/examples/dreambooth/train_dreambooth_lora_flux.py#L1207 * refactor to align with HiDream refactor * refactor to align with HiDream refactor * refactor to align with HiDream refactor * add support for cpu offloading of text encoders * Apply style fixes * adjust lr and rank for train example * fix copies * Apply style fixes * update README * update README * update README * fix license * keep prompt2,3,4 as None in validation * remove reverse ode comment * Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * vae offload change * fix text encoder offloading * Apply style fixes * cleaner to_kwargs * fix module name in copied from * add requirements * fix offloading * fix offloading * fix offloading * update transformers version in reqs * try AutoTokenizer * try AutoTokenizer * Apply style fixes * empty commit * Delete tests/lora/test_lora_layers_hidream.py * change tokenizer_4 to load with AutoTokenizer as well * make text_encoder_four and tokenizer_four configurable * save model card * save model card * revert T5 * fix test * remove non diffusers lumina2 conversion --------- Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com> Co-authored-by: hlky <hlky@hlky.ac> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 17 Apr, 2025 1 commit
-
-
Sayak Paul authored
* propagate hotswap to other load_lora_weights() methods. * simplify documentations. * updates * propagate to load_lora_into_text_encoder. * empty commit
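The commit above propagates the `hotswap` argument through the remaining `load_lora_weights()` methods and into `load_lora_into_text_encoder`. A hedged usage sketch; the model and LoRA ids are placeholders:

```python
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("<base-model>")  # placeholder id

# First adapter is loaded normally under a name.
pipe.load_lora_weights("<lora-a>", adapter_name="style")

# With hotswap=True the second LoRA replaces the weights of the existing
# adapter in place, which avoids recompilation when the model is torch.compile'd.
pipe.load_lora_weights("<lora-b>", adapter_name="style", hotswap=True)
```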
-
- 16 Apr, 2025 2 commits
-
-
nPeppon authored
-
Sayak Paul authored
* enable telemetry for single file loading when using GGUF. * quality
-