- 01 Jul, 2025 2 commits
-
Mikko Tukiainen authored
* use real instead of complex tensors in Wan2.1 RoPE
* remove the redundant type conversion
* unpack rotary_emb
* register rotary embedding frequencies as non-persistent buffers
* Apply style fixes
---------
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
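The last two bullets describe a standard PyTorch pattern; a minimal sketch, with illustrative names rather than the actual Wan2.1 module:

```python
import torch
import torch.nn as nn

class RotaryEmbedding(nn.Module):
    # Sketch only: precompute real-valued cos/sin frequency tables (no
    # complex tensors) and register them as non-persistent buffers, so
    # they follow .to(device) but stay out of state_dict()/checkpoints.
    def __init__(self, dim: int, max_seq_len: int = 1024, theta: float = 10000.0):
        super().__init__()
        freqs = 1.0 / (theta ** (torch.arange(0, dim, 2).float() / dim))
        angles = torch.outer(torch.arange(max_seq_len).float(), freqs)
        self.register_buffer("freqs_cos", angles.cos(), persistent=False)
        self.register_buffer("freqs_sin", angles.sin(), persistent=False)

    def forward(self, seq_len: int):
        return self.freqs_cos[:seq_len], self.freqs_sin[:seq_len]
```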
-
Aryan authored
* update
* update
* update docs
-
- 30 Jun, 2025 3 commits
-
Aryan authored
remove print
-
Benjamin Bossan authored
* ENH Improve speed of expanding LoRA scales

  Resolves #11816

  The following call proved to be a bottleneck when setting a lot of LoRA adapters in diffusers:
  https://github.com/huggingface/diffusers/blob/cdaf84a708eadf17d731657f4be3fa39d09a12c0/src/diffusers/loaders/peft.py#L482
  This is because we would repeatedly call unet.state_dict(), even though in the standard case, it is not necessary:
  https://github.com/huggingface/diffusers/blob/cdaf84a708eadf17d731657f4be3fa39d09a12c0/src/diffusers/loaders/unet_loader_utils.py#L55
  This PR fixes this by deferring this call, so that it is only run when it's necessary, not earlier.
* Small fix
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
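The fix is the usual defer-and-cache trick; a rough sketch under assumed names, not the actual diffusers code:

```python
from functools import lru_cache

def expand_lora_scales(unet, scales):
    # Wrap the expensive call in a cached zero-arg callable: it runs at
    # most once, and only if some branch actually needs it.
    get_state_dict = lru_cache(maxsize=None)(unet.state_dict)
    resolved = {}
    for adapter_name, scale in scales.items():
        if isinstance(scale, dict):
            # Uncommon case: per-block scales are matched against real
            # parameter names, so only this branch pays for state_dict().
            known_keys = set(get_state_dict().keys())
            resolved[adapter_name] = {k: v for k, v in scale.items() if k in known_keys}
        else:
            # Standard case: a plain float never touches the state dict.
            resolved[adapter_name] = float(scale)
    return resolved
```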
-
Sayak Paul authored
* feat: use exclude_modules in LoraConfig.
* version-guard.
* tests and version guard.
* remove print.
* describe the test
* more detailed warning message + shift to debug
* update
* update
* update
* remove test
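`exclude_modules` is a field on `peft`'s `LoraConfig` in recent releases (hence the version guard above); a small usage sketch with illustrative module names:

```python
from peft import LoraConfig

config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
    # Skip layers that would otherwise match target_modules.
    exclude_modules=["single_transformer_blocks.0.attn.to_q"],
)
```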
-
- 28 Jun, 2025 1 commit
-
Sayak Paul authored
* fix: lora unloading behaviour
* fix
* update
-
- 27 Jun, 2025 2 commits
-
Aryan authored
* update
* add test
* address review comments
* update
* fixes
* change decorator order to fix tests
* try fix
* fight tests
-
Sayak Paul authored
-
- 26 Jun, 2025 6 commits
-
Aryan authored
fix
-
Sayak Paul authored
* support flux kontext
* make fix-copies
* add example
* add tests
* update docs
* update
* add note on integrity checker
* initial commit
* initial commit
* add readme section and fixes in the training script.
* add test
* rectify ckpt_id
* fix ckpt
* fixes
* change id
* update
* Update examples/dreambooth/train_dreambooth_lora_flux_kontext.py
  Co-authored-by: Aryan <aryan@huggingface.co>
* Update examples/dreambooth/README_flux.md
---------
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: linoytsaban <linoy@huggingface.co>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
Aryan authored
* support flux kontext
* make fix-copies
* add example
* add tests
* update docs
* update
* add note on integrity checker
* make fix-copies issue
* add copied froms
* make style
* update repository ids
* more copied froms
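For orientation, a usage sketch; the `FluxKontextPipeline` class and the `black-forest-labs/FLUX.1-Kontext-dev` checkpoint id are assumptions based on this work, so check the docs added above for the exact API:

```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Kontext is an instruction-based image editing model: it takes an input
# image plus a prompt describing the desired edit.
image = load_image("input.png")
edited = pipe(image=image, prompt="make the sky a vivid sunset").images[0]
edited.save("output.png")
```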
-
Animesh Jain authored
* [rfc][compile] compile method for DiffusionPipeline
* Apply suggestions from code review
  Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Apply style fixes
* Update docs/source/en/optimization/fp16.md
* check
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
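The RFC wraps the pattern below; this is a sketch of the underlying idea rather than the new method's actual signature:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Compile the compute-heavy component(s) in place; a pipeline-level
# compile method essentially automates this over the pipeline's modules.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
image = pipe("an astronaut riding a horse").images[0]
```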
-
Dhruv Nair authored
* update
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* post release v0.34.0
* code quality
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 25 Jun, 2025 1 commit
-
Sayak Paul authored
-
- 24 Jun, 2025 5 commits
-
Sayak Paul authored
-
Aryan authored
* update
* update
* update
-
Sayak Paul authored
* raise as early as possible in group offloading
* remove check from ModuleGroup
-
YiYi Xu authored
up
-
Sayak Paul authored
* minor cleanups in the lora docs.
* Apply suggestions from code review
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* format docs
* fix copies
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
-
- 23 Jun, 2025 2 commits
-
Yuanchen Guo authored
-
Yao Matrix authored
enable cpu offloading of new pipelines on XPU & use device-agnostic empty to make pipelines work on XPU (#11671)
* commit 1
* patch 2
* Update pipeline_pag_sana.py
* Update pipeline_sana.py
* Update pipeline_sana_controlnet.py
* Update pipeline_sana_sprint_img2img.py
* Update pipeline_sana_sprint.py
* fix style
* fix fat-thumb while merge conflict
* fix ci issues
---------
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
Co-authored-by: Ilyas Moutawwakil <57442720+IlyasMoutawwakil@users.noreply.github.com>
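Assuming "device-agnostic empty" refers to emptying the accelerator cache, a sketch of what such a helper can look like (diffusers ships its own internal utility; this is illustrative):

```python
import torch

def empty_device_cache(device_type: str) -> None:
    # Release cached allocator memory on whichever accelerator is in use.
    if device_type == "cuda" and torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif device_type == "xpu" and hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()
    elif device_type == "mps" and torch.backends.mps.is_available():
        torch.mps.empty_cache()
```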
-
- 21 Jun, 2025 1 commit
-
Tolga Cangöz authored
Fix dimensionality in `apply_rotary_emb` functions' comments.
-
- 20 Jun, 2025 2 commits
-
Dhruv Nair authored
update
-
Sayak Paul authored
* start * updates
-
- 19 Jun, 2025 6 commits
-
Dhruv Nair authored
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* start implementing disk offloading in group.
* delete diff file.
* updates.patch
* offload_to_disk_path
* check if safetensors already exist.
* add test and clarify.
* updates
* update todos.
* update more docs.
* update docs
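A sketch of the `offload_to_disk_path` idea with hypothetical helper names (not the actual group-offloading code): park a group's tensors in a safetensors file and reload them just before the group runs.

```python
import os
from safetensors.torch import load_file, save_file

def offload_group_to_disk(tensors: dict, offload_to_disk_path: str, group_id: str) -> str:
    path = os.path.join(offload_to_disk_path, f"{group_id}.safetensors")
    # Reuse an existing file instead of re-serializing ("check if
    # safetensors already exist" above).
    if not os.path.isfile(path):
        save_file({k: v.contiguous().cpu() for k, v in tensors.items()}, path)
    return path

def onload_group_from_disk(path: str, device: str) -> dict:
    # Stream the tensors back onto the compute device on demand.
    return {k: v.to(device) for k, v in load_file(path).items()}
```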
-
Sayak Paul authored
* factor out stuff from load_lora_adapter().
* simplifying text encoder lora loading.
* fix peft.py
* fix logging locations.
* formatting
* fix
* update
* update
* update
-
Aryan authored
update
-
Aryan authored
update
-
Sayak Paul authored
add is_compileable property to quantizers.
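Presumably a capability flag; a guess at the shape (assumed, not the actual diffusers base class):

```python
class ExampleQuantizer:
    @property
    def is_compileable(self) -> bool:
        # Whether models quantized by this backend are safe to run under
        # torch.compile; conservative default.
        return False
```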
-
- 18 Jun, 2025 3 commits
-
Dhruv Nair authored
* update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update
-
Sayak Paul authored
change to 2025 licensing for remaining
-
Saurabh Misra authored
* ⚡️ Speed up method `AutoencoderKLWan.clear_cache` by 886%

  **Key optimizations:**
  - Compute the number of `WanCausalConv3d` modules in each model (`encoder`/`decoder`) **only once during initialization** and store it in `self._cached_conv_counts`. This removes the repeated module-tree traversals at every `clear_cache` call, which profiling showed to be the main bottleneck.
  - The internal helper `_count_conv3d_fast` is optimized via a generator expression with `sum` for efficiency.

  All comments from the original code are preserved, except for updated or removed local docstrings/comments relevant to the changed lines. Function signatures and outputs remain unchanged.
* Apply style fixes
* Apply suggestions from code review
  Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Apply style fixes
---------
Co-authored-by: codeflash-ai[bot] <148906541+codeflash-ai[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: Aseem Saxena <aseem.bits@gmail.com>
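The pattern, in a simplified sketch (see the real implementation in the Wan autoencoder; names abbreviated here):

```python
import torch.nn as nn

class WanCausalConv3d(nn.Conv3d):
    # Stand-in for the real causal conv; only its type matters here.
    pass

def _count_conv3d_fast(model: nn.Module) -> int:
    # Generator expression + sum: one traversal, no intermediate list.
    return sum(isinstance(m, WanCausalConv3d) for m in model.modules())

class AutoencoderSketch(nn.Module):
    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder, self.decoder = encoder, decoder
        # Counted once at init instead of on every clear_cache() call.
        self._cached_conv_counts = {
            "encoder": _count_conv3d_fast(self.encoder),
            "decoder": _count_conv3d_fast(self.decoder),
        }

    def clear_cache(self):
        # Formerly re-traversed the module tree here; now a dict lookup.
        conv_num = self._cached_conv_counts["decoder"]
        self._conv_idx = [0]
        self._feat_map = [None] * conv_num
```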
-
- 17 Jun, 2025 1 commit
-
Aryan authored
update
-
- 16 Jun, 2025 2 commits
-
Sayak Paul authored
* show how metadata stuff should be incorporated in training scripts.
* typing
* fix
---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
Sayak Paul authored
* fix flux lora loader when return_metadata is true for non-diffusers
* remove annotation
-
- 14 Jun, 2025 1 commit
-
Edna authored
* working state from hameerabbasi and iddl
* working state from hameerabbasi and iddl (transformer)
* working state (normalization)
* working state (embeddings)
* add chroma loader
* add chroma to mappings
* add chroma to transformer init
* take out variant stuff
* get decently far in changing variant stuff
* add chroma init
* make chroma output class
* add chroma transformer to dummy tp
* add chroma to init
* add chroma to init
* fix single file
* update
* update
* add chroma to auto pipeline
* add chroma to pipeline init
* change to chroma transformer
* take out variant from blocks
* swap embedder location
* remove prompt_2
* work on swapping text encoders
* remove mask function
* dont modify mask (for now)
* wrap attn mask
* no attn mask (can't get it to work)
* remove pooled prompt embeds
* change to my own unpooled embedder
* fix load
* take pooled projections out of transformer
* ensure correct dtype for chroma embeddings
* update
* use dn6 attn mask + fix true_cfg_scale
* use chroma pipeline output
* use DN6 embeddings
* remove guidance
* remove guidance embed (pipeline)
* remove guidance from embeddings
* don't return length
* dont change dtype
* remove unused stuff, fix up docs
* add chroma autodoc
* add .md (oops)
* initial chroma docs
* undo don't change dtype
* undo arxiv change unsure why that happened
* fix hf papers regression in more places
* Update docs/source/en/api/pipelines/chroma.md
  Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* do_cfg -> self.do_classifier_free_guidance
* Update docs/source/en/api/models/chroma_transformer.md
  Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Update chroma.md
* Move chroma layers into transformer
* Remove pruned AdaLayerNorms
* Add chroma fast tests
* (untested) batch cond and uncond
* Add # Copied from for shift
* Update # Copied from statements
* update norm imports
* Revert cond + uncond batching
* Add transformer tests
* move chroma test (oops)
* chroma init
* fix chroma pipeline fast tests
* Update src/diffusers/models/transformers/transformer_chroma.py
  Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Move Approximator and Embeddings
* Fix auto pipeline + make style, quality
* make style
* Apply style fixes
* switch to new input ids
* fix # Copied from error
* remove # Copied from on protected members
* try to fix import
* fix import
* make fix-copies
* revert style fix
* update chroma transformer params
* update chroma transformer approximator init params
* update to pad tokens
* fix batch inference
* Make more pipeline tests work
* Make most transformer tests work
* fix docs
* make style, make quality
* skip batch tests
* fix test skipping
* fix test skipping again
* fix for tests
* Fix all pipeline test
* update
* push local changes, fix docs
* add encoder test, remove pooled dim
* default proj dim
* fix tests
* fix equal size list input
* update
* push local changes, fix docs
* add encoder test, remove pooled dim
* default proj dim
* fix tests
* fix equal size list input
* Revert "fix equal size list input"
  This reverts commit 3fe4ad67d58d83715bc238f8654f5e90bfc5653c.
* update
* update
* update
* update
* update
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 13 Jun, 2025 2 commits
-
Aryan authored
* support text-to-image
* update example
* make fix-copies
* support use_flow_sigmas in EDM scheduler instead of maintaining a cosmos-specific scheduler
* support video-to-world
* update
* rename text2image pipeline
* make fix-copies
* add t2i test
* add test for v2w pipeline
* support edm dpmsolver multistep
* update
* update
* update
* update tests
* fix tests
* safety checker
* make conversion script work without guardrail
-
Sayak Paul authored
* feat: parse metadata from lora state dicts.
* tests
* fix tests
* key renaming
* fix
* smol update
* smol updates
* load metadata.
* automatically save metadata in save_lora_adapter.
* propagate changes.
* changes
* add test to models too.
* tighter tests.
* updates
* fixes
* rename tests.
* sorted.
* Update src/diffusers/loaders/lora_base.py
  Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* review suggestions.
* removeprefix.
* propagate changes.
* fix-copies
* sd
* docs.
* fixes
* get review ready.
* one more test to catch error.
* change to a different approach.
* fix-copies.
* todo
* sd3
* update
* revert changes in get_peft_kwargs.
* update
* fixes
* fixes
* simplify _load_sft_state_dict_metadata
* update
* style fix
* update
* update
* update
* empty commit
* _pack_dict_with_prefix
* update
* TODO 1.
* todo: 2.
* todo: 3.
* update
* update
* Apply suggestions from code review
  Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* reraise.
* move argument.
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
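The mechanics rest on safetensors' string-to-string metadata; a sketch of the idea with hypothetical helper names (not the actual diffusers functions):

```python
import json
from safetensors import safe_open
from safetensors.torch import save_file

def save_lora_with_metadata(state_dict, path, lora_config: dict):
    # safetensors metadata values must be strings, so JSON-encode the config.
    save_file(state_dict, path, metadata={"lora_config": json.dumps(lora_config)})

def load_lora_metadata(path) -> dict:
    with safe_open(path, framework="pt") as f:
        metadata = f.metadata() or {}
    raw = metadata.get("lora_config")
    return json.loads(raw) if raw is not None else {}
```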
-