- 13 Jun, 2025 1 commit
Sayak Paul authored
* feat: parse metadata from lora state dicts.
* tests
* fix tests
* key renaming
* fix
* smol update
* smol updates
* load metadata.
* automatically save metadata in save_lora_adapter.
* propagate changes.
* changes
* add test to models too.
* tighter tests.
* updates
* fixes
* rename tests.
* sorted.
* Update src/diffusers/loaders/lora_base.py
* review suggestions.
* removeprefix.
* propagate changes.
* fix-copies
* sd
* docs.
* fixes
* get review ready.
* one more test to catch error.
* change to a different approach.
* fix-copies.
* todo
* sd3
* update
* revert changes in get_peft_kwargs.
* update
* fixes
* fixes
* simplify _load_sft_state_dict_metadata
* update
* style fix
* update
* update
* update
* empty commit
* _pack_dict_with_prefix
* update
* TODO 1.
* todo: 2.
* todo: 3.
* update
* update
* Apply suggestions from code review
* reraise.
* move argument.
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
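The `_pack_dict_with_prefix` helper named in the commit message above suggests namespacing metadata keys under a prefix so they can coexist in a flat state-dict metadata mapping. A minimal sketch of that idea (the helper name comes from the commit message; the signatures and the `lora_metadata.` prefix here are assumptions for illustration, not the diffusers implementation):

```python
def pack_dict_with_prefix(metadata: dict, prefix: str) -> dict:
    """Namespace every key of `metadata` under `prefix` so it can live
    alongside other entries in a flat metadata mapping."""
    return {f"{prefix}{key}": value for key, value in metadata.items()}


def unpack_dict_with_prefix(packed: dict, prefix: str) -> dict:
    """Recover only the entries that were packed under `prefix`."""
    return {
        key.removeprefix(prefix): value
        for key, value in packed.items()
        if key.startswith(prefix)
    }


lora_config = {"r": "16", "lora_alpha": "32"}
packed = pack_dict_with_prefix(lora_config, "lora_metadata.")
# packed == {"lora_metadata.r": "16", "lora_metadata.lora_alpha": "32"}
assert unpack_dict_with_prefix(packed, "lora_metadata.") == lora_config
```

Note the `removeprefix` bullet in the message above: `str.removeprefix` (Python 3.9+) is the natural tool for unpacking such keys.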
- 30 May, 2025 1 commit
co63oc authored
* Fix typos in strings and comments
* Update src/diffusers/hooks/hooks.py
* Update src/diffusers/hooks/hooks.py
* Update layerwise_casting.py
* Apply style fixes
* update
Signed-off-by: co63oc <co63oc@users.noreply.github.com>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 27 May, 2025 1 commit
Sayak Paul authored
* improve lora fusion tests
* more improvements.
* remove comment
* update
* relax tolerance.
* num_fused_loras as a property
* updates
* update
* fix
* fix
* Update src/diffusers/loaders/lora_base.py
Co-authored-by: BenjaminBossan <benjamin.bossan@gmail.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
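For context on what these fusion tests exercise: fusing a LoRA adapter folds its low-rank delta, `scale * B @ A`, into the base weight so inference no longer pays for the extra matmuls. A minimal numpy sketch of the arithmetic (illustrative only; the real diffusers/PEFT implementation operates on `nn.Linear` modules and handles dtypes, devices, and multiple adapters):

```python
import numpy as np

def fuse_lora(base_weight, lora_A, lora_B, scale=1.0):
    """Fold the low-rank LoRA delta (scale * B @ A) into the base weight."""
    return base_weight + scale * (lora_B @ lora_A)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
A = rng.standard_normal((2, 8))   # rank-2 down-projection
B = rng.standard_normal((8, 2))   # rank-2 up-projection

W_fused = fuse_lora(W, A, B, scale=0.5)

# Fused output equals base output plus the scaled LoRA path.
x = rng.standard_normal(8)
assert np.allclose(W_fused @ x, W @ x + 0.5 * (B @ (A @ x)))
```

The "relax tolerance" bullet above hints at why such tests compare with `allclose` rather than exact equality: fusing changes the order of floating-point operations.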
- 22 May, 2025 1 commit
Sayak Paul authored
* fix peft delete adapters for flux.
* add test
* empty commit
- 06 May, 2025 1 commit
Yao Matrix authored
* enable lora cases on XPU
* remove hunyuanvideo xpu expectation
Signed-off-by: Yao Matrix <matrix.yao@intel.com>
- 15 Apr, 2025 2 commits
Sayak Paul authored
* post release
* update
* fix deprecations
* remaining
* update
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Hameer Abbasi authored
* Add AuraFlowLoraLoaderMixin
* Add comments, remove qkv fusion
* Add Tests
* Add AuraFlowLoraLoaderMixin to documentation
* Add Suggested changes
* Change attention_kwargs->joint_attention_kwargs
* Rebasing derp.
* fix
* fix
* Quality fixes.
* make style
* `make fix-copies`
* `ruff check --fix`
* Attempt 1 to fix tests.
* Attempt 2 to fix tests.
* Attempt 3 to fix tests.
* Address review comments.
* Rebasing derp.
* Get more tests passing by copying from Flux. Address review comments.
* `joint_attention_kwargs`->`attention_kwargs`
* Add `lora_scale` property for te LoRAs.
* Make test better.
* Remove useless property.
* Skip TE-only tests for AuraFlow.
* Support LoRA for non-CLIP TEs.
* Restore LoRA tests.
* Undo adding LoRA support for non-CLIP TEs.
* Undo support for TE in AuraFlow LoRA.
* `make fix-copies`
* Sync with upstream changes.
* Remove unneeded stuff.
* Mirror `Lumina2`.
* Skip for MPS.
* Address review comments.
* Remove duplicated code.
* Remove unnecessary code.
* Remove repeated docs.
* Propagate attention.
* Fix TE target modules.
* MPS fix for LoRA tests.
* Unrelated TE LoRA tests fix.
* Fix AuraFlow LoRA tests by applying to the right denoiser layers.
* Apply style fixes
* empty commit
* Fix the repo consistency issues.
* Remove unrelated changes.
* Style.
* Fix `test_lora_fuse_nan`.
* fix quality issues.
* `pytest.xfail` -> `ValueError`.
* Add back `skip_mps`.
* Apply style fixes
* `make fix-copies`
Co-authored-by: Warlord-K <warlordk28@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: AstraliteHeart <81396681+AstraliteHeart@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 10 Apr, 2025 2 commits
Sayak Paul authored
* start cleaning up lora test utils for reusability
* update
* updates
* updates
Yao Matrix authored
* fix test_vanilla_funetuning failure on XPU and A100
* change back to 5e-2
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
- 12 Mar, 2025 1 commit
Sayak Paul authored
* move to warning.
* test related changes.
- 10 Mar, 2025 2 commits
Aryan authored
* update
* make fix-copies
* update
Sayak Paul authored
* updates
* updates
* updates
* updates
* notebooks revert
* fix-copies.
* seeing
* fix
* revert
* fixes
* fixes
* fixes
* remove print
* fix
* conflicts ii.
* updates
* fixes
* better filtering of prefix.
Co-authored-by: hlky <hlky@hlky.ac>
- 04 Mar, 2025 2 commits
Aryan authored
* update
* refactor image-to-video pipeline
* update
* fix copied from
* use FP32LayerNorm
Fanli Lin authored
* initial commit
* fix empty cache
* fix one more
* fix style
* update device functions
* update
* update
* Update src/diffusers/utils/testing_utils.py
* Update src/diffusers/utils/testing_utils.py
* Update src/diffusers/utils/testing_utils.py
* Update tests/pipelines/controlnet/test_controlnet.py
* Update src/diffusers/utils/testing_utils.py
* Update src/diffusers/utils/testing_utils.py
* Update tests/pipelines/controlnet/test_controlnet.py
* with gc.collect
* update
* make style
* check_torch_dependencies
* add mps empty cache
* add changes
* bug fix
* enable on xpu
* update more cases
* revert
* revert back
* Update test_stable_diffusion_xl.py
* Update tests/pipelines/stable_diffusion/test_stable_diffusion.py
* Update tests/pipelines/stable_diffusion/test_stable_diffusion.py
* Update tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py
* Update tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py
* Update tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py
* Apply suggestions from code review
* add test marker
Co-authored-by: hlky <hlky@hlky.ac>
- 26 Feb, 2025 1 commit
Sayak Paul authored
fix: lumina2 lora fuse_nan test
- 20 Feb, 2025 1 commit
Sayak Paul authored
* feat: lora support for Lumina2.
* fix-copies.
* updates
* updates
* docs.
* fix
* add: training script.
* tests
* updates
* updates
* major updates.
* updates
* fixes
* docs.
* updates
* updates
- 19 Feb, 2025 1 commit
Sayak Paul authored
* make set_adapters() robust on silent failures.
* fixes to tests
* flaky decorator.
* fix
* flaky to sd3.
* remove warning.
* sort
* quality
* skip test_simple_inference_with_text_denoiser_multi_adapter_block_lora
* skip testing unsupported features.
* raise warning instead of error.
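The "robust on silent failures" and "raise warning instead of error" bullets above can be illustrated with a small sketch: before activating adapters, validate the requested names against what is actually registered, and warn rather than silently ignore unknown names. Everything here (the `AdapterRegistry` class and its methods) is a toy stand-in for illustration, not the diffusers implementation:

```python
import warnings

class AdapterRegistry:
    """Toy stand-in for a model that tracks registered LoRA adapters."""

    def __init__(self):
        self._adapters = {}   # name -> adapter weights (elided here)
        self.active = []

    def add_adapter(self, name):
        self._adapters[name] = object()

    def set_adapters(self, names):
        missing = [n for n in names if n not in self._adapters]
        if missing:
            # Warn instead of silently dropping unknown adapter names.
            warnings.warn(f"Adapter(s) not found and will be skipped: {missing}")
        self.active = [n for n in names if n in self._adapters]

reg = AdapterRegistry()
reg.add_adapter("style")
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    reg.set_adapters(["style", "typo_name"])
assert reg.active == ["style"]
assert len(caught) == 1
```

The design choice mirrored here is that a misspelled adapter name still produces a visible signal without aborting inference.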
- 13 Feb, 2025 1 commit
Aryan authored
* disable peft input autocast
* use new peft method name; only disable peft input autocast if submodule layerwise casting active
* add test; reference PeftInputAutocastDisableHook in peft docs
* add load_lora_weights test
* casted -> cast
* Update tests/lora/utils.py
- 22 Jan, 2025 1 commit
Aryan authored
* update
* update
* make style
* remove dynamo disable
* add coauthor
* update
* update
* update
* update mixin
* add some basic tests
* update
* update
* non_blocking
* improvements
* update
* norm.* -> norm
* apply suggestions from review
* add example
* update hook implementation to the latest changes from pyramid attention broadcast
* deinitialize should raise an error
* update doc page
* Apply suggestions from code review
* update docs
* update
* refactor
* fix _always_upcast_modules for asym ae and vq_model
* fix lumina embedding forward to not depend on weight dtype
* refactor tests
* add simple lora inference tests
* _always_upcast_modules -> _precision_sensitive_module_patterns
* remove todo comments about review; revert changes to self.dtype in unets because .dtype on ModelMixin should be able to handle fp8 weight case
* check layer dtypes in lora test
* fix UNet1DModelTests::test_layerwise_upcasting_inference
* _precision_sensitive_module_patterns -> _skip_layerwise_casting_patterns based on feedback
* skip test in NCSNppModelTests
* skip tests for AutoencoderTinyTests
* skip tests for AutoencoderOobleckTests
* skip tests for UNet1DModelTests - unsupported pytorch operations
* layerwise_upcasting -> layerwise_casting
* skip tests for UNetRLModelTests; needs next pytorch release for currently unimplemented operation support
* add layerwise fp8 pipeline test
* use xfail
* Apply suggestions from code review
* add assertion with fp32 comparison; add tolerance to fp8-fp32 vs fp32-fp32 comparison (required for a few models' test to pass)
* add note about memory consumption on tesla CI runner for failing test
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
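The `_skip_layerwise_casting_patterns` mechanism discussed above boils down to matching module names against a skip list so precision-sensitive layers (e.g. norms, embeddings) keep their original dtype while the rest are stored in fp8. A rough sketch of that matching step, with the function name, the wildcard-wrapping convention, and the example patterns all assumed for illustration:

```python
from fnmatch import fnmatch

def should_skip_casting(module_name, skip_patterns):
    """Return True if any skip pattern matches the module name, meaning
    this module keeps its original (higher-precision) dtype."""
    return any(fnmatch(module_name, f"*{p}*") for p in skip_patterns)

skip = ["norm", "patch_embed"]
assert should_skip_casting("transformer_blocks.0.norm1", skip)
assert should_skip_casting("patch_embed.proj", skip)
assert not should_skip_casting("transformer_blocks.0.attn.to_q", skip)
```

The `norm.* -> norm` bullet above suggests the real patterns are plain substrings rather than regexes, which is what the wildcard wrapping emulates here.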
- 10 Jan, 2025 2 commits
Sayak Paul authored
* print
* remove print.
* print
* update slice.
* empty
Sayak Paul authored
* allow big lora tests to run on the CI.
* print
* print.
* print
* print
* print
* print
* more
* print
* remove print.
* remove print
* directly place on cuda.
* remove pipeline.
* remove
* fix
* fix
* spaces
* quality
* updates
* directly place flux controlnet pipeline on cuda.
* torch_device instead of cuda.
* style
* device placement.
* fixes
* add big gpu marker for mochi; rename test correctly
* address feedback
* fix
Co-authored-by: Aryan <aryan@huggingface.co>
- 07 Jan, 2025 1 commit
Aryan authored
* update
* fix make copies
* update
* add relevant markers to the integration test suite.
* add copied.
* fix-copies
* temporarily add print.
* directly place on CUDA as CPU isn't that big on the CI.
* fixes to fuse_lora, aryan was right.
* fixes
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 06 Jan, 2025 2 commits
Sayak Paul authored
* fix: lora unloading when using expanded Flux LoRAs.
* fix argument name.
* docs.
Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
Sayak Paul authored
add slow and nightly markers to sd3 lora integration.
- 02 Jan, 2025 1 commit
maxs-kan authored
* check for base_layer key in transformer state dict
* test_lora_expansion_works_for_absent_keys
* check
* Update tests/lora/test_lora_layers_flux.py
* check
* test_lora_expansion_works_for_absent_keys/test_lora_expansion_works_for_extra_keys
* absent->extra
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 25 Dec, 2024 1 commit
Sayak Paul authored
* feat: support unload_lora_weights() for Flux Control.
* tighten test
* minor
* updates
* meta device fixes.
- 23 Dec, 2024 5 commits
Aryan authored
* update
* make style
* update
* update
* update
* make style
* single file related changes
* update
* fix
* update single file urls and docs
* update
* fix
Sayak Paul authored
* fixes to tests
* fixture
* fixes
Sayak Paul authored
updates
Sayak Paul authored
* misc lora test improvements.
* updates
* fixes to tests
Sayak Paul authored
* sana lora training tests and misc.
* remove push to hub
* Update examples/dreambooth/train_dreambooth_lora_sana.py
Co-authored-by: Aryan <aryan@huggingface.co>
- 20 Dec, 2024 2 commits
Sayak Paul authored
add integration tests for lora expansion stuff in Flux.
Sayak Paul authored
* lora expansion with dummy zeros.
* updates
* fix working 🥳
* working.
* use torch.device meta for state dict expansion.
* tests
* fixes
* fixes
* switch to debug
* fix
* Apply suggestions from code review
* fix stuff
* docs
Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
- 19 Dec, 2024 2 commits
Aryan authored
fix
Shenghai Yuan authored
* 1217
* 1217
* 1217
* update
* reverse
* add test
* update test
* make style
* update
* make style
Co-authored-by: Aryan <aryan@huggingface.co>
- 18 Dec, 2024 2 commits
Aryan authored
remove nullop imports
Sayak Paul authored
* feat: lora support for SANA.
* make fix-copies
* rename test class.
* attention_kwargs -> cross_attention_kwargs.
* Revert "attention_kwargs -> cross_attention_kwargs." This reverts commit 23433bf9bccc12e0f2f55df26bae58a894e8b43b.
* exhaust 119 max line limit
* sana lora fine-tuning script.
* readme
* add a note about the supported models.
* Apply suggestions from code review
* style
* docs for attention_kwargs.
* remove lora_scale from pag pipeline.
* copy fix
Co-authored-by: Aryan <aryan@huggingface.co>
- 17 Dec, 2024 1 commit
Aryan authored
* add lora support for ltx
* add tests
* fix copied from comments
* update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 15 Dec, 2024 1 commit
Aryan authored
* add test for expanding lora and normal lora error
* Update tests/lora/test_lora_layers_flux.py
* fix things.
* Update src/diffusers/loaders/peft.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 12 Dec, 2024 1 commit
Sayak Paul authored
* add a test to ensure set_adapters() and attn kwargs outs match
* remove print
* fix
* Apply suggestions from code review
* assertFalse.
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>