- 13 Feb, 2025 1 commit
Aryan authored
* disable peft input autocast
* use new peft method name; only disable peft input autocast if submodule layerwise casting active
* add test; reference PeftInputAutocastDisableHook in peft docs
* add load_lora_weights test
* casted -> cast
* Update tests/lora/utils.py
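For context, a minimal sketch (not the commit's own code) of the combination this fixes: a LoRA loaded into a transformer with layerwise casting enabled. The LoRA path is illustrative.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
# Store transformer weights in fp8 while computing in bf16.
pipe.transformer.enable_layerwise_casting(
    storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16
)
# With this change, loading a LoRA also disables PEFT's input autocast
# on the layerwise-cast submodules (the PeftInputAutocastDisableHook).
pipe.load_lora_weights("path/to/lora.safetensors")  # illustrative path
```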
- 22 Jan, 2025 1 commit
Aryan authored
* update
* update
* make style
* remove dynamo disable
* add coauthor Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>
* update
* update
* update
* update mixin
* add some basic tests
* update
* update
* non_blocking
* improvements
* update
* norm.* -> norm
* apply suggestions from review
* add example
* update hook implementation to the latest changes from pyramid attention broadcast
* deinitialize should raise an error
* update doc page
* Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* update docs
* update
* refactor
* fix _always_upcast_modules for asym ae and vq_model
* fix lumina embedding forward to not depend on weight dtype
* refactor tests
* add simple lora inference tests
* _always_upcast_modules -> _precision_sensitive_module_patterns
* remove todo comments about review; revert changes to self.dtype in unets because .dtype on ModelMixin should be able to handle fp8 weight case
* check layer dtypes in lora test
* fix UNet1DModelTests::test_layerwise_upcasting_inference
* _precision_sensitive_module_patterns -> _skip_layerwise_casting_patterns based on feedback
* skip test in NCSNppModelTests
* skip tests for AutoencoderTinyTests
* skip tests for AutoencoderOobleckTests
* skip tests for UNet1DModelTests - unsupported pytorch operations
* layerwise_upcasting -> layerwise_casting
* skip tests for UNetRLModelTests; needs next pytorch release for currently unimplemented operation support
* add layerwise fp8 pipeline test
* use xfail
* Apply suggestions from code review Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* add assertion with fp32 comparison; add tolerance to fp8-fp32 vs fp32-fp32 comparison (required for a few models' tests to pass)
* add note about memory consumption on tesla CI runner for failing test

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
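This is the commit that introduced the layerwise casting mixin itself. A hedged sketch of the resulting model-level API (model id illustrative; keyword names match the mixin added here):

```python
import torch
from diffusers import CogVideoXTransformer3DModel

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
)
# Weights are stored in fp8 and upcast to bf16 just-in-time per layer;
# modules matching _skip_layerwise_casting_patterns (e.g. norms) are left alone.
transformer.enable_layerwise_casting(
    storage_dtype=torch.float8_e4m3fn,
    compute_dtype=torch.bfloat16,
    non_blocking=True,
)
```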
- 10 Jan, 2025 2 commits
Sayak Paul authored
* print
* remove print.
* print
* update slice.
* empty
Sayak Paul authored
* allow big lora tests to run on the CI.
* print
* print.
* print
* print
* print
* print
* more
* print
* remove print.
* remove print
* directly place on cuda.
* remove pipeline.
* remove
* fix
* fix
* spaces
* quality
* updates
* directly place flux controlnet pipeline on cuda.
* torch_device instead of cuda.
* style
* device placement.
* fixes
* add big gpu marker for mochi; rename test correctly
* address feedback
* fix

Co-authored-by: Aryan <aryan@huggingface.co>
- 07 Jan, 2025 1 commit
Aryan authored
* update
* fix make copies
* update
* add relevant markers to the integration test suite.
* add copied.
* fix-copies
* temporarily add print.
* directly place on CUDA as CPU isn't that big on the CI.
* fixes to fuse_lora, aryan was right.
* fixes

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
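The fuse_lora fixes above concern the fuse/unfuse path; a minimal sketch of that API (checkpoint paths illustrative):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("path/to/flux-lora")  # illustrative path
# Merge LoRA weights into the base layers for faster inference ...
pipe.fuse_lora(lora_scale=1.0)
# ... and restore the original weights afterwards.
pipe.unfuse_lora()
```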
- 06 Jan, 2025 2 commits
Sayak Paul authored
* fix: lora unloading when using expanded Flux LoRAs.
* fix argument name. Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
* docs.

Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
Sayak Paul authored
add slow and nightly markers to sd3 lora integration.
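A hedged sketch of what those markers look like in the diffusers test suite (class and test names illustrative):

```python
import unittest

from diffusers.utils.testing_utils import nightly, slow


@slow
@nightly
class SD3LoRAIntegrationTests(unittest.TestCase):  # illustrative name
    def test_sd3_lora_inference(self):
        ...  # runs only when slow/nightly test runs are enabled
```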
- 02 Jan, 2025 1 commit
maxs-kan authored
* check for base_layer key in transformer state dict
* test_lora_expansion_works_for_absent_keys
* check
* Update tests/lora/test_lora_layers_flux.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* check
* test_lora_expansion_works_for_absent_keys/test_lora_expansion_works_for_extra_keys
* absent->extra

Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 25 Dec, 2024 1 commit
Sayak Paul authored
* feat: support unload_lora_weights() for Flux Control.
* tighten test
* minor
* updates
* meta device fixes.
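A minimal sketch of the now-supported round trip, assuming a Flux Control pipeline (LoRA path illustrative):

```python
import torch
from diffusers import FluxControlPipeline

pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Depth-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("path/to/control-lora")  # illustrative path
# Unloading now also restores the transformer when the LoRA expanded
# its input channels (the "expanded Flux LoRA" case).
pipe.unload_lora_weights()
```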
- 23 Dec, 2024 5 commits
Aryan authored
* update
* make style
* update
* update
* update
* make style
* single file related changes
* update
* fix
* update single file urls and docs
* update
* fix
Sayak Paul authored
* fixes to tests
* fixture
* fixes
Sayak Paul authored
updates
Sayak Paul authored
* misc lora test improvements.
* updates
* fixes to tests
Sayak Paul authored
* sana lora training tests and misc.
* remove push to hub
* Update examples/dreambooth/train_dreambooth_lora_sana.py Co-authored-by: Aryan <aryan@huggingface.co>

Co-authored-by: Aryan <aryan@huggingface.co>
- 20 Dec, 2024 2 commits
Sayak Paul authored
add integration tests for lora expansion stuff in Flux.
Sayak Paul authored
* lora expansion with dummy zeros.
* updates
* fix working 🥳
* working.
* use torch.device meta for state dict expansion.
* tests Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
* fixes
* fixes
* switch to debug
* fix
* Apply suggestions from code review Co-authored-by: Aryan <aryan@huggingface.co>
* fix stuff
* docs

Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
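A hedged sketch of the underlying idea, not the commit's exact code: a LoRA down-projection is zero-padded to match an expanded base layer, and the meta device handles shape bookkeeping without allocating memory.

```python
import torch

# Expand a rank-16 LoRA down-projection from 64 to 128 input features
# by padding with zeros; the zero columns leave the new channels untouched.
lora_A = torch.randn(16, 64)
expanded = torch.zeros(16, 128, dtype=lora_A.dtype)
expanded[:, :64] = lora_A

# The meta device gives shape/dtype bookkeeping with no real allocation,
# which matters when expanding large state dicts.
with torch.device("meta"):
    probe = torch.nn.Linear(128, 16)
print(probe.weight.shape)  # torch.Size([16, 128]), with no memory backing it
```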
- 19 Dec, 2024 2 commits
Aryan authored
fix
Shenghai Yuan authored
* 1217
* 1217
* 1217
* update
* reverse
* add test
* update test
* make style
* update
* make style

Co-authored-by: Aryan <aryan@huggingface.co>
- 18 Dec, 2024 2 commits
Aryan authored
remove nullop imports
Sayak Paul authored
* feat: lora support for SANA.
* make fix-copies
* rename test class.
* attention_kwargs -> cross_attention_kwargs.
* Revert "attention_kwargs -> cross_attention_kwargs." This reverts commit 23433bf9bccc12e0f2f55df26bae58a894e8b43b.
* exhaust 119 max line limit
* sana lora fine-tuning script.
* readme
* add a note about the supported models.
* Apply suggestions from code review Co-authored-by: Aryan <aryan@huggingface.co>
* style
* docs for attention_kwargs.
* remove lora_scale from pag pipeline.
* copy fix

Co-authored-by: Aryan <aryan@huggingface.co>
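A hedged sketch of the resulting usage (model and LoRA ids illustrative; note the revert above means SANA keeps `attention_kwargs` rather than `cross_attention_kwargs`):

```python
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",  # illustrative id
    torch_dtype=torch.float16,
)
pipe.load_lora_weights("path/to/sana-lora")  # illustrative path
image = pipe(
    "a photo of sks dog",
    # "scale" adjusts the LoRA strength at inference time
    attention_kwargs={"scale": 0.8},
).images[0]
```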
- 17 Dec, 2024 1 commit
Aryan authored
* add lora support for ltx
* add tests
* fix copied from comments
* update

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 15 Dec, 2024 1 commit
Aryan authored
* add test for expanding lora and normal lora error
* Update tests/lora/test_lora_layers_flux.py
* fix things.
* Update src/diffusers/loaders/peft.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 12 Dec, 2024 1 commit
Sayak Paul authored
* add a test to ensure set_adapters() and attn kwargs outs match
* remove print
* fix
* Apply suggestions from code review Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* assertFalse.

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
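The equivalence being tested, sketched in user terms (model, LoRA path, and adapter name illustrative): scaling via set_adapters() should match passing the scale through the attention kwargs.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("path/to/lora", adapter_name="my-lora")  # illustrative

# Scaling through set_adapters() ...
pipe.set_adapters(["my-lora"], adapter_weights=[0.5])
out_a = pipe("a cat", generator=torch.manual_seed(0)).images[0]

# ... should match scaling through the attention kwargs
# (joint_attention_kwargs for Flux; the kwarg name varies per pipeline).
pipe.set_adapters(["my-lora"], adapter_weights=[1.0])
out_b = pipe(
    "a cat",
    joint_attention_kwargs={"scale": 0.5},
    generator=torch.manual_seed(0),
).images[0]
```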
- 10 Dec, 2024 1 commit
Aryan authored
* update

Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 05 Dec, 2024 1 commit
Sayak Paul authored
* fix condition argument in xfail.
* revert init changes.
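For reference, a hedged sketch of the pytest pattern being fixed (condition, reason, and test name illustrative):

```python
import pytest
import torch
from packaging import version


# With a `condition`, the test is only expected to fail in that
# environment; `strict=False` tolerates an unexpected pass.
@pytest.mark.xfail(
    condition=version.parse(torch.__version__) >= version.parse("2.5.1"),
    reason="illustrative: numerics differ on this torch version",
    strict=False,
)
def test_lora_numerics():
    ...
```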
- 22 Nov, 2024 1 commit
Sayak Paul authored
* skip nan lora tests on PyTorch 2.5.1 CPU.
* cog
* use xfail
* correct xfail
* add condition
* tests
- 20 Nov, 2024 2 commits
raulmosa authored
* Update single-block handling in _convert_xlabs_flux_lora_to_diffusers to fix a bug when updating keys and old_state_dict

Co-authored-by: raul_ar <raul.moreno.salinas@autoretouch.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Sayak Paul authored
* feat: add lora support to Mochi-1.
- 19 Nov, 2024 1 commit
Sayak Paul authored
* feat: save_lora_adapter.
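A hedged sketch of the model-level API this adds (paths illustrative): serializing an adapter straight from the model, as the counterpart to load_lora_adapter() on ModelMixin.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("path/to/lora")  # illustrative path
# Save just the adapter weights from the underlying transformer.
pipe.transformer.save_lora_adapter("my-lora-out")
```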
- 05 Nov, 2024 1 commit
SahilCarterr authored
* fix test
* fix test assertion
* fix format
* Update test_lora_layers_sd3.py
- 02 Nov, 2024 1 commit
Sayak Paul authored
* add first draft.
* fix
* updates.
* updates.
* updates
* updates
* updates.
* fix-copies
* lora constants.
* add tests
* Apply suggestions from code review Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* docstrings.

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
- 24 Oct, 2024 1 commit
Sayak Paul authored
* move lora integration tests to nightly.
* remove slow marker in the workflow where not needed.
- 16 Oct, 2024 1 commit
Sayak Paul authored
* log a warning when there are missing keys in the LoRA loading.
* handle missing keys and unexpected keys better.
* add tests
* fix-copies.
* updates
* tests
* concat warning.
* Add Differential Diffusion to Kolors (#9423)
  * Added diff diff support for kolors img2img
  * Fixed relative imports
  * Fixed relative imports
  * Added diff diff support for Kolors
  * Fixed import issues
  * Added map
  * Fixed import issues
  * Fixed naming issues
  * Added diffdiff support for Kolors img2img pipeline
  * Removed example docstrings
  * Added map input
  * Updated latents Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
  * Updated `original_with_noise` Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
  * Improved code quality Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
* FluxMultiControlNetModel (#9647)
* tests
* Update src/diffusers/loaders/lora_pipeline.py Co-authored-by: YiYi Xu <yixu310@gmail.com>
* fix

Co-authored-by: M Saqlain <118016760+saqlain2204@users.noreply.github.com>
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
- 14 Oct, 2024 1 commit
SahilCarterr authored
* add lora
- 13 Oct, 2024 1 commit
Sayak Paul authored
increase transformers version in test_low_cpu_mem_usage_with_loading
- 10 Oct, 2024 1 commit
Sayak Paul authored
fix dora test.
- 09 Oct, 2024 1 commit
Sayak Paul authored
* allow loras to be loaded with low_cpu_mem_usage.
* add flux support but note https://github.com/huggingface/diffusers/pull/9510#issuecomment-2378316687
* low_cpu_mem_usage.
* fix-copies
* fix-copies again
* tests
* _LOW_CPU_MEM_USAGE_DEFAULT_LORA
* _peft_version default.
* version checks.
* version check.
* version check.
* version check.
* require peft 0.13.1.
* explicitly specify low_cpu_mem_usage=False.
* docs.
* transformers version 4.45.2.
* update
* fix
* empty
* better name initialize_dummy_state_dict.
* doc todos.
* Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* style
* fix-copies

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
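A hedged sketch of the user-facing effect (LoRA path illustrative; requires peft >= 0.13.1 per the commit):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
# Adapter layers are created without materializing dummy weights first,
# cutting peak CPU memory while the LoRA state dict is loaded.
pipe.load_lora_weights("path/to/lora", low_cpu_mem_usage=True)  # illustrative path
```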
- 08 Oct, 2024 1 commit
Sayak Paul authored
* handle dora.
* print test
* debug
* fix
* fix-copies
* update logits
* add warning in the test.
* make is_dora check consistent.
* fix-copies
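A hedged sketch of what a consistent DoRA check can look like, assuming the common key conventions (helper name illustrative):

```python
def is_dora(state_dict: dict) -> bool:
    # Kohya/sd-scripts DoRA checkpoints carry "dora_scale" keys, while
    # peft-style ones carry "lora_magnitude_vector" entries; either
    # marks the checkpoint as DoRA rather than plain LoRA.
    return any(
        "dora_scale" in key or "lora_magnitude_vector" in key
        for key in state_dict
    )
```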
- 07 Oct, 2024 1 commit
Sayak Paul authored
add a note on the versions.
- 30 Sep, 2024 1 commit
Sayak Paul authored
* support kohya flux loras that have TEs (text encoders).