- 26 Jul, 2024 1 commit

Sayak Paul authored
* introduce to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
* rewrite fuse_lora a bit.
* feedback
* copy over load_lora_into_text_encoder.
* address dhruv's feedback.
* fix-copies
* fix issubclass.
* num_fused_loras
* fix
* fix
* remove mapping
* up
* fix
* style
* fix-copies
* change to SD3TransformerLoRALoadersMixin
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* up
* handle wuerstchen
* up
* move lora to lora_pipeline.py
* up
* fix-copies
* fix documentation.
* comment set_adapters().
* fix-copies
* fix set_adapters() at the model level.
* fix?
* fix
* loraloadermixin.
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>

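In user code the clarified scope amounts to: `fuse_lora()` folds the loaded LoRA weights into the base parameters, and `unfuse_lora()` restores them. A minimal sketch of that round trip (the LoRA repo id is hypothetical):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("some-user/some-lora")  # hypothetical repo id

# Fold the LoRA deltas into the base weights: no separate LoRA layers run
# at inference time, at the cost of not being able to hot-swap adapters.
pipe.fuse_lora(lora_scale=1.0)
image = pipe("a cinematic photo of a lighthouse").images[0]

# Undo the fusion and recover the original base weights.
pipe.unfuse_lora()
```
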
- 25 Jul, 2024 2 commits

Sayak Paul authored
* introduce to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
* rewrite fuse_lora a bit.
* feedback
* copy over load_lora_into_text_encoder.
* address dhruv's feedback.
* fix-copies
* fix issubclass.
* num_fused_loras
* fix
* fix
* remove mapping
* up
* fix
* style
* fix-copies
* change to SD3TransformerLoRALoadersMixin
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* up
* handle wuerstchen
* up
* move lora to lora_pipeline.py
* up
* fix-copies
* fix documentation.
* comment set_adapters().
* fix-copies
* fix set_adapters() at the model level.
* fix?
* fix
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>

- 08 Jul, 2024 1 commit

Tolga Cangöz authored
* Remove unused line
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

- 03 Jul, 2024 2 commits

Sayak Paul authored
Revert "[LoRA] introduce `LoraBaseMixin` to promote reusability. (#8670)"

This reverts commit a2071a18.

Sayak Paul authored
[LoRA] introduce `LoraBaseMixin` to promote reusability. (#8670)
* introduce to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space

- 25 Jun, 2024 1 commit

Linoy Tsaban authored
* add clip text-encoder training
* no dora
* text encoder training fixes
* text encoder training fixes
* text encoder training fixes
* text encoder training fixes
* text encoder training fixes
* text encoder training fixes
* add text_encoder layers to save_lora
* style
* fix imports
* style
* fix text encoder
* review changes
* review changes
* review changes
* minor change
* add lora tag
* style
* add readme notes
* add tests for clip encoders
* style
* typo
* fixes
* style
* Update tests/lora/test_lora_layers_sd3.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/README_sd3.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* minor readme change
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

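For reference, a LoRA trained this way (including the CLIP text-encoder layers) loads back through the standard pipeline loader; a minimal sketch, with a hypothetical output directory name:

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")

# load_lora_weights() picks up the transformer and the text-encoder LoRA
# layers from the same checkpoint; "trained-sd3-lora" is a hypothetical dir.
pipe.load_lora_weights("trained-sd3-lora")
image = pipe("a photo of sks dog", num_inference_steps=28).images[0]
```
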
- 21 Jun, 2024 1 commit

Álvaro Somoza authored
* fix
* add check
* key present is checked before
* test case draft
* apply suggestions
* changed testing repo, back to old class
* forgot docstring
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

- 20 Jun, 2024 1 commit

Sayak Paul authored
* add support for lora fusion in sd3
* add test to ensure fused lora and effective lora produce same outputs

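The property the new test checks can be sketched as: generating with the LoRA active and generating after `fuse_lora()` should match numerically. A rough illustration (the LoRA repo id is hypothetical):

```python
import numpy as np
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("some-user/sd3-lora")  # hypothetical repo id

generator = torch.manual_seed(0)
effective = pipe("a robot gardener", generator=generator, output_type="np").images[0]

pipe.fuse_lora()
generator = torch.manual_seed(0)
fused = pipe("a robot gardener", generator=generator, output_type="np").images[0]

# Fused and unfused ("effective") LoRA should yield the same image.
assert np.allclose(effective, fused, atol=1e-3)
```
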
- 18 Jun, 2024 1 commit

Gæros authored
* [LoRA] text encoder: read the ranks for all the attn modules
* In addition to out_proj, read the ranks of adapters for q_proj, k_proj, and v_proj
* Allow missing adapters (UNet already supports this)
* ruff format loaders.lora
* [LoRA] add tests for partial text encoders LoRAs
* [LoRA] update test_simple_inference_with_partial_text_lora to be deterministic
* [LoRA] comment justifying test_simple_inference_with_partial_text_lora
* style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

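The per-module ranks can be read straight off the shapes of the LoRA down-projection weights in the state dict. A rough sketch of the idea; the helper name is made up, and it assumes peft-style `lora_A` key naming (other serialization formats use `lora.down` instead):

```python
def collect_text_encoder_ranks(state_dict):
    """Hypothetical helper: map each attn projection to its LoRA rank."""
    ranks = {}
    for key, tensor in state_dict.items():
        # e.g. "...self_attn.q_proj.lora_A.weight" -> rank = tensor.shape[0]
        if key.endswith("lora_A.weight") and any(
            proj in key for proj in ("q_proj", "k_proj", "v_proj", "out_proj")
        ):
            ranks[key.removesuffix(".lora_A.weight")] = tensor.shape[0]
    # Modules without an entry simply carry no adapter, which is now allowed
    # for the text encoder just as it already was for the UNet.
    return ranks
```
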
- 12 Jun, 2024 1 commit

Dhruv Nair authored
* up
* add sd3
* update
* update
* add tests
* fix copies
* fix docs
* update
* add dreambooth lora
* add LoRA
* update
* update
* update
* update
* import fix
* update
* Update src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* import fix 2
* update
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* update
* update
* update
* fix ckpt id
* fix more ids
* update
* missing doc
* Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* update
* fix
* update
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
* note on gated access.
* requirements
* licensing
---------
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>

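For context, the new pipeline is driven like the other text-to-image pipelines; a minimal sketch (per the gated-access note in the commit, the SD3 weights require accepting the license and logging in with `huggingface-cli login` first):

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
)
pipe.to("cuda")

image = pipe(
    "A cat holding a sign that says hello world",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sd3.png")
```
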
- 29 May, 2024 1 commit

Tolga Cangöz authored
* Fix copying mechanism typos
* fix copying mechanism
* Revert, since they are in TODO
* Fix copying mechanism

- 24 May, 2024 1 commit

Tolga Cangöz authored
* Fix typos
* Fix `pipe.enable_model_cpu_offload()` usage
* Fix cpu offloading
* Update numbers

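The usual pitfall with `pipe.enable_model_cpu_offload()` is moving the pipeline to CUDA first; a sketch of the intended usage:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Do NOT call pipe.to("cuda") here: enable_model_cpu_offload() manages device
# placement itself, moving each sub-model to the GPU only while it runs.
pipe.enable_model_cpu_offload()

image = pipe("an astronaut riding a horse").images[0]
```
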
- 13 May, 2024 1 commit

Sayak Paul authored
* check
* check 2.
* update slices

- 09 May, 2024 1 commit

Sayak Paul authored
* debugging
* save the resulting image
* check if order reversing works.
* checking values.
* up
* okay
* checking
* fix
* remove print

- 07 May, 2024 1 commit

Álvaro Somoza authored
* return layer weight if not found
* better system and test
* key example and typo

- 12 Apr, 2024 1 commit

Benjamin Bossan authored
Fix a bug that caused the call to `set_lora_device` to ignore the DoRA parameters.

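`set_lora_device()` moves the layers of the named adapters between devices, e.g. to park unused adapters on CPU; a minimal sketch (adapter name and repo id are hypothetical). After this fix, the DoRA magnitude parameters move along with the LoRA A/B matrices:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("some-user/some-lora", adapter_name="style")  # hypothetical

# Park the adapter on CPU while it is not in use, then bring it back.
pipe.set_lora_device(adapter_names=["style"], device="cpu")
pipe.set_lora_device(adapter_names=["style"], device="cuda")
```
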
- 29 Mar, 2024 2 commits

UmerHA authored
* Initial commit
* Implemented block lora
  - implemented block lora
  - updated docs
  - added tests
* Finishing up
* Reverted unrelated changes made by make style
* Fixed typo
* Fixed bug + Made text_encoder_2 scalable
* Integrated some review feedback
* Incorporated review feedback
* Fix tests
* Made every module configurable
* Adapted to new lora test structure
* Final cleanup
* Some more final fixes
  - Included examples in `using_peft_for_inference.md`
  - Added hint that only attns are scaled
  - Removed NoneTypes
  - Added test to check mismatching lens of adapter names / weights raise error
* Update using_peft_for_inference.md
* Update using_peft_for_inference.md
* Make style, quality, fix-copies
* Updated tutorial; warning if scale/adapter mismatch
* floats are forwarded as-is; changed tutorial scale
* make style, quality, fix-copies
* Fixed typo in tutorial
* Moved some warnings into `lora_loader_utils.py`
* Moved scale/lora mismatch warnings back
* Integrated final review suggestions
* Empty commit to trigger CI
* Reverted empty commit to trigger CI
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

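Block-level LoRA means one adapter can be applied with different strengths in different parts of the UNet. A sketch in the spirit of the `using_peft_for_inference.md` examples (adapter name, repo id, and scale values are illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("some-user/pixel-art-lora", adapter_name="pixel")  # hypothetical

# Only attention layers are scaled; anything left unspecified defaults to 1.0.
scales = {
    "text_encoder": 0.5,
    "text_encoder_2": 0.5,  # only for pipelines with a second text encoder
    "unet": {
        "down": 0.9,                     # one scale for the whole down-part
        "mid": 1.0,
        "up": {
            "block_0": 0.6,              # one scale for the whole block
            "block_1": [0.4, 0.8, 1.0],  # or one per transformer in the block
        },
    },
}
pipe.set_adapters("pixel", scales)
image = pipe("pixel art landscape").images[0]
```
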
Dhruv Nair authored
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

- 27 Mar, 2024 1 commit

UmerHA authored
Skipping test_lora_fuse_nan on mps

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

- 26 Mar, 2024 1 commit

Sayak Paul authored
* feat: support dora loras from community
* safe-guard dora operations under peft version.
* pop use_dora when False
* make dora lora from kohya work.
* fix: kohya conversion utils.
* add a fast test for DoRA compatibility.
* add a nightly test.

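From the user's side, a community DoRA checkpoint goes through the same entry point as a plain LoRA; a sketch (repo id is hypothetical, and a recent enough `peft` is assumed, since the commit gates DoRA operations on the installed version):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# DoRA checkpoints (including converted Kohya-format ones) load through
# load_lora_weights(); use_dora is detected from the state dict.
pipe.load_lora_weights("some-user/some-dora-checkpoint")  # hypothetical
image = pipe("a pixel art dragon").images[0]
```
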
- 25 Mar, 2024 1 commit

UmerHA authored
* Update test_lora_layers_peft.py
* Update utils.py

- 20 Mar, 2024 1 commit

Sayak Paul authored
* cleanse and refactor lora testing suite.
* more cleanup.
* make check_if_lora_correctly_set a utility function
* fix: typo
* retrigger ci
* style

- 19 Mar, 2024 1 commit

Sayak Paul authored
* debugging
* let's see the numbers
* let's see the numbers
* let's see the numbers
* restrict tolerance.
* increase inference steps.
* shallow copy of cross_attention_kwargs
* remove print

- 27 Feb, 2024 2 commits

Younes Belkada authored
* copy the state dict in load lora weights
* fixup

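The fix matters when `load_lora_weights()` is handed an in-memory state dict: loading should not mutate the caller's copy. A sketch of the guarantee (the local file name is hypothetical):

```python
import torch
from diffusers import StableDiffusionPipeline
from safetensors.torch import load_file

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

state_dict = load_file("my_lora.safetensors")  # hypothetical local checkpoint
keys_before = set(state_dict.keys())

# load_lora_weights() also accepts a dict; with this fix it operates on a
# copy, so the caller's state_dict stays intact and can be reused.
pipe.load_lora_weights(state_dict)
assert set(state_dict.keys()) == keys_before
```
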

jinghuan-Chen authored
* Make LoRACompatibleConv padding_mode work.
* Format code style.
* add fast test
* Update src/diffusers/models/lora.py
Simplify the code by patrickvonplaten.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* code refactor
* apply patrickvonplaten suggestion to simplify the code.
* rm test_lora_layers_old_backend.py and add test case in test_lora_layers_peft.py
* update test case.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>

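The underlying issue: a conv layer with a non-default `padding_mode` has to apply the padding before the convolution, so a LoRA-compatible conv cannot just call `F.conv2d` with the stored padding. A simplified sketch of the pattern (not the actual diffusers code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PaddedConv2d(nn.Conv2d):
    """Sketch: respect padding_mode explicitly before convolving."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.padding_mode != "zeros":
            # Pad with the requested mode, then convolve with padding=0, so
            # the base conv and any LoRA delta see the same padded input.
            x = F.pad(x, self._reversed_padding_repeated_twice, mode=self.padding_mode)
            return F.conv2d(x, self.weight, self.bias, self.stride,
                            (0, 0), self.dilation, self.groups)
        return super().forward(x)
```
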
- 13 Feb, 2024 1 commit

Dhruv Nair authored
update

- 09 Feb, 2024 1 commit

Sayak Paul authored
* deprecate certain lora methods from the old backend.
* uncomment necessary things.
* safe remove old lora backend 👋

- 08 Feb, 2024 1 commit

Sayak Paul authored
change to 2024

- 22 Jan, 2024 1 commit

Dhruv Nair authored
* update
* update

- 05 Jan, 2024 2 commits

Sayak Paul authored
* introduce unload_lora.
* fix-copies

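At the pipeline level the corresponding user-facing call is `unload_lora_weights()`, which strips all LoRA layers and returns to the base weights; a minimal sketch (repo id hypothetical):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("some-user/some-lora")  # hypothetical

with_lora = pipe("a watercolor fox", num_inference_steps=25).images[0]

# Remove all LoRA layers and fall back to the base weights.
pipe.unload_lora_weights()
without_lora = pipe("a watercolor fox", num_inference_steps=25).images[0]
```
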
Sayak Paul authored
* debug
* debug
* more debug
* more more debug
* remove tests for LoRAAttnProcessors.
* rename

- 04 Jan, 2024 5 commits

Sayak Paul authored
* debug
* debug test_with_different_scales_fusion_equivalence
* use the right method.
* place it right.
* let's see.
* let's see again
* alright then.
* add a comment.

sayakpaul authored

sayakpaul authored

- 03 Jan, 2024 2 commits

Sayak Paul authored
* handle rest of the stuff related to deprecated lora stuff.
* fix: copies
* don't modify the UNet in-place.
* fix: temporal autoencoder.
* manually remove lora layers.
* don't copy unet.
* alright
* remove lora attn processors from unet3d
* fix: unet3d.
* style
* Empty-Commit

Sayak Paul authored
* add: test to check if peft loras are loadable in non-peft envs.
* add torch_device appropriately.
* fix: get_dummy_inputs().
* test logits.
* rename
* debug
* debug
* fix: generator
* new assertion values after fixing the seed.
* shape
* remove print statements and settle this.
* to update values.
* change values when lora config is initialized under a fixed seed.
* update colab link
* update notebook link
* sanity restored by getting the exact same values without peft.

- 02 Jan, 2024 1 commit

Sayak Paul authored
* start deprecating loraattn.
* fix
* wrap into unet_lora_state_dict
* utilize text_encoder_lora_params
* utilize text_encoder_attn_modules
* debug
* debug
* remove print
* don't use text encoder for test_stable_diffusion_lora
* load the procs.
* set_default_attn_processor
* fix: set_default_attn_processor call.
* fix: lora_components[unet_lora_params]
* checking for 3d.
* 3d.
* more fixes.
* debug
* debug
* debug
* debug
* more debug
* more debug
* more debug
* more debug
* more debug
* more debug
* hack.
* remove comments and prep for a PR.
* appropriate set_lora_weights()
* fix
* fix: test_unload_lora_sd
* fix: test_unload_lora_sd
* use default attention processors.
* debug
* debug nan
* debug nan
* debug nan
* use NaN instead of inf
* remove comments.
* fix: test_text_encoder_lora_state_dict_unchanged
* attention processor default
* default attention processors.
* default
* style