- 06 Jun, 2023 4 commits

Jason C.H authored

Sayak Paul authored

Sayak Paul authored
* feat: add lora attention processor for PT 2.0.
* explicit context manager for SDPA.
* switch to flash attention.
* make shapes compatible to work optimally with SDPA.
* fix: circular import problem.
* explicitly specify the flash attention kernel in SDPA.
* fall back to efficient attention context manager.
* remove explicit dispatch.
* fix: removed processor.
* fix: remove Optional from type annotation.
* feat: make changes regarding LoRAAttnProcessor2_0.
* remove confusing warning.
* formatting.
* relax tolerance for PT 2.0.
* fix: loading message.
* remove unnecessary logging.
* add: entry to the docs.
* add: network_alpha argument.
* relax tolerance.
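A minimal sketch of the pattern this enables, assuming diffusers >= 0.17 on PyTorch 2.0 (the repo ID and the rank/network_alpha values are illustrative): attach SDPA-backed LoRA processors to every attention block of a UNet.

    # Sketch: build one LoRAAttnProcessor2_0 per attention block; on
    # PyTorch 2.0 it dispatches to F.scaled_dot_product_attention.
    from diffusers import UNet2DConditionModel
    from diffusers.models.attention_processor import LoRAAttnProcessor2_0

    unet = UNet2DConditionModel.from_pretrained(
        "runwayml/stable-diffusion-v1-5", subfolder="unet"  # illustrative repo
    )

    lora_attn_procs = {}
    for name in unet.attn_processors.keys():
        # attn1 is self-attention (no cross-attention dim), attn2 is cross-attention
        cross_attention_dim = (
            None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
        )
        if name.startswith("mid_block"):
            hidden_size = unet.config.block_out_channels[-1]
        elif name.startswith("up_blocks"):
            block_id = int(name[len("up_blocks.")])
            hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
        else:  # down_blocks
            block_id = int(name[len("down_blocks.")])
            hidden_size = unet.config.block_out_channels[block_id]
        lora_attn_procs[name] = LoRAAttnProcessor2_0(
            hidden_size=hidden_size,
            cross_attention_dim=cross_attention_dim,
            rank=4,           # illustrative
            network_alpha=4,  # the new argument from this commit
        )
    unet.set_attn_processor(lora_attn_procs)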

Takuma Mori authored
* merge undoable-monkeypatch
* remove TEXT_ENCODER_TARGET_MODULES, refactoring
* move create_lora_weight_file

- 05 Jun, 2023 1 commit

Vladislav Lyubimov authored

- 02 Jun, 2023 2 commits

Lachlan Nicholson authored
* iterate over unique tokens to avoid duplicate replacements
* added test for multiple references to multi embedding
* adhere to black formatting
* reorder test post-rebase
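A minimal sketch of the dedup fix (the function and token names are hypothetical, not the repo's actual helper): iterating over unique tokens means each multi-vector placeholder is replaced exactly once, even when a prompt references the same embedding several times.

    # Sketch: each placeholder is replaced once, even when the prompt
    # mentions it twice; dict.fromkeys() is an order-preserving dedup.
    def expand_multi_embeddings(prompt, tokens_in_prompt, expansions):
        for token in dict.fromkeys(tokens_in_prompt):  # unique tokens only
            prompt = prompt.replace(token, " ".join(expansions[token]))
        return prompt

    print(expand_multi_embeddings(
        "<cat-toy> next to <cat-toy>",
        ["<cat-toy>", "<cat-toy>"],  # duplicate references in the prompt
        {"<cat-toy>": ["<cat-toy>", "<cat-toy>_1"]},
    ))
    # -> <cat-toy> <cat-toy>_1 next to <cat-toy> <cat-toy>_1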

Takuma Mori authored
* add _convert_kohya_lora_to_diffusers
* make style
* add scaffold
* match result: unet attention only
* fix monkey-patch for text_encoder
* with CLIPAttention: while the terrible images are no longer produced, the results do not match those from the hook version. This may be due to not setting the network_alpha value.
* add support for network_alpha
* generate diff image
* fix monkey-patch for text_encoder
* add test_text_encoder_lora_monkey_patch()
* verify that it's okay to release the attn_procs
* fix closure version
* add comment
* Revert "fix monkey-patch for text_encoder" (reverts commit bb9c61e6faecc1935c9c4319c77065837655d616)
* fix to reuse utility functions
* make LoRAAttnProcessor target self_attn
* fix LoRAAttnProcessor target
* make style
* fix split key
* Update src/diffusers/loaders.py
* remove TEXT_ENCODER_TARGET_MODULES loop
* add print memory usage
* remove test_kohya_loras_scaffold.py
* add: doc on LoRA from civitai
* remove print statement and refactor in the doc
* fix state_dict test for kohya-ss style lora
* Apply suggestions from code review

Co-authored-by: Takuma Mori <takuma104@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
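A hedged sketch of the user-facing result (the weight file name follows the diffusers docs example; the base repo is illustrative): a kohya-ss/civitai-style LoRA file loads through load_lora_weights, which converts it with _convert_kohya_lora_to_diffusers internally.

    # Sketch: load an A1111/kohya-ss style LoRA .safetensors file.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights(".", weight_name="light_and_shadow.safetensors")
    image = pipe("masterpiece, best quality, mountain landscape").images[0]
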
- 30 May, 2023 1 commit

Greg Hunkins authored
* enable state dict for textual inversion loader
* Empty-Commit | restart CI (×4)
* add tests
* fix tests (×3)

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
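A hedged sketch of the change (the file name and token are illustrative): load_textual_inversion should now also accept an in-memory state dict rather than only a path or repo ID.

    # Sketch: pass a pre-loaded embedding state dict directly.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    state_dict = torch.load("learned_embeds.bin")  # e.g. {"<my-concept>": tensor}
    pipe.load_textual_inversion(state_dict, token="<my-concept>")
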
- 26 May, 2023 2 commits

Takuma Mori authored
Fix to apply LoRAXFormersAttnProcessor instead of LoRAAttnProcessor when xFormers is enabled (#3556)
* fix to use LoRAXFormersAttnProcessor
* add test
* using new LoraLoaderMixin.save_lora_weights
* add test_lora_save_load_with_xformers
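A hedged sketch of the scenario #3556 fixes (the LoRA repo ID is illustrative): with xFormers enabled, loading LoRA weights should install LoRAXFormersAttnProcessor rather than the default LoRAAttnProcessor.

    # Sketch: enable xFormers first, then load LoRA weights.
    # Requires the xformers package; repo IDs are illustrative.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.enable_xformers_memory_efficient_attention()
    pipe.load_lora_weights("sayakpaul/sd-model-finetuned-lora-t4")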

Emin Demirci authored

- 17 May, 2023 1 commit

Patrick von Platen authored
* Correct from_ckpt
* make style

- 11 May, 2023 1 commit

Patrick von Platen authored
* Improve checkpointing lora
* fix more
* Improve doc string
* Update src/diffusers/loaders.py
* make style
* Apply suggestions from code review
* Update src/diffusers/loaders.py
* Apply suggestions from code review (×2)
* better
* Fix all
* Fix multi-GPU dreambooth
* Apply suggestions from code review
* Fix all
* make style (×2)

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

- 09 May, 2023 2 commits

Steven Liu authored
* clarify safetensor docstring
* fix typo
* apply feedback

Will Berman authored
* update IF stage I pipelines: add fixed variance schedulers and LoRA loading
* added kv lora attn processor
* allow loading into alternative lora attn processor
* make vae optional
* throw away predicted variance
* allow loading into added kv lora layer
* allow loading T5
* allow pre-computing text embeddings
* set new variance type in schedulers
* fix copies
* refactor all prompt embedding code: class prompts are now included in pre-encoding code; max tokenizer length is now configurable; embedding attention mask is now configurable
* fix for when variance type is not defined on scheduler
* do not pre-compute validation prompt if not present
* add example test for IF LoRA dreambooth
* add check for train text encoder and pre-computed text embeddings
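A hedged sketch of the pre-computed text-embedding flow this enables for DeepFloyd IF (the repo ID is the gated stage-I checkpoint; the offload call is an assumption of the sketch, not taken from this commit):

    # Sketch: pre-compute T5 embeddings, then run denoising from them.
    import torch
    from diffusers import IFPipeline

    pipe = IFPipeline.from_pretrained(
        "DeepFloyd/IF-I-XL-v1.0", torch_dtype=torch.float16
    )
    pipe.enable_model_cpu_offload()
    prompt_embeds, negative_embeds = pipe.encode_prompt("a photo of a red panda")
    image = pipe(
        prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds
    ).images[0]
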
- 08 May, 2023 1 commit

pdoane authored
* Batched load of textual inversions:
  - Only call resize_token_embeddings once per batch, as it is the most expensive operation
  - Allow pretrained_model_name_or_path and token to be an optional list
  - Remove Dict from the type annotation of pretrained_model_name_or_path, as it was not supported in this function
  - Add a comment that single files (e.g. .pt/.safetensors) are supported
  - Add a comment for the token parameter
  - Convert the token override log message from warning to info
* Update src/diffusers/loaders.py: check for duplicate tokens
* Update condition for None tokens

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
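A hedged sketch of the batched form (concept repos from the sd-concepts-library; tokens illustrative): both arguments may now be lists, and resize_token_embeddings runs once for the whole batch.

    # Sketch: load several textual inversions in a single call.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe.load_textual_inversion(
        ["sd-concepts-library/cat-toy", "sd-concepts-library/gta5-artwork"],
        token=["<cat-toy>", "<gta5-artwork>"],
    )
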
- 28 Apr, 2023 1 commit

Sayak Paul authored
* 👽 QoL improvements for LoRA.
* better function name?
* fix: LoRA weight loading with the new format.
* address Patrick's comments.
* Apply suggestions from code review
* change wording around encouraging the use of load_lora_weights().
* fix: function name.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
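A hedged sketch of the encouraged entry point (repo ID illustrative): the pipeline-level load_lora_weights() is preferred over the UNet-only load_attn_procs(), since it can also patch the text encoder.

    # Sketch: pipeline-level LoRA loading, the usage this change promotes.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe.load_lora_weights("sayakpaul/sd-model-finetuned-lora-t4")  # illustrative
    # older, UNet-only route this supersedes:
    # pipe.unet.load_attn_procs("sayakpaul/sd-model-finetuned-lora-t4")
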
- 25 Apr, 2023 2 commits

Patrick von Platen authored
* Fix docs text inversion
* Apply suggestions from code review

pdoane authored
When the token used for textual inversion does not have any special symbols (e.g. it is not surrounded by <>), the tokenizer does not properly split the replacement tokens. Adding a space for the padding tokens fixes this.
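A minimal sketch of the construction (the token name and vector count are illustrative): a multi-vector embedding token expands into padding tokens token_1, token_2, ..., and without separating spaces the tokenizer would see one unknown word.

    # Sketch: build the multi-vector replacement string. Without the
    # spaces, "grass_1grass_2" would not split into the added tokens.
    token, vectors = "grass", 3  # no <> around the token
    replacement = token + " " + " ".join(f"{token}_{i}" for i in range(1, vectors))
    print(replacement)  # grass grass_1 grass_2
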
- 20 Apr, 2023 2 commits

Patrick von Platen authored

nupurkmr9 authored
* diffusers==0.14.0 update
* custom diffusion update (×6)
* custom diffusion (×5)
* apply formatting and get rid of bare except
* refactor readme and other minor changes
* misc refactor
* fix: repo_id issue and loaders logging bug
* fix: save_model_card (×3)
* add: doc entry
* refactor doc
* custom diffusion (×3)
* apply style
* remove trailing whitespace
* fix: toctree entry
* remove unnecessary print
* custom diffusion (×2)
* custom diffusion test
* custom diffusion xformers update (×3)

Co-authored-by: Nupur Kumari <nupurkumari@Nupurs-MacBook-Pro.local>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Nupur Kumari <nupurkumari@nupurs-mbp.wifi.local.cmu.edu>
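A hedged sketch of loading trained Custom Diffusion weights, following the pattern the new doc entry describes (the paths and the <new1> modifier token are illustrative):

    # Sketch: load Custom Diffusion attention weights plus the learned
    # modifier-token embedding, then generate with the new concept.
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    ).to("cuda")
    pipe.unet.load_attn_procs(
        "path-to-model", weight_name="pytorch_custom_diffusion_weights.bin"
    )
    pipe.load_textual_inversion("path-to-model", weight_name="<new1>.bin")
    image = pipe("<new1> cat swimming in a pond").images[0]
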
- 19 Apr, 2023 1 commit

1lint authored
* add mixin class for pipeline from original sd ckpt
* Improve
* make style
* merge main into
* Improve more
* fix more
* up
* Apply suggestions from code review
* finish docs
* rename
* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
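A hedged sketch of the mixin's entry point (the checkpoint path is illustrative): pipelines gain a from_ckpt method that builds a diffusers pipeline straight from an original Stable Diffusion checkpoint.

    # Sketch: construct a pipeline from an original .ckpt/.safetensors file.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_ckpt("path/to/model.ckpt")
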
- 12 Apr, 2023 2 commits

Patrick von Platen authored
* Finish docs textual inversion
* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Sayak Paul authored
* add: first draft for a better LoRA enabler.
* make fix-copies.
* feat: backward compatibility.
* add: entry to the docs.
* add: tests.
* fix: docs.
* fix: norm group test for UNet3D.
* feat: add support for flat dicts.
* add deprecation message instead of warning.

- 31 Mar, 2023 2 commits

Patrick von Platen authored

Guillermo Cique authored

- 30 Mar, 2023 1 commit

Pi Esposito authored
* add load textual inversion embeddings draft
* fix quality
* fix typo
* make fix copies
* move to textual inversion mixin
* make it accept from sd-concepts library
* accept list of paths to embeddings
* fix styling of stable diffusion pipeline
* add dummy TextualInversionMixin
* add docstring to TextualInversionMixin
* add case for parsing embedding from Auto1111 UI format
* fix style after rebase
* move textual inversion mixin to loaders
* move mixin inheritance to DiffusionPipeline from StableDiffusionPipeline
* update dummy class name
* addressed all comments
* fix old dangling import
* fix style
* proposal
* remove bogus
* Apply suggestions from code review
* finish
* make style
* up
* fix code quality
* fix code quality - again
* fix code quality - 3
* fix alt diffusion code quality
* fix model editing pipeline
* Finish

Co-authored-by: Evan Jones <evan.a.jones3@gmail.com>
Co-authored-by: Ana Tamais <aninhamoraestamais@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
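A hedged sketch of the two sources the mixin accepts (the concept repo and the A1111-format file name follow diffusers docs examples):

    # Sketch: load a Hub concept, then an Automatic1111-format file.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe.load_textual_inversion("sd-concepts-library/cat-toy")
    pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")
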
- 27 Mar, 2023 1 commit

Pedro Cuenca authored
* Apply same ruff settings as in transformers (see https://github.com/huggingface/transformers/blob/main/pyproject.toml)
* Apply new style rules
* Style
* style
* remove list, ruff wouldn't auto fix

Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>

- 16 Mar, 2023 1 commit

Nicolas Patry authored
* Adding `use_safetensors` argument to give users more control over which weights they use.
* Doc style.
* Rebased (not functional).
* Rebased and functional with tests.
* Style.
* Apply suggestions from code review
* Style.
* Addressing comments.
* Update tests/test_pipelines.py
* Black ???

Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
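A hedged sketch of the new flag (repo ID illustrative): force safetensors-only loading instead of silently falling back to pickled .bin weights.

    # Sketch: loading fails if the repo has no safetensors weights.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", use_safetensors=True
    )
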
- 15 Mar, 2023 1 commit

Patrick von Platen authored
* rename file
* rename attention
* fix more
* rename more
* up
* more deprecation imports
* fixes

- 14 Mar, 2023 1 commit

Patrick von Platen authored
* [Lora] correct lora saving & loading
* fix final
* Apply suggestions from code review
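A hedged sketch of the save/load round trip this corrects (repo and directory names illustrative; safe_serialization assumes the safetensors support from the 03 Mar commit further down):

    # Sketch: load LoRA attention processors, then save them back out.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    pipe.unet.load_attn_procs("sayakpaul/sd-model-finetuned-lora-t4")
    pipe.unet.save_attn_procs("lora-out", safe_serialization=True)
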
- 04 Mar, 2023 1 commit

Nicolas Patry authored
* Fix regression introduced in #2448
* Style

- 03 Mar, 2023 1 commit

Nicolas Patry authored
* Adding support for `safetensors` and LoRA.
* Adding metadata.

- 01 Mar, 2023 1 commit

Patrick von Platen authored

- 27 Jan, 2023 1 commit

Ji soo Kim authored
Fix typo in loaders.py
- 18 Jan, 2023 1 commit

Patrick von Platen authored
* [Lora] first upload
* add first lora version
* upload
* more
* first training
* up
* correct
* improve
* finish loaders and inference
* up (×2)
* fix more
* up
* finish more (×2)
* up (×2)
* change year
* revert year change
* Change lines
* Add cloneofsimo as co-author
* finish
* fix docs
* Apply suggestions from code review
* upload
* finish

Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
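A hedged sketch of the loader introduced here (repo ID illustrative; the scale kwarg mirrors the documented LoRA inference usage): attach LoRA weights to the UNet's attention processors and generate.

    # Sketch: the original LoRA inference flow via unet.load_attn_procs.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.unet.load_attn_procs("sayakpaul/sd-model-finetuned-lora-t4")
    image = pipe(
        "A pokemon with blue eyes", cross_attention_kwargs={"scale": 0.5}
    ).images[0]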