- 17 Aug, 2023 2 commits
Sayak Paul authored

Patrick von Platen authored
* make safetensors default
* set default save method as safetensors
* update tests
* update to support saving safetensors
* update test to account for safetensors default
* update example tests to use safetensors
* update example to support safetensors
* update unet tests for safetensors
* fix failing loader tests
* fix qc issues
* fix pipeline tests
* fix example test
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
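For reference, a minimal sketch of what the new default means for users (the `safe_serialization` flag is existing diffusers API; the save paths are placeholders):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# With safetensors as the default, save_pretrained writes *.safetensors
# weight files; passing safe_serialization=False restores the old .bin output.
pipe.save_pretrained("./my-pipeline")
pipe.save_pretrained("./my-pipeline-bin", safe_serialization=False)
```
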
- 04 Aug, 2023 1 commit
Patrick von Platen authored
* correct
* correct blocks
* finish
* finish
* finish
* Apply suggestions from code review
* fix
* up
* up
* up
* Update examples/dreambooth/README_sdxl.md
  Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Apply suggestions from code review
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 01 Aug, 2023 1 commit
Will Berman authored
- 28 Jul, 2023 1 commit
Sayak Paul authored
* sdxl lora changes.
* better name replacement.
* better replacement.
* debugging
* debugging
* debugging
* debugging
* debugging
* remove print.
* print state dict keys.
* print
* distinguish better
* debuggable.
* fix: tests
* fix: arg from training script.
* access from class.
* run style
* debug
* save intermediate
* some simplifications for SDXL LoRA
* styling
* unet config is not needed in diffusers format.
* fix: dynamic SGM block mapping for SDXL kohya loras (#4322)
* Use lora compatible layers for linear proj_in/proj_out (#4323)
* improve condition for using the sgm_diffusers mapping
* informative comment.
* load compatible keys and embedding layer mapping.
* Get SDXL 1.0 example lora to load
* simplify
* specify ranks and hidden sizes.
* better handling of k rank and hidden
* debug
* debug
* debug
* debug
* debug
* fix: alpha keys
* add check for handling LoRAAttnAddedKVProcessor
* sanity comment
* modifications for text encoder SDXL
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* up
* up
* up
* up
* up
* up
* unneeded comments.
* unneeded comments.
* kwargs for the other attention processors.
* kwargs for the other attention processors.
* debugging
* debugging
* debugging
* debugging
* improve
* debugging
* debugging
* more print
* Fix alphas
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* clean up
* clean up.
* debugging
* fix: text
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Batuhan Taskaya <batuhan@python.org>
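Illustrative usage of the capability this lands (a hedged sketch; the checkpoint path is a placeholder):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# load_lora_weights accepts diffusers-format checkpoints as well as
# kohya-ss/SGM-style ones, whose block names get remapped on load.
pipe.load_lora_weights("path/to/sdxl_lora.safetensors")  # placeholder path
image = pipe("a photo of sks dog in a bucket").images[0]
```
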
- 27 Jul, 2023 1 commit
Xinyang Li authored
- 26 Jul, 2023 1 commit
Patrick von Platen authored
* 0.20.0dev0
* make style
- 25 Jul, 2023 1 commit
Will Berman authored
- 18 Jul, 2023 1 commit
takuoko authored
add rank in dreambooth
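Presumably this exposes the LoRA rank as a script argument; a hypothetical sketch of such an option (flag name and default are assumptions, not confirmed by the log):

```python
import argparse

parser = argparse.ArgumentParser()
# Assumed flag: lets users pick the dimension of the LoRA update matrices
# instead of relying on a hard-coded value in the DreamBooth LoRA script.
parser.add_argument(
    "--rank",
    type=int,
    default=4,
    help="The dimension of the LoRA update matrices.",
)
args = parser.parse_args()
```
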
- 13 Jul, 2023 1 commit
Ruoxi authored
* Multiply lr scheduler steps by `num_processes`.
* Stop multiplying steps by gradient accumulation.
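A sketch of the resulting scheduler setup (names follow the diffusers training scripts; `args`, `optimizer`, and `accelerator` are assumed to already exist in scope):

```python
from diffusers.optimization import get_scheduler

# Each process steps the scheduler once per optimizer step, so warmup and
# total steps are scaled by the number of processes; gradient accumulation
# is no longer multiplied in, since accelerate already accounts for it.
lr_scheduler = get_scheduler(
    args.lr_scheduler,
    optimizer=optimizer,
    num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes,
    num_training_steps=args.max_train_steps * accelerator.num_processes,
)
```
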
- 11 Jul, 2023 1 commit
Patrick von Platen authored
- 09 Jul, 2023 2 commits
Patrick von Platen authored

Will Berman authored
* refactor to support patching LoRA into T5
  - instantiate the lora linear layer on the same device as the regular linear layer
  - get lora rank from state dict
  - tests
  - fmt
  - can create lora layer in float32 even when rest of model is float16
  - fix loading model hook
  - remove load_lora_weights_ and T5 dispatching
  - remove Unet#attn_processors_state_dict docstrings
* text encoder monkeypatch class method
* fix test
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
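A rough sketch of the monkey-patching idea described above (simplified; the actual diffusers class and method names differ):

```python
import torch
import torch.nn as nn

class LoRALinearLayer(nn.Module):
    """Low-rank residual update applied on top of a frozen linear layer."""
    def __init__(self, in_features, out_features, rank=4,
                 device=None, dtype=torch.float32):
        super().__init__()
        # Created on the same device as the patched layer, but kept in
        # float32 even if the rest of the model runs in float16.
        self.down = nn.Linear(in_features, rank, bias=False, device=device, dtype=dtype)
        self.up = nn.Linear(rank, out_features, bias=False, device=device, dtype=dtype)
        nn.init.normal_(self.down.weight, std=1 / rank)
        nn.init.zeros_(self.up.weight)

    def forward(self, x):
        orig_dtype = x.dtype
        out = self.up(self.down(x.to(self.down.weight.dtype)))
        return out.to(orig_dtype)

def patch_linear_with_lora(linear: nn.Linear, rank=4):
    """Monkey-patch a linear module so its forward adds the LoRA output."""
    lora = LoRALinearLayer(linear.in_features, linear.out_features, rank,
                           device=linear.weight.device)
    old_forward = linear.forward
    def new_forward(x):
        return old_forward(x) + lora(x)
    linear.forward = new_forward
    return lora
```
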
- 07 Jul, 2023 1 commit
Batuhan Taskaya authored
- 15 Jun, 2023 2 commits
Will Berman authored
* manual check for checkpoints_total_limit instead of using accelerate
* remove controlnet_conditioning_embedding_out_channels

Patrick von Platen authored
* relax tolerance slightly
* correct incorrect naming
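The manual-check pattern, sketched (variable names follow the training scripts; `args` is assumed to be in scope, and this runs before saving a new checkpoint):

```python
import os
import shutil

# Enforce --checkpoints_total_limit by hand: keep only the newest
# checkpoints, deleting the oldest so the new save stays under the limit.
if args.checkpoints_total_limit is not None:
    checkpoints = [d for d in os.listdir(args.output_dir) if d.startswith("checkpoint")]
    checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1]))
    if len(checkpoints) >= args.checkpoints_total_limit:
        num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1
        for checkpoint in checkpoints[:num_to_remove]:
            shutil.rmtree(os.path.join(args.output_dir, checkpoint))
```
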
- 08 Jun, 2023 2 commits
Patrick von Platen authored
* Post release
* Post release

Zachary Mueller authored
Apply deprecations
- 06 Jun, 2023 1 commit
Sayak Paul authored
* feat: add lora attention processor for pt 2.0.
* explicit context manager for SDPA.
* switch to flash attention
* make shapes compatible to work optimally with SDPA.
* fix: circular import problem.
* explicitly specify the flash attention kernel in sdpa
* fall back to efficient attention context manager.
* remove explicit dispatch.
* fix: removed processor.
* fix: remove optional from type annotation.
* feat: make changes regarding LoRAAttnProcessor2_0.
* remove confusing warning.
* formatting.
* relax tolerance for PT 2.0
* fix: loading message.
* remove unnecessary logging.
* add: entry to the docs.
* add: network_alpha argument.
* relax tolerance.
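The core idea, sketched (heavily simplified relative to diffusers' actual `LoRAAttnProcessor2_0`: self-attention only, no masks, single hidden size):

```python
import torch.nn.functional as F
from torch import nn

class LoRAAttnProcessor2_0(nn.Module):
    """Sketch: LoRA deltas on the Q/K/V/out projections plus PyTorch 2.0 SDPA."""

    def __init__(self, hidden_size: int, rank: int = 4):
        super().__init__()
        def lora():
            return nn.Sequential(
                nn.Linear(hidden_size, rank, bias=False),
                nn.Linear(rank, hidden_size, bias=False),
            )
        self.to_q_lora, self.to_k_lora = lora(), lora()
        self.to_v_lora, self.to_out_lora = lora(), lora()

    def __call__(self, attn, hidden_states, scale=1.0):
        batch, seq_len, _ = hidden_states.shape
        heads = attn.heads
        q = attn.to_q(hidden_states) + scale * self.to_q_lora(hidden_states)
        k = attn.to_k(hidden_states) + scale * self.to_k_lora(hidden_states)
        v = attn.to_v(hidden_states) + scale * self.to_v_lora(hidden_states)
        # (batch, seq, dim) -> (batch, heads, seq, head_dim), as SDPA expects.
        q, k, v = (t.view(batch, seq_len, heads, -1).transpose(1, 2) for t in (q, k, v))
        # Fused kernel; dispatches to flash / memory-efficient attention.
        out = F.scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).reshape(batch, seq_len, -1)
        return attn.to_out[1](attn.to_out[0](out) + scale * self.to_out_lora(out))
```
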
- 05 Jun, 2023 2 commits
Patrick von Platen authored
Correct multi gpu

Will Berman authored
- 02 Jun, 2023 1 commit
Takuma Mori authored
* add _convert_kohya_lora_to_diffusers
* make style
* add scaffold
* match result: unet attention only
* fix monkey-patch for text_encoder
* with CLIPAttention
  While the terrible images are no longer produced, the results do not match those from the hook version. This may be due to not setting the network_alpha value.
* add to support network_alpha
* generate diff image
* fix monkey-patch for text_encoder
* add test_text_encoder_lora_monkey_patch()
* verify that it's okay to release the attn_procs
* fix closure version
* add comment
* Revert "fix monkey-patch for text_encoder"
  This reverts commit bb9c61e6faecc1935c9c4319c77065837655d616.
* Fix to reuse utility functions
* make LoRAAttnProcessor targets to self_attn
* fix LoRAAttnProcessor target
* make style
* fix split key
* Update src/diffusers/loaders.py
* remove TEXT_ENCODER_TARGET_MODULES loop
* add print memory usage
* remove test_kohya_loras_scaffold.py
* add: doc on LoRA civitai
* remove print statement and refactor in the doc.
* fix state_dict test for kohya-ss style lora
* Apply suggestions from code review
  Co-authored-by: Takuma Mori <takuma104@gmail.com>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
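Illustrative end-user flow once kohya-ss checkpoints convert on load (a sketch; the file path is a placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# kohya-ss style files (e.g. downloaded from civitai) are converted to the
# diffusers layout on the fly; the file's network_alpha rescales each layer.
pipe.load_lora_weights("path/to/kohya_style_lora.safetensors")  # placeholder
```
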
- 31 May, 2023 1 commit
Will Berman authored
- 24 May, 2023 1 commit
Sayak Paul authored
refactor save_model_card utility in dreambooth examples.
- 22 May, 2023 2 commits
Patrick von Platen authored

Ambrosiussen authored
* DataLoader will now bake in any transforms or image manipulations contained in the EXIF data.
  Images may have rotations stored in EXIF. Training with such images would previously ignore those transforms and thus produce unexpected results.
* Fixed the DataLoader EXIF issue in main DreamBooth training as well
* Run make style (black & isort)
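The standard fix for this class of bug, for reference (PIL's `exif_transpose`; the path is a placeholder):

```python
from PIL import Image, ImageOps

image = Image.open("instance_photo.jpg")  # placeholder path
# Bake the EXIF orientation tag into the pixel data so that subsequent
# resize/crop transforms see the image the way the camera displayed it.
image = ImageOps.exif_transpose(image)
```
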
- 17 May, 2023 1 commit
Patrick von Platen authored
* Make dreambooth lora more robust to orig unet
* up
- 11 May, 2023 1 commit
Patrick von Platen authored
* Improve checkpointing lora
* fix more
* Improve doc string
* Update src/diffusers/loaders.py
* make style
* Apply suggestions from code review
* Update src/diffusers/loaders.py
* Apply suggestions from code review
* Apply suggestions from code review
* better
* Fix all
* Fix multi-GPU dreambooth
* Apply suggestions from code review
  Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fix all
* make style
* make style
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
- 09 May, 2023 1 commit
Will Berman authored
* update IF stage I pipelines
  - add fixed variance schedulers and lora loading
* added kv lora attn processor
* allow loading into alternative lora attn processor
* make vae optional
* throw away predicted variance
* allow loading into added kv lora layer
* allow load T5
* allow pre compute text embeddings
* set new variance type in schedulers
* fix copies
* refactor all prompt embedding code
  - class prompts are now included in pre-encoding code
  - max tokenizer length is now configurable
  - embedding attention mask is now configurable
* fix for when variance type is not defined on scheduler
* do not pre compute validation prompt if not present
* add example test for if lora dreambooth
* add check for train text encoder and pre compute text embeddings
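A sketch of the pre-computed text-embedding flow this enables (follows the documented DeepFloyd IF usage; treat details as illustrative):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", torch_dtype=torch.float16
)

# Encode the prompt once with T5; the heavy text encoder can then be freed
# and the UNet trains/samples from the cached embeddings.
prompt_embeds, negative_embeds = pipe.encode_prompt("a photo of sks dog")
image = pipe(
    prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds
).images[0]
```
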
- 03 May, 2023 1 commit
Sayak Paul authored
* fix: scale_lr and sync example readme and docs.
* fix doc link.
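For context, the usual `--scale_lr` pattern in these scripts (a hedged sketch; variable names assumed, `args` and `accelerator` in scope):

```python
# Scale the base learning rate by the effective batch size across
# gradient accumulation steps, per-device batch size, and processes.
if args.scale_lr:
    args.learning_rate = (
        args.learning_rate
        * args.gradient_accumulation_steps
        * args.train_batch_size
        * accelerator.num_processes
    )
```
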
- 28 Apr, 2023 1 commit
Sayak Paul authored
* 👽 qol improvements for LoRA.
* better function name?
* fix: LoRA weight loading with the new format.
* address Patrick's comments.
* Apply suggestions from code review
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* change wording around encouraging the use of load_lora_weights().
* fix: function name.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 26 Apr, 2023 2 commits
Patrick von Platen authored
* Post release
* fix more

Patrick von Platen authored
- 22 Apr, 2023 1 commit
Chengrui Wang authored
* Update train_dreambooth_lora.py: fix bug
* Update train_dreambooth_lora.py
- 20 Apr, 2023 1 commit
Sayak Paul authored
* add: LoRA text encoder support for DreamBooth example.
* fix initialization.
* fix: modification call.
* add: entry in the readme.
* use dog dataset from hub.
* fix: params to clip.
* add entry to the LoRA doc.
* add: tests for lora.
* remove unnecessary list comprehension.
- 12 Apr, 2023 2 commits
Patrick von Platen authored

Patrick von Platen authored
- 04 Apr, 2023 1 commit
Lucain authored
use upload folder in training scripts
Co-authored-by: testbot <lucainp@hf.co>
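The pattern referenced is `huggingface_hub.upload_folder` (a sketch; the repo id and output path are placeholders):

```python
from huggingface_hub import upload_folder

# Push the whole output directory in one call instead of managing a
# local git clone of the Hub repository.
upload_folder(
    repo_id="your-username/dreambooth-model",  # placeholder
    folder_path="path/to/output_dir",          # placeholder
    commit_message="End of training",
)
```
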
- 15 Mar, 2023 1 commit
Patrick von Platen authored
* rename file
* rename attention
* fix more
* rename more
* up
* more deprecation imports
* fixes
- 10 Mar, 2023 1 commit
Ruizhe Wang authored
* [Dreambooth] Editable number of class images
* 'class_num=None' bug fix
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>