"eigen-master/doc/examples/DenseBase_middleCols_int.cpp" did not exist on "e7df86554156b36846008d8ddbcc4d8521a16554"
- 20 Sep, 2023 1 commit
-
-
Sayak Paul authored
* better condition.
* debugging (repeated)
* how about now? (×2)
* support for lycoris.
* style
* add: lycoris test
* fix from_pretrained call.
* fix assertion values.
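The LyCORIS support lands in the Kohya/LoRA conversion path used by `load_lora_weights`. A minimal sketch of exercising it, assuming a local LyCORIS-format file named `lycoris_lora.safetensors` (the model ID and filename are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# load_lora_weights routes LyCORIS/Kohya-style state dicts through the
# conversion utilities this commit extends.
pipe.load_lora_weights(".", weight_name="lycoris_lora.safetensors")
image = pipe("a pokemon with blue eyes").images[0]
```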
-
- 18 Sep, 2023 1 commit
-
-
Patrick von Platen authored
* [LoRA] Centralize LoRA tests (×5)
-
- 13 Sep, 2023 2 commits
-
-
Patrick von Platen authored
* fix lora fuse unfuse
* add same changes to loaders.py
* add test

Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
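For context, the fuse/unfuse round trip this fix targets looks roughly like the sketch below: fusing folds the LoRA deltas into the base weights, and unfusing must restore them exactly (the LoRA path is illustrative):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/lora")  # illustrative path

pipe.fuse_lora()    # merge the LoRA deltas into the UNet/text-encoder weights
image = pipe("a prompt").images[0]
pipe.unfuse_lora()  # restore the original, unfused weights
```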
-
Sayak Paul authored
[Tests and Docs] Add a test on serializing pipelines with components containing fused LoRA modules (#4962)
* add: test to ensure pipelines can be saved with fused lora modules.
* add docs about serialization with fused lora.
* Apply suggestions from code review
* Empty-Commit
* Update docs/source/en/training/lora.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
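A rough sketch of the serialization flow the new test covers, with illustrative paths: fuse first, then save, and the reloaded pipeline should behave like the fused original:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_lora_weights("path/to/lora")  # illustrative path
pipe.fuse_lora()

# With the LoRA fused into the module weights, a plain save round-trips.
pipe.save_pretrained("fused-pipeline")
pipe2 = DiffusionPipeline.from_pretrained("fused-pipeline", torch_dtype=torch.float16)
```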
-
- 11 Sep, 2023 2 commits
-
-
Dhruv Nair authored
* initial commit
* move modules to import struct
* add dummy objects and _LazyModule
* add lazy import to schedulers
* clean up unused imports
* lazy import on models module
* lazy import for schedulers module
* add lazy import to pipelines module
* lazy import altdiffusion
* lazy import audio diffusion
* lazy import audioldm
* lazy import consistency model
* lazy import controlnet
* lazy import dance diffusion, ddim, ddpm
* lazy import deepfloyd
* lazy import kandinsky
* lazy import semantic diffusion
* lazy import stable diffusion
* move sd output to its own module
* lazy import t2iadapter
* lazy import unclip
* lazy import versatile and vq diffusion
* lazy import vq diffusion
* helper to fetch objects from modules
* lazy import sdxl
* lazy import txt2vid
* lazy import stochastic karras
* fix model imports
* fix bug
* clean up (repeated)
* fixes for tests (×2)
* remove import of torch_utils from utils module
* fix mistaken import statement
* dedicated modules for exporting and loading
* remove testing utils from utils module
* fixes from merge conflicts
* Update src/diffusers/pipelines/kandinsky2_2/__init__.py
* fix docs
* fix alt diffusion copied from
* fix check dummies (×2)
* fix more docs
* remove accelerate import from utils module
* add type checking
* make style
* remove torch import from xformers check
* clean up error message
* fixes after upstream merges
* dummy objects fix
* fix tests
* remove unused module import

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
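The lazy-import work described above follows the `_LazyModule` convention: an `__init__.py` declares an import structure and only resolves submodules on first attribute access. A minimal sketch, with illustrative module and class names:

```python
# Sketch of a lazy-importing __init__.py (names are illustrative).
import sys
from typing import TYPE_CHECKING

from .utils import _LazyModule  # helper that defers the actual imports

_import_structure = {
    "schedulers": ["DDIMScheduler", "DDPMScheduler"],
    "pipelines": ["StableDiffusionPipeline"],
}

if TYPE_CHECKING:
    # Static type checkers still see the real symbols.
    from .pipelines import StableDiffusionPipeline
    from .schedulers import DDIMScheduler, DDPMScheduler
else:
    # At runtime, submodules are imported only when first accessed.
    sys.modules[__name__] = _LazyModule(
        __name__, globals()["__file__"], _import_structure, module_spec=__spec__
    )
```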
-
Will Berman authored
* Revert "Temp Revert "[Core] better support offloading when side loading is enabled… (#4927)" This reverts commit 2ab17049. * tests: install accelerate from main
-
- 09 Sep, 2023 1 commit
-
-
Will Berman authored
Revert "[Core] better support offloading when side loading is enabled. (#4855)" This reverts commit e4b8e792.
-
- 05 Sep, 2023 2 commits
-
-
Patrick von Platen authored
* [Test] Reduce CPU memory (×2)
-
Sayak Paul authored
* better support offloading when side loading is enabled.
* load_textual_inversion
* better messaging for textual inversion.
* fixes
* address PR feedback.
* sdxl support.
* improve messaging
* recursive removal when cpu sequential offloading is enabled.
* add: lora tests
* recurse.
* add: offload tests for textual inversion.
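A sketch of the interaction this PR addresses: side loading (textual inversion, LoRA) while CPU offload is active, which requires removing and re-applying the offload hooks. The checkpoint names are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Side loading now handles the offload hooks instead of silently breaking them.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
pipe.load_lora_weights("path/to/lora")  # illustrative path
```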
-
- 04 Sep, 2023 1 commit
-
-
Sayak Paul authored
* throw warning when more than one lora is attempted to be fused.
* introduce support of lora scale during fusion.
* change test name
* changes
* change to _lora_scale
* lora_scale to call whenever applicable.
* lora_scale additional.
* cross_attention_kwargs
* lora_scale -> scale.
* lora_scale fix
* lora_scale in patched projection.
* debugging (repeated)
* styling.
* remove unneeded prints. (×2)
* assign cross_attention_kwargs.
* clean up.
* refactor scale retrieval logic a bit.
* fix NoneType
* fix: tests
* add more tests
* more fixes.
* figure out a way to pass lora_scale.
* Apply suggestions from code review
* unify the retrieval logic of lora_scale.
* move adjust_lora_scale_text_encoder to lora.py.
* introduce dynamic adjustment lora scale support to sd
* fix up copies
* Empty-Commit
* add: test to check fusion equivalence on different scales.
* handle lora fusion warning.
* make lora smaller (×3)

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
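The two scale entry points this PR wires up, sketched with an illustrative setup: a per-call scale passed through `cross_attention_kwargs`, and a permanent `lora_scale` baked into the weights at fusion time:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/lora")  # illustrative path

# Runtime scaling: weight the LoRA contribution for this call only.
image = pipe("a prompt", cross_attention_kwargs={"scale": 0.5}).images[0]

# Fusion-time scaling: merge the LoRA delta scaled by a fixed factor.
pipe.fuse_lora(lora_scale=0.7)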
-
- 30 Aug, 2023 1 commit
-
-
Patrick von Platen authored
* Fix Unfuse Lora
* add tests
* Fix more (×2)
* Fix all
* make style (×2)
-
- 29 Aug, 2023 1 commit
-
-
Patrick von Platen authored
* Fuse loras
* initial implementation.
* add slow test one.
* styling
* add: test for checking efficiency
* print
* position
* place model offload correctly
* style (×2)
* unfuse test.
* final checks
* remove warning test
* remove warnings altogether
* debugging (repeated)
* tighten up tests.
* suit up the generator initialization a bit.
* remove print (×2)
* update assertion.
* fix: assertions.
* can generator be a problem?
* generator
* correct tests.
* support text encoder lora fusion.
* tighten up tests.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 28 Aug, 2023 1 commit
-
-
Patrick von Platen authored
* [LoRA Attn] Refactor LoRA attn
* correct for network alphas
* fix more
* fix more tests (×2)
* Move below
* Finish
* better version
* correct serialization format
* fix
* fix more (repeated)
* Apply suggestions from code review
* Update src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py
* deprecation
* relax atol for slow test slightly
* Finish tests
* make style (×2)
-
- 26 Aug, 2023 1 commit
-
-
Patrick von Platen authored
* Fix last ben sdxl lora
* Correct typo
* make style
-
- 17 Aug, 2023 4 commits
-
-
Sayak Paul authored
-
Patrick von Platen authored
* make safetensors default
* set default save method as safetensors
* update tests
* update to support saving safetensors
* update test to account for safetensors default
* update example tests to use safetensors
* update example to support safetensors
* update unet tests for safetensors
* fix failing loader tests
* fix qc issues
* fix pipeline tests
* fix example test

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
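With safetensors as the default, a plain `save_pretrained` now writes `.safetensors` files; the old PyTorch `.bin` format remains available behind a flag. A short sketch (model ID illustrative):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

pipe.save_pretrained("sd-checkpoint")                                 # *.safetensors
pipe.save_pretrained("sd-checkpoint-bin", safe_serialization=False)   # legacy *.bin
```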
-
Batuhan Taskaya authored
* Support higher dimension LoRAs
* add: tests
* fix: assertion values.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Scott Lessans authored
* fixed
* add: tests

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 04 Aug, 2023 1 commit
-
-
Sayak Paul authored
* add: integration tests for SDXL LoRAs.
* change pipeline class.
* fix assertion values.
* print values again.
* let's see. (×3)
* finish
-
- 02 Aug, 2023 1 commit
-
-
Sayak Paul authored
* temporarily disable text encoder loras.
* debugging (repeated)
* modify doc.
* rename tests.
* print slices.
* fix: assertions
* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 28 Jul, 2023 1 commit
-
-
Sayak Paul authored
* sdxl lora changes.
* better name replacement.
* better replacement.
* debugging (repeated)
* remove print.
* print state dict keys.
* distinguish better
* debuggable.
* fix: tests
* fix: arg from training script.
* access from class.
* run style
* save intermediate
* some simplifications for SDXL LoRA
* styling
* unet config is not needed in diffusers format.
* fix: dynamic SGM block mapping for SDXL kohya loras (#4322)
* Use lora compatible layers for linear proj_in/proj_out (#4323)
* improve condition for using the sgm_diffusers mapping
* informative comment.
* load compatible keys and embedding layer mapping.
* Get SDXL 1.0 example lora to load
* simplify
* specify ranks and hidden sizes.
* better handling of k rank and hidden
* fix: alpha keys
* add check for handling LoRAAttnAddedKVProcessor
* sanity comment
* modifications for text encoder SDXL
* up (repeated)
* unneeded comments. (×2)
* kwargs for the other attention processors. (×2)
* improve
* more print
* Fix alphas
* clean up (×2)
* fix: text

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Batuhan Taskaya <batuhan@python.org>
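A sketch of the end result: loading a Kohya-format SDXL LoRA through the new SGM block mapping (the checkpoint filename is illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Kohya-style keys are remapped onto diffusers UNet/text-encoder modules.
pipe.load_lora_weights(".", weight_name="sdxl_kohya_lora.safetensors")
```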
-
- 27 Jul, 2023 1 commit
-
-
Patrick von Platen authored
* [Local loading] Correct bug with local files only
* file not found error
* fix
* finish
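The bug sits in the `local_files_only` code path; a minimal sketch of the call it affects, resolving everything from the local cache without touching the Hub:

```python
from diffusers import DiffusionPipeline

# Must succeed from cache alone, and raise a clear error when files are missing.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", local_files_only=True
)
```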
-
- 25 Jul, 2023 2 commits
-
-
Batuhan Taskaya authored
* Support to load Kohya-ss style LoRA file format (without restrictions)
* tmp: add sdxl to mlp_modules

Co-authored-by: Takuma Mori <takuma104@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* Allow low precision sd xl
* finish (×2)
* feat: initial draft for supporting text encoder lora finetuning for SDXL DreamBooth
* fix: variable assignments.
* add: autocast block.
* add debugging
* vae dtype hell
* fix: vae dtype hell.
* fix: vae dtype hell 3.
* clean up
* lora text encoder loader.
* fix: unwrapping models.
* add: tests.
* docs.
* handle unexpected keys.
* fix vae dtype in the final inference.
* fix scope problem.
* fix: save_model_card args.
* initialize: prefix to None.
* fix: dtype issues.
* apply fixes.
* debugging (repeated)
* add: fast tests.
* pre-tokenize.
* address: will's comments.
* fix: loader and tests.
* fix: dataloader.
* simplify dataloader.
* length.
* simplification.
* make style && make quality
* simplify state_dict munging
* fix: tests.
* fix: state_dict packing.
* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 21 Jul, 2023 1 commit
-
-
Batuhan Taskaya authored
-
- 14 Jul, 2023 1 commit
-
-
Sayak Paul authored
* add: test for testing unloading lora.
* add: reason to skipif.
* initial implementation of lora unload().
* apply styling.
* add: doc.
* change checkpoints.
* reinit generator
* finalize slow test.
* add fast test for unloading lora.
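A sketch of the new `unload_lora_weights()` behavior (LoRA path illustrative): after the call, the pipeline should match a pipeline that never loaded the LoRA:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("path/to/lora")  # illustrative path
lora_image = pipe("a prompt").images[0]

pipe.unload_lora_weights()
base_image = pipe("a prompt").images[0]  # should match the un-LoRA'd baseline
```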
-
- 09 Jul, 2023 1 commit
-
-
Will Berman authored
* refactor to support patching LoRA into T5
  - instantiate the lora linear layer on the same device as the regular linear layer
  - get lora rank from state dict
  - tests
  - fmt
  - can create lora layer in float32 even when rest of model is float16
  - fix loading model hook
  - remove load_lora_weights_ and T5 dispatching
  - remove Unet#attn_processors_state_dict docstrings
* text encoder monkeypatch class method
* fix test

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 06 Jun, 2023 3 commits
-
-
Patrick von Platen authored
* Add draft for lora text encoder scale
* Improve naming
* fix: training dreambooth lora script.
* Apply suggestions from code review (×3)
* Update examples/dreambooth/train_dreambooth_lora.py
* add lora mixin when fit (×3)
* fix more (×2)

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* feat: add lora attention processor for pt 2.0.
* explicit context manager for SDPA.
* switch to flash attention
* make shapes compatible to work optimally with SDPA.
* fix: circular import problem.
* explicitly specify the flash attention kernel in sdpa
* fall back to efficient attention context manager.
* remove explicit dispatch.
* fix: removed processor.
* fix: remove optional from type annotation.
* feat: make changes regarding LoRAAttnProcessor2_0.
* remove confusing warning.
* formatting.
* relax tolerance for PT 2.0
* fix: loading message.
* remove unnecessary logging.
* add: entry to the docs.
* add: network_alpha argument.
* relax tolerance.
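For reference, `LoRAAttnProcessor2_0` builds on PyTorch 2.0's `scaled_dot_product_attention`, which selects flash or memory-efficient kernels internally; that is why the explicit dispatch was removed in later commits. A self-contained sketch of the primitive (shapes illustrative):

```python
import torch
import torch.nn.functional as F

# (batch, heads, sequence, head_dim), the layout SDPA expects.
q = torch.randn(2, 8, 64, 40, dtype=torch.float16, device="cuda")
k = torch.randn(2, 8, 64, 40, dtype=torch.float16, device="cuda")
v = torch.randn(2, 8, 64, 40, dtype=torch.float16, device="cuda")

# No manual kernel selection: PyTorch dispatches to the best available backend.
out = F.scaled_dot_product_attention(q, k, v)
```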
-
Takuma Mori authored
* merge undoable-monkeypatch
* remove TEXT_ENCODER_TARGET_MODULES, refactoring
* move create_lora_weight_file
-
- 02 Jun, 2023 1 commit
-
-
Takuma Mori authored
* add _convert_kohya_lora_to_diffusers
* make style
* add scaffold
* match result: unet attention only
* fix monkey-patch for text_encoder
* with CLIPAttention: while the terrible images are no longer produced, the results do not match those from the hook version. This may be due to not setting the network_alpha value.
* add to support network_alpha
* generate diff image
* fix monkey-patch for text_encoder
* add test_text_encoder_lora_monkey_patch()
* verify that it's okay to release the attn_procs
* fix closure version
* add comment
* Revert "fix monkey-patch for text_encoder" (reverts commit bb9c61e6faecc1935c9c4319c77065837655d616)
* Fix to reuse utility functions
* make LoRAAttnProcessor targets to self_attn
* fix LoRAAttnProcessor target
* make style
* fix split key
* Update src/diffusers/loaders.py
* remove TEXT_ENCODER_TARGET_MODULES loop
* add print memory usage
* remove test_kohya_loras_scaffold.py
* add: doc on LoRA civitai
* remove print statement and refactor in the doc.
* fix state_dict test for kohya-ss style lora
* Apply suggestions from code review

Co-authored-by: Takuma Mori <takuma104@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
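The `network_alpha` handling mentioned above scales the LoRA contribution by alpha/rank, which Kohya checkpoints store per module. A sketch of the math with illustrative shapes:

```python
import torch

rank, alpha = 4, 1.0           # illustrative values from a Kohya state dict
down = torch.randn(rank, 320)  # lora_down weight: (rank, in_features)
up = torch.randn(320, rank)    # lora_up weight:   (out_features, rank)

# Effective delta added to the base weight: W_eff = W + (alpha / rank) * up @ down
delta = (alpha / rank) * (up @ down)
```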
-
- 26 May, 2023 1 commit
-
-
Takuma Mori authored
Fix to apply LoRAXFormersAttnProcessor instead of LoRAAttnProcessor when xFormers is enabled (#3556)
* fix to use LoRAXFormersAttnProcessor
* add test
* using new LoraLoaderMixin.save_lora_weights
* add test_lora_save_load_with_xformers
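A sketch of the scenario the fix covers (LoRA path illustrative): once xFormers is enabled, loading a LoRA must install the xFormers-aware processor rather than the plain one:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()

# With xFormers active, this now picks LoRAXFormersAttnProcessor.
pipe.load_lora_weights("path/to/lora")  # illustrative path
```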
-
- 20 Apr, 2023 1 commit
-
-
Patrick von Platen authored
-
- 17 Apr, 2023 1 commit
-
-
Patrick von Platen authored
-
- 13 Apr, 2023 1 commit
-
-
Patrick von Platen authored
* [Tests] parallelize
* finish folder structuring
* Parallelize tests more
* Correct saving of pipelines
* make sure logging level is correct
* try again
* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
- 12 Apr, 2023 1 commit
-
-
Sayak Paul authored
* add: first draft for a better LoRA enabler.
* make fix-copies.
* feat: backward compatibility.
* add: entry to the docs.
* add: tests.
* fix: docs.
* fix: norm group test for UNet3D.
* feat: add support for flat dicts.
* add deprecation message instead of warning.
-