- 05 Jan, 2024 11 commits
-
Liang Hou authored
-
Vinh H. Pham authored
* init works * add gluegen pipeline * add gluegen code * add another way to load language adapter * make style * Update README.md * change doc
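For reference, community pipelines like the new GlueGen example are loaded by name through `custom_pipeline`. A minimal sketch, assuming the example's folder is named "gluegen" (the base checkpoint is illustrative):

```python
from diffusers import DiffusionPipeline

# Minimal sketch: community pipelines load by folder name via custom_pipeline.
# The "gluegen" name is an assumption based on this PR's example folder.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", custom_pipeline="gluegen"
)
```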
-
Sayak Paul authored
* add: experimental script for diffusion dpo training. * random_crop cli. * fix: caption tokenization. * fix: pixel_values index. * fix: grad? * debug * fix: reduction. * fixes in the loss calculation. * style * fix: unwrap call. * fix: validation inference. * add: initial sdxl script * debug * make sure images in the tuple are of same res * fix model_max_length * report print * boom * fix: numerical issues. * fix: resolution * comment about resize. * change the order of the training transformation. * save call. * debug * remove print * manually detaching necessary? * use the same vae for validation. * add: readme.
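The core of the script is the Diffusion-DPO preference loss. A minimal sketch of that loss, assuming the batch stacks the preferred ("winner") and dispreferred ("loser") halves along dim 0 and that `beta_dpo` is a tunable scale (the default below is illustrative):

```python
import torch.nn.functional as F

def diffusion_dpo_loss(model_pred, ref_pred, target, beta_dpo=5000.0):
    """Sketch of the Diffusion-DPO objective for a [winner; loser] batch."""
    # Per-sample MSE between each model's noise prediction and the true noise.
    model_err = F.mse_loss(model_pred.float(), target.float(), reduction="none")
    model_err = model_err.mean(dim=list(range(1, model_err.ndim)))
    ref_err = F.mse_loss(ref_pred.float(), target.float(), reduction="none")
    ref_err = ref_err.mean(dim=list(range(1, ref_err.ndim)))

    # Split winner/loser halves and take the preference margins.
    model_w, model_l = model_err.chunk(2)
    ref_w, ref_l = ref_err.chunk(2)

    # Maximize the log-sigmoid of the (scaled) margin difference.
    inside = -0.5 * beta_dpo * ((model_w - model_l) - (ref_w - ref_l))
    return -F.logsigmoid(inside).mean()
```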
-
Sayak Paul authored
* introduce unload_lora. * fix-copies
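A minimal usage sketch, assuming the public entry point is the loader mixin's `unload_lora_weights` (the LoRA repo id below is hypothetical):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("user/some-lora")  # hypothetical LoRA repo id
image = pipe("a pixel-art cat").images[0]

# Remove the LoRA layers and restore the original attention processors.
pipe.unload_lora_weights()
```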
-
Sayak Paul authored
* post release * style --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Junsheng121 authored
* null-text-inversion-implementation * edited * edited * edited * edited * edited * edit * make style --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* debug * debug * more debug * more more debug * remove tests for LoRAAttnProcessors. * rename
-
Linoy Tsaban authored
* unwrap text encoder when saving hook only for full text encoder tuning * unwrap text encoder when saving hook only for full text encoder tuning * save embeddings in each checkpoint as well * save embeddings in each checkpoint as well * save embeddings in each checkpoint as well * Update examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
jiqing-feng authored
* Intel Gen 4 Xeon and later support bf16 * fix bf16 notes
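A sketch of bf16 CPU inference, assuming the hardware exposes bf16 acceleration (4th Gen Xeon AMX or later; the checkpoint is illustrative):

```python
import torch
from diffusers import DiffusionPipeline

# bf16 weights halve memory and map onto AMX on 4th Gen Xeon and later.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.bfloat16
)
image = pipe("an astronaut riding a horse", num_inference_steps=25).images[0]
```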
-
Horseee authored
* add documentation for DeepCache * fix typo * add wandb url for DeepCache * fix some typos * add item in _toctree.yml * update formats for arguments * Update deepcache.md * Update docs/source/en/optimization/deepcache.md Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * add StableDiffusionXLPipeline in doc * Separate SDPipeline and SDXLPipeline * Add the paper link of ablation experiments for hyper-parameters * Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
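The documented usage pattern, as a sketch (cache parameter values are illustrative, not prescriptive):

```python
from diffusers import StableDiffusionPipeline
from DeepCache import DeepCacheSDHelper  # pip install DeepCache

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Cache high-level UNet features and reuse them across denoising steps.
helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(cache_interval=3, cache_branch_id=0)
helper.enable()
image = pipe("a photo of a cat").images[0]
helper.disable()
```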
-
dg845 authored
* Make WDS pipeline interpolation type configurable. * Make the VAE encoding batch size configurable. * Make lora_alpha and lora_dropout configurable for LCM LoRA scripts. * Generalize scalings_for_boundary_conditions function and make the timestep scaling configurable. * Make LoRA target modules configurable for LCM-LoRA scripts. * Move resolve_interpolation_mode to src/diffusers/training_utils.py and make interpolation type configurable in non-WDS script. * apply suggestions from review
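The LoRA options above feed directly into a peft `LoraConfig`; a sketch with illustrative values (not the script defaults):

```python
from peft import LoraConfig

# Illustrative values, not the script defaults.
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,      # now configurable instead of hard-coded
    lora_dropout=0.0,   # now configurable
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # now configurable
)
# The training script then attaches this to the student UNet,
# e.g. via unet.add_adapter(lora_config).
```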
-
- 04 Jan, 2024 10 commits
-
Steven Liu authored
fix local links
-
Lucain authored
* Respect offline mode when loading model * default to local entry on ConnectionError
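A sketch of the behavior this enables, assuming the model is already in the local cache:

```python
import os
os.environ["HF_HUB_OFFLINE"] = "1"  # respect offline mode for all Hub calls

from diffusers import DiffusionPipeline

# With the model already cached, loading works without a network connection;
# local_files_only forces the same behavior explicitly.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", local_files_only=True
)
```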
-
Sayak Paul authored
-
Sayak Paul authored
* debug * debug test_with_different_scales_fusion_equivalence * use the right method. * place it right. * let's see. * let's see again * alright then. * add a comment.
-
sayakpaul authored
-
sayakpaul authored
-
Sayak Paul authored
* disable running peft non-peft lora test in the peft env. * Empty-Commit
-
Chi authored
* Added a new docstring to the class so it is easier for other developers to understand what it does and where it is used. * Update src/diffusers/models/unet_2d_blocks.py with changes suggested by the maintainer. Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * Update src/diffusers/models/unet_2d_blocks.py Add suggested text Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * Update unet_2d_blocks.py Changed the Parameter heading to Args. * Update unet_2d_blocks.py Set proper indentation in this file. * Update unet_2d_blocks.py Small change to the act_fun argument line. * Ran the black command to reformat the code style. * Update unet_2d_blocks.py Added a docstring similar to the one in the original diffusion repository. * Better way to write the binarize function * Fix check_code_quality error * Reformat a file that was missed before opening the pull request * Update image_processor.py * remove extra variable and space * Update image_processor.py * Ran the ruff library to reformat the file --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 03 Jan, 2024 3 commits
-
Sayak Paul authored
Update README_sdxl.md
-
Sayak Paul authored
* handle the rest of the deprecated lora stuff. * fix: copies * don't modify the UNet in-place. * fix: temporal autoencoder. * manually remove lora layers. * don't copy unet. * alright * remove lora attn processors from unet3d * fix: unet3d. * style * Empty-Commit
-
Sayak Paul authored
* add: test to check if peft loras are loadable in non-peft envs. * add torch_device appropriately. * fix: get_dummy_inputs(). * test logits. * rename * debug * debug * fix: generator * new assertion values after fixing the seed. * shape * remove print statements and settle this. * to update values. * change values when lora config is initialized under a fixed seed. * update colab link * update notebook link * sanity restored by getting the exact same values without peft.
-
- 02 Jan, 2024 9 commits
-
YiYi Xu authored
add doc Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
Vinh H. Pham authored
correct reading variables
-
Aryan V S authored
* add clip_skip, freeu, qkv * fix * add ip-adapter support * callback on step end * update * fix NoneType bug * fix * add guidance scale embedding * add textual inversion
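These are the same knobs the regular pipelines expose; a sketch with typical SD1.x FreeU values:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# FreeU re-weights the UNet's backbone and skip features (SD1.x values shown).
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)

# clip_skip stops the text encoder a few layers before the final one.
image = pipe("a watercolor fox", clip_skip=2).images[0]
```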
-
Linoy Tsaban authored
[bug fix] using snr gamma and prior preservation loss in the dreambooth lora sdxl training scripts (#6356) * change timesteps used to calculate snr when --with_prior_preservation is enabled * change timesteps used to calculate snr when --with_prior_preservation is enabled (canonical script) * style * revert canonical script to before snr gamma change --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
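A sketch of the corrected min-SNR weighting, assuming the usual training-loop names (model_pred, target, timesteps) and epsilon prediction; this is an illustration of the fix, not the script verbatim:

```python
import torch
from diffusers.training_utils import compute_snr

def snr_weighted_loss(model_pred, target, timesteps, noise_scheduler,
                      snr_gamma=5.0, with_prior_preservation=False):
    # With prior preservation the batch stacks [instance, class] examples,
    # so the SNR weights must come from the matching half of `timesteps`.
    if with_prior_preservation:
        model_pred, _ = model_pred.chunk(2)
        target, _ = target.chunk(2)
        timesteps, _ = timesteps.chunk(2)
    snr = compute_snr(noise_scheduler, timesteps)
    mse = ((model_pred.float() - target.float()) ** 2).mean(
        dim=list(range(1, model_pred.ndim))
    )
    # min-SNR-gamma weighting (epsilon prediction).
    weights = torch.stack(
        [snr, snr_gamma * torch.ones_like(snr)], dim=1
    ).min(dim=1)[0] / snr
    return (weights * mse).mean()
```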
-
Daniel Socek authored
-
CyrusVorwald authored
* add StableDiffusionXLControlNetInpaintPipeline to auto pipeline * fixed style
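A usage sketch (checkpoints are illustrative): passing a `controlnet` now lets the auto pipeline resolve SDXL checkpoints to StableDiffusionXLControlNetInpaintPipeline.

```python
import torch
from diffusers import AutoPipelineForInpainting, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
# With a controlnet in the kwargs, the SDXL checkpoint resolves to
# StableDiffusionXLControlNetInpaintPipeline automatically.
pipe = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
```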
-
Fabio Rigano authored
* Add unload_ip_adapter method * Update attn_processors with original layers * Add test * Use set_default_attn_processor --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
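A minimal load/unload sketch (base checkpoint is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
# ... run inference with ip_adapter_image=... ...

# Drop the image encoder/projection and restore the attention processors.
pipe.unload_ip_adapter()
```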
-
Sayak Paul authored
* start deprecating loraattn. * fix * wrap into unet_lora_state_dict * utilize text_encoder_lora_params * utilize text_encoder_attn_modules * debug * debug * remove print * don't use text encoder for test_stable_diffusion_lora * load the procs. * set_default_attn_processor * fix: set_default_attn_processor call. * fix: lora_components[unet_lora_params] * checking for 3d. * 3d. * more fixes. * debug * debug * debug * debug * more debug * more debug * more debug * more debug * more debug * more debug * hack. * remove comments and prep for a PR. * appropriate set_lora_weights() * fix * fix: test_unload_lora_sd * fix: test_unload_lora_sd * use default attention processors. * debug * debug nan * debug nan * debug nan * use NaN instead of inf * remove comments. * fix: test_text_encoder_lora_state_dict_unchanged * attention processor default * default attention processors. * default * style
-
lookas authored
* Update value_guided_sampling.py Fix #6409 * Comply with code style --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 01 Jan, 2024 1 commit
-
2510 authored
* Fix gradient-checkpointing option being ignored in SDXL+LoRA training. (#6388) * Fix gradient-checkpointing option being ignored in SD+LoRA training. * Fix gradient checkpointing not being applied to the text encoders. (SDXL+LoRA) --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
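A sketch of what `--gradient_checkpointing` should switch on for every trained module (the checkpoint id is illustrative):

```python
from diffusers import UNet2DConditionModel
from transformers import CLIPTextModel

base = "stabilityai/stable-diffusion-xl-base-1.0"
unet = UNet2DConditionModel.from_pretrained(base, subfolder="unet")
text_encoder = CLIPTextModel.from_pretrained(base, subfolder="text_encoder")

# Gradient checkpointing must be enabled on each module that trains:
unet.enable_gradient_checkpointing()          # diffusers models
text_encoder.gradient_checkpointing_enable()  # transformers models
```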
-
- 31 Dec, 2023 1 commit
-
Sayak Paul authored
-
- 30 Dec, 2023 4 commits
-
apolinário authored
* Add new state_dict_utils to __init__ utils * style --------- Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
-
apolinário authored
* Add WebUI format support to Advanced Training Script * style --------- Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
-
apolinário authored
* Create convert_diffusers_sdxl_lora_to_webui.py * Move some conversion logic to utils * fix logging import * Add usage example --------- Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
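A sketch of the conversion path, assuming the helpers this PR moved into `diffusers.utils` are `convert_all_state_dict_to_peft` and `convert_state_dict_to_kohya` (file names below are hypothetical):

```python
from safetensors.torch import load_file, save_file
from diffusers.utils import (
    convert_all_state_dict_to_peft,
    convert_state_dict_to_kohya,
)

# Convert diffusers LoRA keys to the Kohya naming that
# AUTOMATIC1111/ComfyUI expect; paths are hypothetical.
lora_sd = load_file("pytorch_lora_weights.safetensors")
peft_sd = convert_all_state_dict_to_peft(lora_sd)
kohya_sd = convert_state_dict_to_kohya(peft_sd)
save_file(kohya_sd, "sdxl_lora_webui.safetensors")
```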
-
Sayak Paul authored
remove unnecessary components from lora peft suite
-
- 29 Dec, 2023 1 commit
-
gzguevara authored
-