- 12 Jan, 2024 4 commits
-
-
Suvaditya Mukherjee authored
* update: make controlnet script torch compile compatible * update: correct earlier mistakes for compilation * update: fix code style issues --------- Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>
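A minimal sketch of the torch.compile pitfall changes like this one typically address (module and file names are illustrative, not from the script): torch.compile wraps a module in an OptimizedModule, so state dicts saved from the wrapper get an `_orig_mod.` key prefix unless the module is unwrapped first.

```python
import torch

model = torch.nn.Linear(8, 8)
compiled = torch.compile(model)  # wraps model in an OptimizedModule

compiled(torch.randn(2, 8))  # first call triggers compilation

# Saving from the wrapper would prefix every key with "_orig_mod.";
# unwrap so the checkpoint stays loadable by the plain module.
to_save = getattr(compiled, "_orig_mod", compiled)
torch.save(to_save.state_dict(), "model.pt")
```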
-
Charchit Sharma authored
* make torch.compile compatible * fix quality
-
Vinh H. Pham authored
support compile
-
Radamés Ajna authored
pass tracker name as argument
-
- 11 Jan, 2024 3 commits
-
-
Aryan V S authored
* add stylealigned sdxl pipeline * bugfix * update docs * remove einops dependency * update README * update example docstring
-
Sayak Paul authored
make checkpointing compatible when using torch.compile.
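A hedged sketch of the unwrapping pattern such fixes rely on in the training scripts (the helper name is an assumption): strip both the accelerate wrapper and the torch.compile wrapper before touching a model's weights in a checkpointing hook.

```python
import torch
from accelerate import Accelerator

def unwrap_model(accelerator: Accelerator, model: torch.nn.Module) -> torch.nn.Module:
    # Remove the DDP/accelerate wrapper first, then the compile wrapper.
    model = accelerator.unwrap_model(model)
    if hasattr(model, "_orig_mod"):  # attribute set by torch.compile's OptimizedModule
        model = model._orig_mod
    return model
```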
-
dg845 authored
* Fix bug where unet's time_cond_proj_dim is not set correctly if using args.unet_time_cond_proj_dim. * make style
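For context, a hedged sketch of what setting `time_cond_proj_dim` correctly looks like (checkpoint name and the literal value standing in for `args.unet_time_cond_proj_dim` are assumptions): the value must be in the UNet's config at construction time, since it controls whether the guidance-embedding projection layer is created at all.

```python
from diffusers import UNet2DConditionModel

# Load the teacher config, override time_cond_proj_dim, and build a
# fresh UNet so the guidance-embedding layer actually exists.
teacher = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"  # assumed checkpoint
)
config = dict(teacher.config)
config["time_cond_proj_dim"] = 256  # stands in for args.unet_time_cond_proj_dim
unet = UNet2DConditionModel.from_config(config)
```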
-
- 10 Jan, 2024 1 commit
-
-
Rahul Raman authored
* base template file - train_instruct_pix2pix.py * additional import and parser argument required for lora * finetune only instructpix2pix model -- no need to include these layers * inject lora layers * freeze unet model -- only lora layers are trained * training modifications to train only lora parameters * store only lora parameters * move train script to research project * run quality and style code checks * move train script to a new folder * add README * update README * update references in README --------- Co-authored-by: Rahul Raman <rahulraman@gmail.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
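A minimal sketch of the LoRA recipe the squash message above describes, assuming the PEFT backend (rank and target modules are illustrative choices, not the script's):

```python
from diffusers import UNet2DConditionModel
from peft import LoraConfig

unet = UNet2DConditionModel.from_pretrained(
    "timbrooks/instruct-pix2pix", subfolder="unet"
)
unet.requires_grad_(False)  # freeze the base model

# Inject trainable LoRA layers into the attention projections.
unet.add_adapter(
    LoraConfig(r=4, lora_alpha=4, target_modules=["to_k", "to_q", "to_v", "to_out.0"])
)

# Only the LoRA parameters are trained and stored.
lora_params = [p for p in unet.parameters() if p.requires_grad]
```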
-
- 09 Jan, 2024 3 commits
-
-
Sayak Paul authored
* make it torch.compile compatible * make the text encoder compatible too. * style
-
Yifan Zhou authored
* upload codes and doc * lint * lint * lint * update code * remove blank lines * Fix load url
-
jiqing-feng authored
* enable stable-xl textual inversion * check if optimizer_2 exists * check text_encoder_2 before using * add textual inversion for sdxl in a single file * fix style * fix example style * reset for error changes * add readme for sdxl * fix style * disable autocast as it will cause a cast error when weight_dtype=bf16 * fix spelling error * fix style and readme and 8bit optimizer * add README_sdxl.md link * add tracker key on log_validation * run style * rm the second center crop --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
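A hedged sketch of the SDXL-specific part of textual inversion (the placeholder token is illustrative): SDXL has two tokenizer/text-encoder pairs, so the new concept token and its embedding slot must be added to both.

```python
from transformers import AutoTokenizer, CLIPTextModel, CLIPTextModelWithProjection

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id, subfolder="tokenizer")
tokenizer_2 = AutoTokenizer.from_pretrained(model_id, subfolder="tokenizer_2")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
text_encoder_2 = CLIPTextModelWithProjection.from_pretrained(
    model_id, subfolder="text_encoder_2"
)

# Register the new concept in both tokenizer/encoder pairs.
for tok, enc in ((tokenizer, text_encoder), (tokenizer_2, text_encoder_2)):
    tok.add_tokens("<my-concept>")  # placeholder token is illustrative
    enc.resize_token_embeddings(len(tok))
```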
-
- 05 Jan, 2024 8 commits
-
-
Sayak Paul authored
-
Vinh H. Pham authored
* init works * add gluegen pipeline * add gluegen code * add another way to load language adapter * make style * Update README.md * change doc
-
Sayak Paul authored
* add: experimental script for diffusion dpo training. * random_crop cli. * fix: caption tokenization. * fix: pixel_values index. * fix: grad? * debug * fix: reduction. * fixes in the loss calculation. * style * fix: unwrap call. * fix: validation inference. * add: initial sdxl script * debug * make sure images in the tuple are of same res * fix model_max_length * report print * boom * fix: numerical issues. * fix: resolution * comment about resize. * change the order of the training transformation. * save call. * debug * remove print * manually detaching necessary? * use the same vae for validation. * add: readme.
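For orientation, a hedged sketch of the Diffusion-DPO objective such a script optimizes (function and variable names, and the beta value, are illustrative): per-sample denoising losses are computed for the preferred (`w`) and rejected (`l`) images under both the trainable model and a frozen reference, and the gap between the two differences is pushed through a log-sigmoid.

```python
import torch
import torch.nn.functional as F

def diffusion_dpo_loss(model_w, model_l, ref_w, ref_l, beta=5000.0):
    # Each argument is a per-sample MSE denoising loss, shape (batch,).
    model_diff = model_w - model_l  # negative when the model prefers the winner
    ref_diff = ref_w - ref_l
    inside = -0.5 * beta * (model_diff - ref_diff)
    return -F.logsigmoid(inside).mean()
```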
-
Sayak Paul authored
* post release * style --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Junsheng121 authored
* null-text-inversion-implementation * edited * edited * edited * edited * edited * edit * make style --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Linoy Tsaban authored
* unwrap text encoder when saving hook only for full text encoder tuning * save embeddings in each checkpoint as well * Update examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
jiqing-feng authored
* Intel Gen 4 Xeon and later support bf16 * fix bf16 notes
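As a usage note, a minimal sketch of bf16 autocast on CPU, which 4th Gen Xeon (Sapphire Rapids) and later accelerate in hardware via AMX:

```python
import torch

layer = torch.nn.Linear(16, 16)
with torch.autocast("cpu", dtype=torch.bfloat16):
    out = layer(torch.randn(1, 16))  # matmuls run in bf16 where supported
```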
-
dg845 authored
* Make WDS pipeline interpolation type configurable. * Make the VAE encoding batch size configurable. * Make lora_alpha and lora_dropout configurable for LCM LoRA scripts. * Generalize scalings_for_boundary_conditions function and make the timestep scaling configurable. * Make LoRA target modules configurable for LCM-LoRA scripts. * Move resolve_interpolation_mode to src/diffusers/training_utils.py and make interpolation type configurable in non-WDS script. * apply suggestions from review
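The boundary-condition scalings mentioned above follow the consistency-models parameterization; a sketch of the generalized function, with `timestep_scaling` exposed as an argument rather than hard-coded (it works on floats or tensors):

```python
def scalings_for_boundary_conditions(timestep, sigma_data=0.5, timestep_scaling=10.0):
    # c_skip -> 1 and c_out -> 0 as timestep -> 0, enforcing the
    # consistency-model boundary condition f(x, 0) = x.
    scaled = timestep_scaling * timestep
    c_skip = sigma_data**2 / (scaled**2 + sigma_data**2)
    c_out = scaled / (scaled**2 + sigma_data**2) ** 0.5
    return c_skip, c_out
```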
-
- 03 Jan, 2024 2 commits
-
-
Sayak Paul authored
Update README_sdxl.md
-
Sayak Paul authored
* handle the rest of the deprecated lora stuff. * fix: copies * don't modify the UNet in-place. * fix: temporal autoencoder. * manually remove lora layers. * don't copy unet. * alright * remove lora attn processors from unet3d * fix: unet3d. * style * Empty-Commit
-
- 02 Jan, 2024 2 commits
-
-
Aryan V S authored
* add clip_skip, freeu, qkv * fix * add ip-adapter support * callback on step end * update * fix NoneType bug * fix * add guidance scale embedding * add textual inversion
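A hedged usage sketch of two of the listed features on a pipeline that supports them (checkpoint, prompt, and FreeU values are illustrative):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# FreeU re-weights backbone vs. skip features; these are commonly
# suggested SD1.5 settings.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)

# clip_skip=2 uses the second-to-last text-encoder layer.
image = pipe("an astronaut riding a horse", clip_skip=2).images[0]
```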
-
Linoy Tsaban authored
[bug fix] using snr gamma and prior preservation loss in the dreambooth lora sdxl training scripts (#6356) * change timesteps used to calculate snr when --with_prior_preservation is enabled * change timesteps used to calculate snr when --with_prior_preservation is enabled (canonical script) * style * revert canonical script to before snr gamma change --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
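A hedged sketch of the pattern the fix concerns (`compute_snr` comes from diffusers.training_utils; the surrounding variable names are illustrative): with --with_prior_preservation the batch concatenates instance and class images, so the min-SNR weights must be built from the timesteps that match the loss chunk being weighted.

```python
import torch
from diffusers.training_utils import compute_snr

def min_snr_weights(noise_scheduler, timesteps, snr_gamma=5.0):
    # Min-SNR-gamma weighting for epsilon prediction: min(SNR, gamma) / SNR.
    snr = compute_snr(noise_scheduler, timesteps)
    return torch.stack([snr, snr_gamma * torch.ones_like(snr)], dim=1).min(dim=1)[0] / snr

# With prior preservation, split timesteps the same way the loss is split:
#   t_instance, t_prior = torch.chunk(timesteps, 2, dim=0)
#   weights = min_snr_weights(noise_scheduler, t_instance, snr_gamma)
```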
-
- 01 Jan, 2024 1 commit
-
-
2510 authored
* Fix gradient-checkpointing option being ignored in SDXL+LoRA training. (#6388) * Fix gradient-checkpointing option being ignored in SD+LoRA training. * Fix gradient checkpointing not being applied to text encoders. (SDXL+LoRA) --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
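A sketch of what wiring the flag correctly looks like (the helper is an assumption; the method names are the real diffusers/transformers toggles):

```python
from diffusers import UNet2DConditionModel
from transformers import CLIPTextModel

def enable_checkpointing(unet: UNet2DConditionModel, *text_encoders: CLIPTextModel):
    # diffusers and transformers models expose differently named toggles
    # for activation/gradient checkpointing; both must be called.
    unet.enable_gradient_checkpointing()
    for enc in text_encoders:
        enc.gradient_checkpointing_enable()
```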
-
- 30 Dec, 2023 1 commit
-
-
apolinário authored
* Add WebUI format support to Advanced Training Script * style --------- Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
-
- 29 Dec, 2023 2 commits
-
-
gzguevara authored
-
gzguevara authored
* files added * fixing code quality * fixing code quality * fixing code quality * fixing code quality * sorted import block * separated import wandb * ruff on script --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 28 Dec, 2023 1 commit
-
-
Sayak Paul authored
* add to dreambooth lora. * add: t2i lora. * add: sdxl t2i lora. * style * lcm lora sdxl. * unwrap * fix: enable_adapters().
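For reference, the adapter toggles these fixes standardize on come from the PEFT backend; a minimal usage sketch (checkpoint name and LoRA config are illustrative):

```python
from diffusers import UNet2DConditionModel
from peft import LoraConfig

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
unet.add_adapter(LoraConfig(r=4, target_modules=["to_k", "to_q", "to_v"]))

unet.disable_adapters()  # run the base weights only (e.g. for a reference pass)
unet.enable_adapters()   # re-enable the LoRA layers for training
```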
-
- 27 Dec, 2023 7 commits
-
-
apolinário authored
fix keys for lora format on advanced training scripts
-
apolinário authored
* Fix ProdigyOPT in SDXL Dreambooth script * style * style * Add PEFT to Advanced Training Script * style * style * ✨ style ✨ * change order for logic operation * add lora alpha * style * Align PEFT to new format * Update train_dreambooth_lora_sdxl_advanced.py Apply #6355 fix --------- Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
-
Andy W authored
* fix * style fix --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
dg845 authored
Fix bug when creating the guidance embeddings using multiple GPUs. Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
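A hedged sketch of the LCM-style guidance embedding in question, mirroring the function used in the distillation scripts: the multi-GPU bug class here is creating the frequency tensor on the default device instead of on `w`'s device.

```python
import math
import torch

def guidance_scale_embedding(w: torch.Tensor, embedding_dim: int = 512,
                             dtype: torch.dtype = torch.float32) -> torch.Tensor:
    # Sinusoidal embedding of the guidance scale w -> (batch, embedding_dim).
    w = w * 1000.0
    half_dim = embedding_dim // 2
    freqs = torch.exp(
        torch.arange(half_dim, dtype=dtype, device=w.device)  # keep on w's device
        * -(math.log(10000.0) / (half_dim - 1))
    )
    emb = w.to(dtype)[:, None] * freqs[None, :]
    emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)
    if embedding_dim % 2 == 1:
        emb = torch.nn.functional.pad(emb, (0, 1))
    return emb
```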
-
Jianqi Pan authored
-
Dhruv Nair authored
* update * update * update * update * update * make style * remove docs * update * move to research folder. * fix-copies * remove _toctree entry. --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* spit out the diffusers-native format from the get-go. * rejig the peft_to_diffusers mapping.
-
- 26 Dec, 2023 4 commits
-
-
Will Berman authored
* amused update links to new repo * lint
-
priprapre authored
Replace the link for wandb log with the correct run
-
Sayak Paul authored
* add: script to train lcm lora for sdxl with 🤗 datasets * suit up the args. * remove comments. * fix num_update_steps * fix batch unmarshalling * fix num_update_steps_per_epoch * fix: dataloading. * fix microconditions. * unconditional predictions debug * fix batch size. * no need to use use_auth_token * Apply suggestions from code review Co-authored-by: Suraj Patil <surajp815@gmail.com> * make vae encoding batch size an arg * final serialization in kohya * style * state dict rejigging * feat: no separate teacher unet. * debug * fix state dict serialization * debug * debug * debug * remove prints. * remove kohya utility and make style * fix serialization * fix * add test * add peft dependency. * add: peft * remove peft * autocast device determination from accelerator * autocast * reduce lora rank. * remove unneeded space * Apply suggestions from code review Co-authored-by: Suraj Patil <surajp815@gmail.com> * style * remove prompt dropout. * also save in native diffusers ckpt format. * debug * debug * debug * better formation of the null embeddings. * remove space. * autocast fixes. * autocast fix. * hacky * remove lora_sayak * Apply suggestions from code review Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com> * style * make log validation leaner. * move back enabled in. * fix: log_validation call. * add: checkpointing tests * taking my chances to see if disabling autocasting has any effect? * start debugging * name * name * name * more debug * more debug * index * remove index. * print length * print length * print length * move unet.train() after add_adapter() * disable some prints. * enable_adapters() manually. * remove prints. * some changes. * fix params_to_optimize * more fixes * debug * debug * remove print * disable grad for certain contexts. * Add support for IPAdapterFull (#5911) Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: YiYi Xu <yixu310@gmail.com> * Fix a bug in `add_noise` function (#6085) * fix * copies Co-authored-by: yiyixuxu <yixu310@gmail.com> * [Advanced Diffusion Script] Add Widget default text (#6100) add widget * [Advanced Training Script] Fix pipe example (#6106) * IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901) * adapter for StableDiffusionControlNetImg2ImgPipeline * fix-copies * fix-copies Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * IP adapter support for most pipelines (#5900) * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py * update tests * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py * support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py * support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py * support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py * revert changes to sd_attend_and_excite and sd_upscale * make style * fix broken tests * update ip-adapter implementation to latest * apply suggestions from review Co-authored-by: YiYi Xu <yixu310@gmail.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * fix: lora_alpha * make vae casting conditional. * param upcasting * propagate comments from https://github.com/huggingface/diffusers/pull/6145 Co-authored-by: dg845 <dgu8957@gmail.com> * [Peft] fix saving / loading when unet is not "unet" (#6046) * Update src/diffusers/loaders/lora.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * undo stablediffusion-xl changes * use unet_name to get unet for lora helpers * use unet_name Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * [Wuerstchen] fix fp16 training and correct lora args (#6245) fix fp16 training Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * [docs] fix: animatediff docs (#6339) fix: animatediff docs * add: note about the new script in readme_sdxl. * Revert "[Peft] fix saving / loading when unet is not "unet" (#6046)" This reverts commit 4c7e983bb5929320bab08d70333eeb93f047de40. * Revert "[Wuerstchen] fix fp16 training and correct lora args (#6245)" This reverts commit 0bb9cf0216e501632677895de6574532092282b5. * Revert "[docs] fix: animatediff docs (#6339)" This reverts commit 11659a6f74b5187f601eeeeeb6f824dda73d0627. * remove tokenize_prompt(). * assistive comments around enable_adapters() and disable_adapters(). --------- Co-authored-by: Suraj Patil <surajp815@gmail.com> Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com> Co-authored-by: Fabio Rigano <57982783+fabiorigano@users.noreply.github.com> Co-authored-by: YiYi Xu <yixu310@gmail.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: yiyixuxu <yixu310@gmail.com> Co-authored-by: apolinário <joaopaulo.passos@gmail.com> Co-authored-by: Charchit Sharma <charchitsharma11@gmail.com> Co-authored-by: Aryan V S <contact.aryanvs@gmail.com> Co-authored-by: dg845 <dgu8957@gmail.com> Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
-
Kashif Rasul authored
fix fp16 training Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 25 Dec, 2023 1 commit
-
-
dg845 authored
Change README LCM-LoRA example learning rates to 1e-4.
-