- 16 Sep, 2024 1 commit
suzukimain authored
[docs] Replace runwayml/stable-diffusion-v1-5 with Lykon/dreamshaper-8

* Updated documentation, as runwayml/stable-diffusion-v1-5 has been removed from the Hugging Face Hub.
* Update docs/source/en/using-diffusers/inpaint.md
* Replace with stable-diffusion-v1-5/stable-diffusion-v1-5 (sketched below)
* Update inpaint.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
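For context, the swap amounts to changing the model ID passed to `from_pretrained`; a minimal sketch (the dtype is illustrative):

```python
import torch
from diffusers import DiffusionPipeline

# "runwayml/stable-diffusion-v1-5" was removed from the Hub; the docs now use
# the mirror under the "stable-diffusion-v1-5" org (with Lykon/dreamshaper-8
# as an interim replacement).
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)
```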
- 14 Aug, 2024 1 commit
Álvaro Somoza authored
* post release
* fix quality
- 24 Jun, 2024 1 commit
Vinh H. Pham authored
[train_lcm_distill_lora_sdxl.py] Fix the LR schedulers when num_train_epochs is passed in a distributed training env (#8446)

fix num_train_epochs

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
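A hedged sketch of what such a fix typically looks like (`args`, `train_dataloader`, `optimizer`, and `accelerator` are the script's names; the exact lines may differ): every process steps the scheduler, so a schedule derived from `num_train_epochs` has to be stretched by the process count.

```python
import math
from diffusers.optimization import get_scheduler

num_update_steps_per_epoch = math.ceil(
    len(train_dataloader) / args.gradient_accumulation_steps
)
if args.max_train_steps is None:
    # without the num_processes factor the schedule decays several times too
    # fast when only num_train_epochs is passed in a multi-GPU run
    num_training_steps = (
        args.num_train_epochs * num_update_steps_per_epoch * accelerator.num_processes
    )
else:
    num_training_steps = args.max_train_steps * accelerator.num_processes

lr_scheduler = get_scheduler(
    args.lr_scheduler,
    optimizer=optimizer,
    num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes,
    num_training_steps=num_training_steps,
)
```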
- 13 Jun, 2024 1 commit
Sayak Paul authored
post release
- 05 Jun, 2024 1 commit
Tolga Cangöz authored
* Fix typos
* Trim trailing whitespaces
* Remove a trailing whitespace
* chore: Update MarigoldDepthPipeline checkpoint to prs-eth/marigold-lcm-v1-0
* Revert "chore: Update MarigoldDepthPipeline checkpoint to prs-eth/marigold-lcm-v1-0" (reverts commit fd742b30b4258106008a6af4d0dd4664904f8595)
* pokemon -> naruto
* `DPMSolverMultistep` -> `DPMSolverMultistepScheduler` (sketched below)
* Improve Markdown stylization
* Improve style
* Refactor pipeline variable names for consistency
* up style
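For reference, the corrected name is the actual scheduler class; swapping it into a pipeline looks like this (checkpoint ID illustrative):

```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
# "DPMSolverMultistep" was a doc typo; the class is DPMSolverMultistepScheduler
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
```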
- 29 May, 2024 1 commit
Sayak Paul authored
* post release v0.28.0
* style
- 16 May, 2024 1 commit
Alphin Jain authored
Fix conditional teacher model check in train_lcm_distill_lora_sdxl_wds.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 10 May, 2024 1 commit
Mark Van Aken authored
* find & replace all FloatTensors to Tensor (sketched below)
* apply formatting
* Update torch.FloatTensor to torch.Tensor in the remaining files
* Fix the rest of the places where FloatTensor is used, as well as in documentation
* Update new file from FloatTensor to Tensor
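The change is to type annotations only; a before/after sketch with a hypothetical helper (`scale` is illustrative, not a diffusers API):

```python
import torch

# before: def scale(sample: torch.FloatTensor) -> torch.FloatTensor: ...
# after: dtype-agnostic annotations; torch.Tensor also covers fp16/bf16
# tensors, which the fp32-only torch.FloatTensor never did
def scale(sample: torch.Tensor) -> torch.Tensor:
    return sample * 0.18215  # SD VAE scaling factor, just for illustration
```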
- 07 May, 2024 1 commit
Bagheera authored
* 7879 - adjust documentation to use naruto dataset, since pokemon is now gated (see the sketch below)
* replace references to pokemon in docs
* more references to pokemon replaced
* Japanese translation update

Co-authored-by: bghira <bghira@users.github.com>
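In the docs this is a one-line dataset ID change; a sketch assuming the ungated replacement set keeps the same image/text schema as the pokemon set:

```python
from datasets import load_dataset

# "lambdalabs/pokemon-blip-captions" is gated; the docs now use the naruto set
dataset = load_dataset("lambdalabs/naruto-blip-captions", split="train")
print(dataset[0]["text"])  # caption column
```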
- 11 Apr, 2024 1 commit
dg845 authored
* Initialize target_unet from unet rather than teacher_unet so that we correctly add time_embedding.cond_proj if necessary.
* Use UNet2DConditionModel.from_config to initialize target_unet from unet's config (sketched below).

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
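A minimal sketch of the change (`unet` is the online student from the distillation script):

```python
from diffusers import UNet2DConditionModel

# unet.config already carries time_cond_proj_dim, so the EMA target gets the
# time_embedding.cond_proj layer that the frozen teacher lacks
target_unet = UNet2DConditionModel.from_config(unet.config)
target_unet.load_state_dict(unet.state_dict())
target_unet.requires_grad_(False)
```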
- 02 Apr, 2024 1 commit
Bagheera authored
* 7529 do not disable autocast for cuda devices
* Remove typecasting error check for non-mps platforms, as a correct autocast implementation makes it a non-issue
* add autocast fix to other training examples
* disable native_amp for dreambooth (sdxl)
* disable native_amp for pix2pix (sdxl)
* remove tests from remaining files
* disable native_amp on huggingface accelerator for every training example that uses it
* convert more usages of autocast to nullcontext, make style fixes (pattern sketched below)
* make style fixes
* style
* Empty-Commit

Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
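The recurring pattern across the examples looks roughly like this (`accelerator`, `pipeline`, and `prompt` are script-level names):

```python
import contextlib
import torch

# only MPS still needs the nullcontext escape hatch; on CUDA a correct
# autocast makes the old fp16 typecasting workaround unnecessary
if torch.backends.mps.is_available():
    autocast_ctx = contextlib.nullcontext()
else:
    autocast_ctx = torch.autocast(accelerator.device.type)

with autocast_ctx:
    images = pipeline(prompt, num_inference_steps=25).images
```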
- 18 Mar, 2024 1 commit
Sayak Paul authored
* post-release
* quality
- 13 Mar, 2024 1 commit
Sayak Paul authored
switch to logger.warning
- 09 Feb, 2024 2 commits
Sayak Paul authored
* post release
* style
* Empty-Commit

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Sayak Paul authored
- 08 Feb, 2024 2 commits
Sayak Paul authored
change to 2024
Srimanth Agastyaraju authored
* Fix: training resume from fp16 for lcm distill lora sdxl
* Fix coding quality - run linter
* Fix 1 - shift mixed precision cast before optimizer (sketched below)
* Fix 2 - State dict errors by removing load_lora_into_unet
* Update train_lcm_distill_lora_sdxl.py - Revert default cache dir to None

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
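The ordering part of the fix, sketched (script names assumed): the fp32 upcast has to happen before the optimizer captures parameter references, or resuming in fp16 leaves optimizer state attached to stale tensors.

```python
import torch
from diffusers.training_utils import cast_training_params

if args.mixed_precision == "fp16":
    # upcast trainable (LoRA) params to fp32 *before* creating the optimizer,
    # so its state is built against the fp32 tensors
    cast_training_params(unet, dtype=torch.float32)

params_to_optimize = [p for p in unet.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(params_to_optimize, lr=args.learning_rate)
```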
- 15 Jan, 2024 1 commit
Sayak Paul authored
create a utility for casting the lora params during training.
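The utility in question is `cast_training_params` in `diffusers.training_utils`; a minimal usage sketch (`unet`, `text_encoder`, and `accelerator` are assumed script-level names):

```python
import torch
from diffusers.training_utils import cast_training_params

# keep the frozen base weights in half precision...
unet.to(accelerator.device, dtype=torch.float16)
# ...but float only the trainable LoRA params back up to fp32
cast_training_params([unet, text_encoder], dtype=torch.float32)
```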
- 11 Jan, 2024 1 commit
dg845 authored
* Fix bug where unet's time_cond_proj_dim is not set correctly if using args.unet_time_cond_proj_dim (sketched below).
* make style
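A hedged reconstruction of the fix (`args` and `teacher_unet` come from the script): the parsed value must be injected when the teacher config lacks a guidance-embedding projection.

```python
from diffusers import UNet2DConditionModel

time_cond_proj_dim = (
    teacher_unet.config.time_cond_proj_dim
    if teacher_unet.config.time_cond_proj_dim is not None
    else args.unet_time_cond_proj_dim
)
# from_config accepts overrides, so the student gets the cond_proj layer even
# when the teacher was trained without one
unet = UNet2DConditionModel.from_config(
    teacher_unet.config, time_cond_proj_dim=time_cond_proj_dim
)
```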
- 05 Jan, 2024 2 commits
Sayak Paul authored
* post release
* style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
dg845 authored
* Make WDS pipeline interpolation type configurable.
* Make the VAE encoding batch size configurable.
* Make lora_alpha and lora_dropout configurable for LCM LoRA scripts.
* Generalize scalings_for_boundary_conditions function and make the timestep scaling configurable (sketched below).
* Make LoRA target modules configurable for LCM-LoRA scripts.
* Move resolve_interpolation_mode to src/diffusers/training_utils.py and make interpolation type configurable in non-WDS script.
* apply suggestions from review
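For reference, the generalized function, essentially as it appears in the distillation scripts; previously the 10.0 timestep multiplier was hard-coded:

```python
def scalings_for_boundary_conditions(timestep, sigma_data=0.5, timestep_scaling=10.0):
    # enforce the consistency-model boundary condition f(x, eps) = x:
    # c_skip -> 1 and c_out -> 0 as the scaled timestep goes to 0
    scaled_timestep = timestep_scaling * timestep
    c_skip = sigma_data**2 / (scaled_timestep**2 + sigma_data**2)
    c_out = scaled_timestep / (scaled_timestep**2 + sigma_data**2) ** 0.5
    return c_skip, c_out
```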
- 28 Dec, 2023 1 commit
Sayak Paul authored
* add to dreambooth lora.
* add: t2i lora.
* add: sdxl t2i lora.
* style
* lcm lora sdxl.
* unwrap
* fix: enable_adapters().
- 27 Dec, 2023 2 commits
Andy W authored
* fix
* style fix

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
dg845 authored
Fix bug when creating the guidance embeddings using multiple GPUs.

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
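The embeddings here are the sinusoidal guidance-scale embeddings from the LCM scripts, roughly as below; the multi-GPU class of bug is a device/dtype mismatch between the embedding and each process's batch (a hedged sketch; assumes an even `embedding_dim`):

```python
import torch

def guidance_scale_embedding(w, embedding_dim=512, dtype=torch.float32):
    # sinusoidal embedding of the guidance scale w (one value per sample)
    w = w * 1000.0
    half_dim = embedding_dim // 2
    emb = torch.log(torch.tensor(10000.0)) / (half_dim - 1)
    emb = torch.exp(torch.arange(half_dim, dtype=dtype) * -emb)
    emb = w.to(dtype)[:, None] * emb[None, :]
    return torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)

# each process must build it for its own shard, e.g.:
# w_embedding = guidance_scale_embedding(w).to(device=latents.device, dtype=latents.dtype)
```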
- 26 Dec, 2023 1 commit
Sayak Paul authored
* add: script to train lcm lora for sdxl with 🤗 datasets
* suit up the args.
* remove comments.
* fix num_update_steps
* fix batch unmarshalling
* fix num_update_steps_per_epoch
* fix: dataloading.
* fix microconditions.
* unconditional predictions debug
* fix batch size.
* no need to use use_auth_token
* Apply suggestions from code review
* make vae encoding batch size an arg
* final serialization in kohya
* state dict rejigging
* feat: no separate teacher unet.
* fix state dict serialization
* remove prints, remove kohya utility, and make style
* fix serialization
* add test
* add peft dependency, then remove peft
* autocast device determination from accelerator
* reduce lora rank.
* remove unneeded space
* remove prompt dropout.
* also save in native diffusers ckpt format.
* better formation of the null embeddings.
* autocast fixes.
* remove lora_sayak
* make log validation leaner.
* fix: log_validation call.
* add: checkpointing tests
* assorted debugging commits (prints, names, indices, lengths, autocast experiments), all since removed
* move unet.train() after add_adapter()
* enable_adapters() manually.
* fix params_to_optimize
* disable grad for certain contexts.
* Add support for IPAdapterFull (#5911)
* Fix a bug in `add_noise` function (#6085)
* [Advanced Diffusion Script] Add Widget default text (#6100)
* [Advanced Training Script] Fix pipe example (#6106)
* IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901)
* IP adapter support for most pipelines (#5900): support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py, src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py, src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py, src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py, src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py, src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py, src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py, src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py, and src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py; revert changes to sd_attend_and_excite and sd_upscale; update ip-adapter implementation to latest; update tests
* fix: lora_alpha
* make vae casting conditional
* param upcasting
* propagate comments from https://github.com/huggingface/diffusers/pull/6145
* [Peft] fix saving / loading when unet is not "unet" (#6046): use unet_name to get unet for lora helpers; undo stablediffusion-xl changes
* [Wuerstchen] fix fp16 training and correct lora args (#6245)
* [docs] fix: animatediff docs (#6339)
* add: note about the new script in readme_sdxl.
* Revert "[Peft] fix saving / loading when unet is not "unet" (#6046)" (reverts commit 4c7e983bb5929320bab08d70333eeb93f047de40)
* Revert "[Wuerstchen] fix fp16 training and correct lora args (#6245)" (reverts commit 0bb9cf0216e501632677895de6574532092282b5)
* Revert "[docs] fix: animatediff docs (#6339)" (reverts commit 11659a6f74b5187f601eeeeeb6f824dda73d0627)
* remove tokenize_prompt().
* assistive comments around enable_adapters() and disable_adapters().

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Fabio Rigano <57982783+fabiorigano@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
Co-authored-by: Charchit Sharma <charchitsharma11@gmail.com>
Co-authored-by: Aryan V S <contact.aryanvs@gmail.com>
Co-authored-by: dg845 <dgu8957@gmail.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
- 25 Dec, 2023 1 commit
dg845 authored
Change README LCM-LoRA example learning rates to 1e-4.
- 15 Dec, 2023 1 commit
dg845 authored
* Clean up comments in LCM(-LoRA) distillation scripts.
* Calculate predicted source noise noise_pred correctly for all prediction_types (sketched below).
* make style
* apply suggestions from review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
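The corrected branch structure, roughly as it appears in the scripts; `extract_into_tensor` is the small gather helper the scripts define, reproduced here for self-containment:

```python
import torch

def extract_into_tensor(a, t, x_shape):
    # gather per-timestep coefficients and reshape them for broadcasting
    b = t.shape[0]
    out = a.gather(-1, t)
    return out.reshape(b, *((1,) * (len(x_shape) - 1)))

def get_predicted_noise(model_output, timesteps, sample, prediction_type, alphas, sigmas):
    # recover epsilon from the model output for every supported prediction_type
    alphas = extract_into_tensor(alphas, timesteps, sample.shape)
    sigmas = extract_into_tensor(sigmas, timesteps, sample.shape)
    if prediction_type == "epsilon":
        pred_epsilon = model_output
    elif prediction_type == "sample":
        pred_epsilon = (sample - alphas * model_output) / sigmas
    elif prediction_type == "v_prediction":
        pred_epsilon = alphas * model_output + sigmas * sample
    else:
        raise ValueError(f"Prediction type {prediction_type} is not supported.")
    return pred_epsilon
```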
- 06 Dec, 2023 2 commits
Lucain authored
* Harmonize HF environment variables + deprecate use_auth_token (sketched below)
* fix import
* fix
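On the caller side the deprecation reads roughly as below (the repo ID is illustrative); the environment-variable side standardizes on the huggingface_hub names such as HF_HOME and HF_TOKEN.

```python
from diffusers import DiffusionPipeline

# deprecated:
#   DiffusionPipeline.from_pretrained("org/private-model", use_auth_token=True)
# harmonized with huggingface_hub:
pipe = DiffusionPipeline.from_pretrained("org/private-model", token=True)
```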
Pedro Cuenca authored
* Fix SD scripts - there are only 2 items per batch
* Adjustments to make the SDXL scripts work with other datasets
* Use public webdataset dataset for examples
* make style
* Minor tweaks to the readmes.
* Stress that the dataset is illustrative.
- 01 Dec, 2023 1 commit
Patrick von Platen authored
* Post Release: v0.24.0
* postpone deprecation
* Add model_index.json
- 27 Nov, 2023 1 commit
dg845 authored
* Fix bug related to parsing unet_time_cond_proj_dim.
* Fix analogous bug in the SD-XL LCM distillation script.
- 09 Nov, 2023 1 commit
Suraj Patil authored
* add lcm scripts

Co-authored-by: dg845 <dgu8957@gmail.com>