"...composable_kernel.git" did not exist on "2c265ebdc9f0f9993fbb205d364fde5d6e42b5e5"
  1. 13 Mar, 2024 1 commit
  2. 09 Feb, 2024 2 commits
  3. 08 Feb, 2024 2 commits
  4. 15 Jan, 2024 1 commit
  5. 11 Jan, 2024 1 commit
  6. 05 Jan, 2024 2 commits
    • 0.25.0 post release (#6358) · 9d945b2b
      Sayak Paul authored

      * post release
      
      * style
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
    • Improve LCM(-LoRA) Distillation Scripts (#6420) · f3d1333e
      dg845 authored
      * Make WDS pipeline interpolation type configurable.
      
      * Make the VAE encoding batch size configurable.
      
      * Make lora_alpha and lora_dropout configurable for LCM LoRA scripts.
      
      * Generalize scalings_for_boundary_conditions function and make the timestep scaling configurable (sketched after this message).
      
      * Make LoRA target modules configurable for LCM-LoRA scripts.
      
      * Move resolve_interpolation_mode to src/diffusers/training_utils.py and make interpolation type configurable in non-WDS script.
      
      * apply suggestions from review
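      A minimal sketch of the generalized scalings_for_boundary_conditions, assuming the standard LCM parameterization with sigma_data = 0.5; the signature is illustrative, with timestep_scaling exposed as the newly configurable knob:

          # Hedged sketch: c_skip / c_out for the LCM boundary conditions,
          # with the timestep scaling factor taken as an argument.
          def scalings_for_boundary_conditions(timestep, sigma_data=0.5, timestep_scaling=10.0):
              scaled_timestep = timestep_scaling * timestep
              c_skip = sigma_data**2 / (scaled_timestep**2 + sigma_data**2)
              c_out = scaled_timestep / (scaled_timestep**2 + sigma_data**2) ** 0.5
              return c_skip, c_out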
  7. 28 Dec, 2023 1 commit
  8. 27 Dec, 2023 2 commits
  9. 26 Dec, 2023 1 commit
    • [Training] Add `datasets` version of LCM LoRA SDXL (#5778) · 6683f979
      Sayak Paul authored
      * add: script to train lcm lora for sdxl with 🤗 datasets
      
      * suit up the args.
      
      * remove comments.
      
      * fix num_update_steps
      
      * fix batch unmarshalling
      
      * fix num_update_steps_per_epoch
      
      * fix: dataloading.
      
      * fix microconditions.
      
      * unconditional predictions debug
      
      * fix batch size.
      
      * no need to use use_auth_token
      
      * Apply suggestions from code review
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * make vae encoding batch size an arg
      
      * final serialization in kohya
      
      * style
      
      * state dict rejigging
      
      * feat: no separate teacher unet.
      
      * debug
      
      * fix state dict serialization
      
      * debug
      
      * debug
      
      * debug
      
      * remove prints.
      
      * remove kohya utility and make style
      
      * fix serialization
      
      * fix
      
      * add test
      
      * add peft dependency.
      
      * add: peft
      
      * remove peft
      
      * autocast device determination from accelerator
      
      * autocast
      
      * reduce lora rank.
      
      * remove unneeded space
      
      * Apply suggestions from code review
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * style
      
      * remove prompt dropout.
      
      * also save in native diffusers ckpt format.
      
      * debug
      
      * debug
      
      * debug
      
      * better formation of the null embeddings.
      
      * remove space.
      
      * autocast fixes.
      
      * autocast fix.
      
      * hacky
      
      * remove lora_sayak
      
      * Apply suggestions from code review
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      
      * style
      
      * make log validation leaner.
      
      * move back enabled in.
      
      * fix: log_validation call.
      
      * add: checkpointing tests
      
      * taking my chances to see if disabling autocasting has any effect?
      
      * start debugging
      
      * name
      
      * name
      
      * name
      
      * more debug
      
      * more debug
      
      * index
      
      * remove index.
      
      * print length
      
      * print length
      
      * print length
      
      * move unet.train() after add_adapter()
      
      * disable some prints.
      
      * enable_adapters() manually (see the adapter-wiring sketch after this message).
      
      * remove prints.
      
      * some changes.
      
      * fix params_to_optimize
      
      * more fixes
      
      * debug
      
      * debug
      
      * remove print
      
      * disable grad for certain contexts.
      
      * Add support for IPAdapterFull (#5911)
      
      * Add support for IPAdapterFull
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Fix a bug in `add_noise` function  (#6085)
      
      * fix
      
      * copies
      
      ---------
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
      
      * [Advanced Diffusion Script] Add Widget default text (#6100)
      
      add widget
      
      * [Advanced Training Script] Fix pipe example (#6106)
      
      * IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901)
      
      * adapter for StableDiffusionControlNetImg2ImgPipeline
      
      * fix-copies
      
      * fix-copies
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * IP adapter support for most pipelines (#5900) (loading/usage sketched after this message)
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
      
      * update tests
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
      
      * support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py
      
      * support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
      
      * revert changes to sd_attend_and_excite and sd_upscale
      
      * make style
      
      * fix broken tests
      
      * update ip-adapter implementation to latest
      
      * apply suggestions from review
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * fix: lora_alpha
      
      * make vae casting conditional.
      
      * param upcasting
      
      * propagate comments from https://github.com/huggingface/diffusers/pull/6145
      
      Co-authored-by: dg845 <dgu8957@gmail.com>
      
      * [Peft] fix saving / loading when unet is not "unet" (#6046)
      
      * [Peft] fix saving / loading when unet is not "unet"
      
      * Update src/diffusers/loaders/lora.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * undo stablediffusion-xl changes
      
      * use unet_name to get unet for lora helpers
      
      * use unet_name
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * [Wuerstchen] fix fp16 training and correct lora args (#6245)
      
      fix fp16 training
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * [docs] fix: animatediff docs (#6339)
      
      fix: animatediff docs
      
      * add: note about the new script in readme_sdxl.
      
      * Revert "[Peft] fix saving / loading when unet is not "unet" (#6046)"
      
      This reverts commit 4c7e983bb5929320bab08d70333eeb93f047de40.
      
      * Revert "[Wuerstchen] fix fp16 training and correct lora args (#6245)"
      
      This reverts commit 0bb9cf0216e501632677895de6574532092282b5.
      
      * Revert "[docs] fix: animatediff docs (#6339)"
      
      This reverts commit 11659a6f74b5187f601eeeeeb6f824dda73d0627.
      
      * remove tokenize_prompt().
      
      * assistive comments around enable_adapters() and disable_adapters().
      
      ---------
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: Fabio Rigano <57982783+fabiorigano@users.noreply.github.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
      Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
      Co-authored-by: Charchit Sharma <charchitsharma11@gmail.com>
      Co-authored-by: Aryan V S <contact.aryanvs@gmail.com>
      Co-authored-by: dg845 <dgu8957@gmail.com>
      Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
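      A minimal sketch, assuming diffusers' PEFT integration, of the adapter wiring the bullets above describe: attach the LoRA adapter before unet.train(), enable it explicitly, collect only the trainable LoRA parameters, and derive the autocast device from the accelerator. Here unet and accelerator are assumed to be in scope, and the rank, alpha, and target modules are placeholders rather than the script's exact values:

          import torch
          from peft import LoraConfig

          # Placeholder LoRA config; r / lora_alpha / target_modules are illustrative.
          lora_config = LoraConfig(
              r=64,
              lora_alpha=64,
              target_modules=["to_q", "to_k", "to_v", "to_out.0"],
          )
          unet.add_adapter(lora_config)  # add the adapter first...
          unet.train()                   # ...then switch to train mode
          unet.enable_adapters()         # enable explicitly, per the commit

          # Optimize only the LoRA parameters (everything else stays frozen).
          params_to_optimize = [p for p in unet.parameters() if p.requires_grad]

          # Autocast device type determined from the accelerator.
          with torch.autocast(accelerator.device.type):
              ...  # forward pass / loss computation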
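      For the squashed IP-Adapter PRs above, usage follows the same pattern across the supported pipelines; a hedged example, where the base model, adapter checkpoint, and reference image are illustrative choices rather than anything pinned by these commits:

          import torch
          from diffusers import StableDiffusionPipeline
          from diffusers.utils import load_image

          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          ).to("cuda")

          # Load IP-Adapter weights, then pass a reference image at call time.
          pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
          reference_image = load_image("https://example.com/reference.png")  # placeholder URL
          image = pipe(prompt="a cat", ip_adapter_image=reference_image).images[0]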
  10. 25 Dec, 2023 1 commit
  11. 15 Dec, 2023 1 commit
  12. 06 Dec, 2023 2 commits
  13. 01 Dec, 2023 1 commit
  14. 27 Nov, 2023 1 commit
  15. 09 Nov, 2023 1 commit