1. 12 Jan, 2024 4 commits
  2. 11 Jan, 2024 3 commits
  3. 10 Jan, 2024 1 commit
• example: Train InstructPix2Pix with LoRA implementation (#6469) · 2d1f2182
  Rahul Raman authored
      * base template file - train_instruct_pix2pix.py
      
* additional import and parser argument required for lora
      
      * finetune only instructpix2pix model -- no need to include these layers
      
* inject lora layers (a minimal sketch follows this entry)
      
      * freeze unet model -- only lora layers are trained
      
      * training modifications to train only lora parameters
      
      * store only lora parameters
      
      * move train script to research project
      
      * run quality and style code checks
      
      * move train script to a new folder
      
      * add README
      
      * update README
      
      * update references in README
      
      ---------
Co-authored-by: Rahul Raman <rahulraman@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
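A minimal sketch of the recipe the bullets above describe, assuming the peft library and the public timbrooks/instruct-pix2pix checkpoint; the rank and target modules here are illustrative, not the script's defaults:

```python
# Hedged sketch: freeze the InstructPix2Pix UNet, inject LoRA layers,
# then train and store only the LoRA parameters.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from peft import LoraConfig
from peft.utils import get_peft_model_state_dict

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix"
)
unet = pipe.unet

# Freeze the whole UNet -- only the injected LoRA layers will train.
unet.requires_grad_(False)

# Inject LoRA into the attention projections (rank/targets illustrative).
unet.add_adapter(
    LoraConfig(r=4, lora_alpha=4,
               target_modules=["to_q", "to_k", "to_v", "to_out.0"])
)

# Hand only the trainable (LoRA) parameters to the optimizer.
lora_params = [p for p in unet.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(lora_params, lr=1e-4)

# ... training loop updates `lora_params` only ...

# Store only the LoRA parameters, not the full UNet.
torch.save(get_peft_model_state_dict(unet), "instruct_pix2pix_lora.pt")
```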
  4. 09 Jan, 2024 3 commits
  5. 05 Jan, 2024 8 commits
  6. 03 Jan, 2024 2 commits
  7. 02 Jan, 2024 2 commits
  8. 01 Jan, 2024 1 commit
  9. 30 Dec, 2023 1 commit
  10. 29 Dec, 2023 2 commits
  11. 28 Dec, 2023 1 commit
  12. 27 Dec, 2023 7 commits
  13. 26 Dec, 2023 4 commits
• amused update links to new repo (#6344) · 0af12f1f
  Will Berman authored
      * amused update links to new repo
      
      * lint
• [SDXL-IP2P] Update README_sdxl, Replace the link for wandb log with the correct run (#6270) · fa317044
  priprapre authored
      Replace the link for wandb log with the correct run
• [Training] Add `datasets` version of LCM LoRA SDXL (#5778) · 6683f979
  Sayak Paul authored
* add: script to train lcm lora for sdxl with 🤗 datasets (loading/serialization sketch after this entry)
      
      * suit up the args.
      
      * remove comments.
      
      * fix num_update_steps
      
      * fix batch unmarshalling
      
      * fix num_update_steps_per_epoch
      
* fix: dataloading.
      
      * fix microconditions.
      
      * unconditional predictions debug
      
      * fix batch size.
      
      * no need to use use_auth_token
      
      * Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * make vae encoding batch size an arg
      
      * final serialization in kohya
      
      * style
      
      * state dict rejigging
      
      * feat: no separate teacher unet.
      
      * debug
      
      * fix state dict serialization
      
      * debug
      
      * debug
      
      * debug
      
      * remove prints.
      
      * remove kohya utility and make style
      
      * fix serialization
      
      * fix
      
      * add test
      
      * add peft dependency.
      
      * add: peft
      
      * remove peft
      
      * autocast device determination from accelerator
      
      * autocast
      
      * reduce lora rank.
      
      * remove unneeded space
      
      * Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * style
      
      * remove prompt dropout.
      
      * also save in native diffusers ckpt format.
      
      * debug
      
      * debug
      
      * debug
      
      * better formation of the null embeddings.
      
      * remove space.
      
      * autocast fixes.
      
      * autocast fix.
      
      * hacky
      
      * remove lora_sayak
      
      * Apply suggestions from code review
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      
      * style
      
      * make log validation leaner.
      
      * move back enabled in.
      
      * fix: log_validation call.
      
      * add: checkpointing tests
      
      * taking my chances to see if disabling autocasting has any effect?
      
      * start debugging
      
      * name
      
      * name
      
      * name
      
      * more debug
      
      * more debug
      
      * index
      
      * remove index.
      
      * print length
      
      * print length
      
      * print length
      
      * move unet.train() after add_adapter()
      
      * disable some prints.
      
      * enable_adapters() manually.
      
      * remove prints.
      
      * some changes.
      
      * fix params_to_optimize
      
      * more fixes
      
      * debug
      
      * debug
      
      * remove print
      
      * disable grad for certain contexts.
      
      * Add support for IPAdapterFull (#5911)
      
      * Add support for IPAdapterFull
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      ---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
* Fix a bug in `add_noise` function (#6085)
      
      * fix
      
      * copies
      
      ---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
      
      * [Advanced Diffusion Script] Add Widget default text (#6100)
      
      add widget
      
      * [Advanced Training Script] Fix pipe example (#6106)
      
      * IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901)
      
      * adapter for StableDiffusionControlNetImg2ImgPipeline
      
      * fix-copies
      
      * fix-copies
      
      ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
* IP adapter support for most pipelines (#5900) (usage sketch after this entry)
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
      
      * update tests
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
      
      * support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py
      
      * support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
      
      * revert changes to sd_attend_and_excite and sd_upscale
      
      * make style
      
      * fix broken tests
      
      * update ip-adapter implementation to latest
      
      * apply suggestions from review
      
      ---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * fix: lora_alpha
      
* make vae casting conditional.
      
      * param upcasting
      
      * propagate comments from https://github.com/huggingface/diffusers/pull/6145
      
Co-authored-by: dg845 <dgu8957@gmail.com>
      
      * [Peft] fix saving / loading when unet is not "unet" (#6046)
      
      * [Peft] fix saving / loading when unet is not "unet"
      
      * Update src/diffusers/loaders/lora.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * undo stablediffusion-xl changes
      
      * use unet_name to get unet for lora helpers
      
      * use unet_name
      
      ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * [Wuerstchen] fix fp16 training and correct lora args (#6245)
      
      fix fp16 training
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * [docs] fix: animatediff docs (#6339)
      
      fix: animatediff docs
      
      * add: note about the new script in readme_sdxl.
      
      * Revert "[Peft] fix saving / loading when unet is not "unet" (#6046)"
      
      This reverts commit 4c7e983bb5929320bab08d70333eeb93f047de40.
      
      * Revert "[Wuerstchen] fix fp16 training and correct lora args (#6245)"
      
      This reverts commit 0bb9cf0216e501632677895de6574532092282b5.
      
      * Revert "[docs] fix: animatediff docs (#6339)"
      
      This reverts commit 11659a6f74b5187f601eeeeeb6f824dda73d0627.
      
      * remove tokenize_prompt().
      
* assistive comments around enable_adapters() and disable_adapters().
      
      ---------
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Fabio Rigano <57982783+fabiorigano@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
Co-authored-by: Charchit Sharma <charchitsharma11@gmail.com>
Co-authored-by: Aryan V S <contact.aryanvs@gmail.com>
Co-authored-by: dg845 <dgu8957@gmail.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
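Two pieces of the entry above generalize well: loading image-caption data with 🤗 datasets, and the native diffusers LoRA serialization the script converged on. A hedged sketch; the dataset id, rank, and target modules are placeholders, not the script's values (the real script reads them from CLI args):

```python
# Hedged sketch of the datasets-based flow and the final serialization.
# Dataset id and LoRA hyperparameters are placeholders, not the script's.
from datasets import load_dataset
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel
from diffusers.utils import convert_state_dict_to_diffusers
from peft import LoraConfig
from peft.utils import get_peft_model_state_dict

# Image-caption pairs straight from the Hub via 🤗 datasets.
dataset = load_dataset("lambdalabs/pokemon-blip-captions", split="train")

# Frozen SDXL UNet with trainable LoRA adapters on top.
unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)
unet.requires_grad_(False)
unet.add_adapter(
    LoraConfig(r=8, lora_alpha=8,
               target_modules=["to_q", "to_k", "to_v", "to_out.0"])
)

# ... LCM distillation loop over `dataset` goes here ...

# Save only the LoRA layers in the native diffusers checkpoint format.
StableDiffusionXLPipeline.save_lora_weights(
    save_directory="lcm-lora-sdxl",
    unet_lora_layers=convert_state_dict_to_diffusers(
        get_peft_model_state_dict(unet)
    ),
)
```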
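The IP-Adapter support added to the pipelines listed above follows one pattern everywhere: load the adapter weights once, then pass a reference image per call. A usage sketch with the instruct-pix2pix pipeline; the checkpoint ids and image URLs are assumptions, not taken from the commit:

```python
# Usage sketch for the IP-Adapter support added above. The same two calls
# (load_ip_adapter + ip_adapter_image) apply to each covered pipeline.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

# Load the IP-Adapter weights (and their image encoder) into the pipeline.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")

image = load_image("https://example.com/input.png")      # image to edit
style = load_image("https://example.com/reference.png")  # adapter reference

result = pipe(
    "make it a watercolor painting",
    image=image,
    ip_adapter_image=style,  # argument enabled by this PR
).images[0]
```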
• [Wuerstchen] fix fp16 training and correct lora args (#6245) · 35b81fff
  Kashif Rasul authored
fix fp16 training (see the upcasting sketch below)
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
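The log does not show PR #6245's diff, so as a hedged sketch only: the usual remedy for this kind of fp16 LoRA training failure is to keep the frozen base weights in half precision while upcasting the trainable LoRA parameters to float32:

```python
# Hedged sketch of the standard fp16-training remedy (assumed, not the
# verbatim patch): upcast only the trainable LoRA parameters to float32.
import torch

def upcast_trainable_params(model: torch.nn.Module) -> None:
    """Keep frozen weights in fp16; train the LoRA parameters in fp32."""
    for param in model.parameters():
        if param.requires_grad:
            param.data = param.data.to(torch.float32)
```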
  14. 25 Dec, 2023 1 commit