1. 10 Jan, 2024 1 commit
    • example: Train InstructPix2Pix with LoRA implementation (#6469) · 2d1f2182
      Rahul Raman authored
      * base template file - train_instruct_pix2pix.py
      
* additional imports and parser arguments required for lora
      
* finetune only the instructpix2pix model -- no need to include these layers
      
      * inject lora layers
      
* freeze unet model -- only lora layers are trained (setup sketched after this entry)
      
      * training modifications to train only lora parameters
      
      * store only lora parameters
      
      * move train script to research project
      
      * run quality and style code checks
      
      * move train script to a new folder
      
      * add README
      
      * update README
      
      * update references in README
      
      ---------
      Co-authored-by: Rahul Raman <rahulraman@gmail.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
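
      A minimal sketch of the setup these commits describe, assuming peft's
      `LoraConfig` and the `add_adapter()` helper diffusers exposes on the UNet;
      the checkpoint id and hyperparameters are illustrative, not the script's
      defaults:

      ```python
      import torch
      from diffusers import StableDiffusionInstructPix2PixPipeline
      from peft import LoraConfig
      from peft.utils import get_peft_model_state_dict

      pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
          "timbrooks/instruct-pix2pix"  # illustrative base checkpoint
      )
      unet = pipe.unet

      # Freeze the whole unet; only the injected LoRA layers will train.
      unet.requires_grad_(False)

      # Inject LoRA layers into the attention projections.
      unet.add_adapter(
          LoraConfig(
              r=4,
              lora_alpha=4,
              init_lora_weights="gaussian",
              target_modules=["to_k", "to_q", "to_v", "to_out.0"],
          )
      )

      # Train only the LoRA parameters...
      lora_params = [p for p in unet.parameters() if p.requires_grad]
      optimizer = torch.optim.AdamW(lora_params, lr=1e-4)

      # ...and store only the LoRA parameters.
      lora_state_dict = get_peft_model_state_dict(unet)
      ```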
  2. 09 Jan, 2024 3 commits
  3. 05 Jan, 2024 8 commits
  4. 03 Jan, 2024 2 commits
  5. 02 Jan, 2024 2 commits
  6. 01 Jan, 2024 1 commit
  7. 30 Dec, 2023 1 commit
  8. 29 Dec, 2023 2 commits
  9. 28 Dec, 2023 1 commit
  10. 27 Dec, 2023 7 commits
  11. 26 Dec, 2023 4 commits
    • amused update links to new repo (#6344) · 0af12f1f
      Will Berman authored
      * amused update links to new repo
      
      * lint
    • [SDXL-IP2P] Update README_sdxl, replace the wandb log link with the correct run (#6270) · fa317044
      priprapre authored
      Replace the link for wandb log with the correct run
    • [Training] Add `datasets` version of LCM LoRA SDXL (#5778) · 6683f979
      Sayak Paul authored
* add: script to train lcm lora for sdxl with 🤗 datasets (dataloading sketched after this entry)
      
      * suit up the args.
      
      * remove comments.
      
      * fix num_update_steps
      
      * fix batch unmarshalling
      
      * fix num_update_steps_per_epoch
      
* fix: dataloading.
      
      * fix microconditions.
      
      * unconditional predictions debug
      
      * fix batch size.
      
      * no need to use use_auth_token
      
      * Apply suggestions from code review
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * make vae encoding batch size an arg
      
      * final serialization in kohya
      
      * style
      
      * state dict rejigging
      
      * feat: no separate teacher unet.
      
      * debug
      
      * fix state dict serialization
      
      * debug
      
      * debug
      
      * debug
      
      * remove prints.
      
      * remove kohya utility and make style
      
      * fix serialization
      
      * fix
      
      * add test
      
      * add peft dependency.
      
      * add: peft
      
      * remove peft
      
      * autocast device determination from accelerator
      
      * autocast
      
      * reduce lora rank.
      
      * remove unneeded space
      
      * Apply suggestions from code review
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * style
      
      * remove prompt dropout.
      
      * also save in native diffusers ckpt format.
      
      * debug
      
      * debug
      
      * debug
      
      * better formation of the null embeddings.
      
      * remove space.
      
      * autocast fixes.
      
      * autocast fix.
      
      * hacky
      
      * remove lora_sayak
      
      * Apply suggestions from code review
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      
      * style
      
      * make log validation leaner.
      
      * move back enabled in.
      
      * fix: log_validation call.
      
      * add: checkpointing tests
      
      * taking my chances to see if disabling autocasting has any effect?
      
      * start debugging
      
      * name
      
      * name
      
      * name
      
      * more debug
      
      * more debug
      
      * index
      
      * remove index.
      
      * print length
      
      * print length
      
      * print length
      
      * move unet.train() after add_adapter()
      
      * disable some prints.
      
      * enable_adapters() manually.
      
      * remove prints.
      
      * some changes.
      
      * fix params_to_optimize
      
      * more fixes
      
      * debug
      
      * debug
      
      * remove print
      
      * disable grad for certain contexts.
      
      * Add support for IPAdapterFull (#5911)
      
      * Add support for IPAdapterFull
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
* Fix a bug in `add_noise` function (#6085)
      
      * fix
      
      * copies
      
      ---------
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
      
      * [Advanced Diffusion Script] Add Widget default text (#6100)
      
      add widget
      
      * [Advanced Training Script] Fix pipe example (#6106)
      
      * IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901)
      
      * adapter for StableDiffusionControlNetImg2ImgPipeline
      
      * fix-copies
      
      * fix-copies
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * IP adapter support for most pipelines (#5900)
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
      
      * update tests
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
      
      * support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py
      
      * support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
      
      * revert changes to sd_attend_and_excite and sd_upscale
      
      * make style
      
      * fix broken tests
      
      * update ip-adapter implementation to latest
      
      * apply suggestions from review
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * fix: lora_alpha
      
* make vae casting conditional.
      
      * param upcasting
      
      * propagate comments from https://github.com/huggingface/diffusers/pull/6145
      
      Co-authored-by: dg845 <dgu8957@gmail.com>
      
      * [Peft] fix saving / loading when unet is not "unet" (#6046)
      
      * [Peft] fix saving / loading when unet is not "unet"
      
      * Update src/diffusers/loaders/lora.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * undo stablediffusion-xl changes
      
      * use unet_name to get unet for lora helpers
      
      * use unet_name
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * [Wuerstchen] fix fp16 training and correct lora args (#6245)
      
      fix fp16 training
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * [docs] fix: animatediff docs (#6339)
      
      fix: animatediff docs
      
      * add: note about the new script in readme_sdxl.
      
      * Revert "[Peft] fix saving / loading when unet is not "unet" (#6046)"
      
      This reverts commit 4c7e983bb5929320bab08d70333eeb93f047de40.
      
      * Revert "[Wuerstchen] fix fp16 training and correct lora args (#6245)"
      
      This reverts commit 0bb9cf0216e501632677895de6574532092282b5.
      
      * Revert "[docs] fix: animatediff docs (#6339)"
      
      This reverts commit 11659a6f74b5187f601eeeeeb6f824dda73d0627.
      
      * remove tokenize_prompt().
      
* assistive comments around enable_adapters() and disable_adapters().
      
      ---------
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: Fabio Rigano <57982783+fabiorigano@users.noreply.github.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
      Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
      Co-authored-by: Charchit Sharma <charchitsharma11@gmail.com>
      Co-authored-by: Aryan V S <contact.aryanvs@gmail.com>
      Co-authored-by: dg845 <dgu8957@gmail.com>
      Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
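
      The headline change in this entry is the `datasets`-backed data path. A
      hedged sketch of what that loading step can look like (dataset id, column
      names, and resolution are placeholders, not the script's actual
      arguments):

      ```python
      import torch
      from datasets import load_dataset
      from torchvision import transforms

      # Placeholder dataset with "image" and "text" columns.
      dataset = load_dataset("lambdalabs/naruto-blip-captions", split="train")

      train_transforms = transforms.Compose([
          transforms.Resize(1024, interpolation=transforms.InterpolationMode.BILINEAR),
          transforms.CenterCrop(1024),
          transforms.ToTensor(),
          transforms.Normalize([0.5], [0.5]),
      ])

      def preprocess(examples):
          examples["pixel_values"] = [
              train_transforms(image.convert("RGB")) for image in examples["image"]
          ]
          return examples

      # Applied lazily, batch by batch, when examples are accessed.
      dataset = dataset.with_transform(preprocess)

      def collate_fn(rows):
          return {
              "pixel_values": torch.stack([row["pixel_values"] for row in rows]),
              "captions": [row["text"] for row in rows],
          }

      dataloader = torch.utils.data.DataLoader(
          dataset, batch_size=4, shuffle=True, collate_fn=collate_fn
      )
      ```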
    • [Wuerstchen] fix fp16 training and correct lora args (#6245) · 35b81fff
      Kashif Rasul authored
      fix fp16 training (upcasting pattern sketched after this entry)
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
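
      The usual shape of this kind of fp16 fix, as a hedged sketch (the helper
      below is hypothetical, not the exact diff): keep the frozen fp16 base
      weights, but upcast the trainable LoRA parameters to fp32 so their
      gradient updates don't underflow.

      ```python
      import torch

      def upcast_trainable_params(model: torch.nn.Module) -> None:
          """Upcast trainable (LoRA) parameters to fp32 while the frozen
          base weights stay in fp16."""
          for param in model.parameters():
              if param.requires_grad:
                  param.data = param.data.to(torch.float32)

      # Hypothetical usage, after moving the Wuerstchen prior to fp16:
      #   prior.to(device, dtype=torch.float16)
      #   upcast_trainable_params(prior)
      ```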
  12. 25 Dec, 2023 3 commits
    • Change LCM-LoRA README Script Example Learning Rates to 1e-4 (#6304) · a3d31e3a
      dg845 authored
      Change README LCM-LoRA example learning rates to 1e-4.
    • fix: cannot set guidance_scale (#6326) · 84c403ae
      Jianqi Pan authored
fix: set guidance_scale (pattern sketched after this entry)
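
      A schematic sketch of the bug class this fix addresses, under
      assumptions (class and attribute names are illustrative): newer
      diffusers pipelines expose `guidance_scale` as a read-only property
      backed by a private attribute, so `__call__` has to assign that
      attribute before anything reads it.

      ```python
      class ExamplePipeline:
          @property
          def guidance_scale(self):
              return self._guidance_scale

          def __call__(self, prompt, guidance_scale=7.5):
              # Forgetting this assignment leaves the property stuck at its
              # previous value, so callers "cannot set guidance_scale".
              self._guidance_scale = guidance_scale
              # ... rest of the denoising loop ...
      ```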
    • [Tests] Speed up example tests (#6319) · f4b0b26f
      Sayak Paul authored
* remove validation args from textual inversion tests
      
      * reduce number of train steps in textual inversion tests
      
      * fix: directories.
      
* debug
      
      * fix: directories.
      
* remove validation tests from textual inversion
      
      * try reducing the time of test_text_to_image_checkpointing_use_ema
      
      * fix: directories
      
      * speed up test_text_to_image_checkpointing
      
      * speed up test_text_to_image_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
      
      * fix
      
      * speed up test_instruct_pix2pix_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
      
* set checkpoints_total_limit to 2 (pattern sketched after this entry).
      
      * test_text_to_image_lora_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints speed up
      
      * speed up test_unconditional_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
      
      * debug
      
      * fix: directories.
      
      * speed up test_instruct_pix2pix_checkpointing_checkpoints_total_limit
      
      * speed up: test_controlnet_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
      
      * speed up test_controlnet_sdxl
      
      * speed up dreambooth tests
      
      * speed up test_dreambooth_lora_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
      
      * speed up test_custom_diffusion_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
      
      * speed up test_text_to_image_lora_sdxl_text_encoder_checkpointing_checkpoints_total_limit
      
* speed up the "# checkpoint-2 should have been deleted" check
      
      * speed up examples/text_to_image/test_text_to_image.py::TextToImage::test_text_to_image_checkpointing_checkpoints_total_limit
      
      * additional speed ups
      
      * style
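
      The recurring pattern behind these speed-ups, sketched under assumptions
      (the tiny test checkpoints and concrete argument values are
      illustrative): run the example script for only a handful of steps with a
      small --checkpoints_total_limit, then assert that only the newest
      checkpoints survive.

      ```python
      import os
      import subprocess
      import tempfile

      with tempfile.TemporaryDirectory() as tmpdir:
          subprocess.run(
              [
                  "python", "examples/text_to_image/train_text_to_image.py",
                  "--pretrained_model_name_or_path=hf-internal-testing/tiny-stable-diffusion-pipe",
                  "--dataset_name=hf-internal-testing/dummy_image_text_data",
                  "--resolution=64",
                  "--max_train_steps=6",
                  "--checkpointing_steps=2",
                  "--checkpoints_total_limit=2",
                  f"--output_dir={tmpdir}",
              ],
              check=True,
          )
          checkpoints = sorted(
              d for d in os.listdir(tmpdir) if d.startswith("checkpoint")
          )
          # checkpoint-2 should have been deleted by the retention limit.
          assert checkpoints == ["checkpoint-4", "checkpoint-6"]
      ```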
  13. 24 Dec, 2023 2 commits
  14. 22 Dec, 2023 2 commits
  15. 21 Dec, 2023 1 commit
    • open muse (#5437) · 40398152
      Will Berman authored
      amused
      
      rename
      
      Update docs/source/en/api/pipelines/amused.md
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      AdaLayerNormContinuous default values
      
      custom micro conditioning
      
      micro conditioning docs
      
      put lookup from codebook in constructor
      
      fix conversion script
      
      remove manual fused flash attn kernel
      
      add training script
      
      temp remove training script
      
      add dummy gradient checkpointing func
      
      clarify temperatures is an instance variable by setting it
      
      remove additional SkipFF block args
      
      hardcode norm args
      
      rename tests folder
      
      fix paths and samples
      
      fix tests
      
      add training script
      
      training readme
      
      lora saving and loading
      
      non-lora saving/loading
      
      some readme fixes
      
      guards
      
      Update docs/source/en/api/pipelines/amused.md
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      Update examples/amused/README.md
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      Update examples/amused/train_amused.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      vae upcasting
      
      add fp16 integration tests
      
      use tuple for micro cond
      
      copyrights
      
      remove casts
      
      delegate to torch.nn.LayerNorm
      
      move temperature to pipeline call
      
upsampling/downsampling changes (usage sketched after this entry)
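
      For reference, a minimal usage sketch of the pipeline this commit
      introduces (model id per the aMUSEd release; arguments are illustrative):

      ```python
      import torch
      from diffusers import AmusedPipeline

      pipe = AmusedPipeline.from_pretrained(
          "amused/amused-256", variant="fp16", torch_dtype=torch.float16
      ).to("cuda")

      # aMUSEd generates by iteratively unmasking discrete image tokens,
      # so it needs only a small number of steps.
      image = pipe("a photo of a cowboy", num_inference_steps=12).images[0]
      image.save("cowboy.png")
      ```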