1. 24 Jun, 2024 1 commit
  2. 13 Jun, 2024 1 commit
  3. 29 May, 2024 2 commits
  4. 20 May, 2024 1 commit
  5. 30 Apr, 2024 1 commit
    • Add B-Lora training option to the advanced dreambooth lora script (#7741) · 26a7851e
      Linoy Tsaban authored
      
      
      * add blora
      
      * add blora
      
      * add blora
      
      * add blora
      
      * little changes
      
      * little changes
      
      * remove redundancies
      
      * fixes
      
      * add B LoRA to readme
      
      * style
      
      * inference
      
      * defaults + path to loras + generation
      
      * minor changes
      
      * style
      
      * minor changes
      
      * minor changes
      
      * blora arg
      
      * added --lora_unet_blocks
      
      * style
      
      * Update examples/advanced_diffusion_training/README.md
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * add commit hash to B-LoRA repo cloning
      
      * change inference, remove cloning
      
      * change inference, remove cloning
      add section about configurable unet blocks
      
      * change inference, remove cloning
      add section about configurable unet blocks
      
      * Apply suggestions from code review
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
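
      Since several bullets above touch inference, here is a minimal sketch of B-LoRA-style loading with diffusers. The repo id and the block prefix are illustrative placeholders, not values taken from this commit; the filtering idea mirrors the --lora_unet_blocks option the commit adds.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Hedged B-LoRA-style inference sketch: apply LoRA weights only from a
# selected unet block by filtering the state dict before loading.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

state_dict, _ = pipe.lora_state_dict("your-username/your-b-lora")  # placeholder repo id
block_prefix = "unet.up_blocks.0.attentions.0"  # placeholder block prefix
filtered = {k: v for k, v in state_dict.items() if block_prefix in k}
pipe.load_lora_weights(filtered)

image = pipe("a photo in the learned style").images[0]
```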
  6. 02 Apr, 2024 1 commit
    • 7529 do not disable autocast for cuda devices (#7530) · 8e963d1c
      Bagheera authored
      
      
      * 7529 do not disable autocast for cuda devices
      
      * Remove typecasting error check for non-mps platforms, as a correct autocast implementation makes it a non-issue
      
      * add autocast fix to other training examples
      
      * disable native_amp for dreambooth (sdxl)
      
      * disable native_amp for pix2pix (sdxl)
      
      * remove tests from remaining files
      
      * disable native_amp on huggingface accelerator for every training example that uses it
      
      * convert more usages of autocast to nullcontext, make style fixes
      
      * make style fixes
      
      * style.
      
      * Empty-Commit
      
      ---------
      Co-authored-by: bghira <bghira@users.github.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
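
      A hedged sketch of the pattern this PR describes (function and variable names are illustrative): rather than wrapping validation in torch.autocast(enabled=False), which the PR identifies as harmful on CUDA, use a real autocast context only when mixed precision is wanted and a no-op nullcontext otherwise.

```python
import contextlib
import torch

def validation_ctx(device_type: str, use_amp: bool):
    # Use autocast only when mixed precision is actually requested;
    # otherwise fall back to a no-op context instead of autocast(enabled=False).
    return torch.autocast(device_type) if use_amp else contextlib.nullcontext()

with validation_ctx("cuda", use_amp=False):
    pass  # run the validation pipeline here
```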
  7. 27 Mar, 2024 1 commit
  8. 26 Mar, 2024 1 commit
  9. 18 Mar, 2024 1 commit
  10. 14 Mar, 2024 1 commit
  11. 13 Mar, 2024 1 commit
  12. 06 Mar, 2024 1 commit
  13. 04 Mar, 2024 2 commits
  14. 15 Feb, 2024 1 commit
  15. 09 Feb, 2024 2 commits
  16. 08 Feb, 2024 1 commit
  17. 03 Feb, 2024 1 commit
    • [advanced dreambooth lora sdxl script] new features + bug fixes (#6691) · 65329aed
      Linoy Tsaban authored
      
      
      * add noise_offset param
      
      * micro conditioning - wip
      
      * image processing adjusted and moved to support micro conditioning
      
      * change time ids to be computed inside train loop
      
      * change time ids to be computed inside train loop
      
      * change time ids to be computed inside train loop
      
      * time ids shape fix
      
      * move token replacement of validation prompt to the same section as instance prompt and class prompt
      
      * add offset noise to sd15 advanced script
      
      * fix token loading during validation
      
      * fix token loading during validation in sdxl script
      
      * a little clean
      
      * style
      
      * a little clean
      
      * style
      
      * sdxl script - a little clean + minor path fix
      
      sd 1.5 script - change default resolution value
      
      * sd 1.5 script - minor path fix
      
      * fix missing comma in code example in model card
      
      * clean up commented lines
      
      * style
      
      * remove time ids computed outside training loop - no longer used now that we utilize micro-conditioning, as all time ids are now computed inside the training loop
      
      * style
      
      * [WIP] - added draft readme, building off of examples/dreambooth/README.md
      
      * readme
      
      * readme
      
      * readme
      
      * readme
      
      * readme
      
      * readme
      
      * readme
      
      * readme
      
      * removed --crops_coords_top_left from CLI args
      
      * style
      
      * fix missing shape bug due to missing RGB if statement
      
      * add blog mention at the start of the readme as well
      
      * Update examples/advanced_diffusion_training/README.md
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * change note to render nicely as well
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
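
      For context on the micro-conditioning bullets above, here is an illustrative (not verbatim) sketch of how SDXL time ids are typically computed per image inside the training loop, from original size, crop coordinates, and target size:

```python
import torch

def compute_time_ids(original_size, crops_coords_top_left, target_size,
                     device="cuda", dtype=torch.float32):
    # SDXL micro-conditioning: concatenate the (h, w) of the original image,
    # the top-left crop coordinates, and the target resolution into a
    # single 6-dim vector per sample.
    add_time_ids = list(original_size + crops_coords_top_left + target_size)
    return torch.tensor([add_time_ids], device=device, dtype=dtype)

# Example: an uncropped 1024x1024 image trained at 1024x1024 resolution.
time_ids = compute_time_ids((1024, 1024), (0, 0), (1024, 1024))
```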
  18. 24 Jan, 2024 1 commit
    • SD 1.5 Support For Advanced Lora Training (train_dreambooth_lora_sdxl_advanced.py) (#6449) · 16748d1e
      Brandon Strong authored
      
      
      * sd1.5 support in separate script
      
      A quick adaptation to support people interested in using this method on 1.5 models.
      
      * sd15 prompt text encoding and unet conversions
      
      as per @linoytsaban's recommendations. Testing would be appreciated.
      
      * Readability and quality improvements
      
      Removed some mentions of SDXL, and some arguments that don't apply to sd 1.5, and cleaned up some comments.
      
      * make style/quality commands
      
      * tracker rename and run-it doc
      
      * Update examples/advanced_diffusion_training/train_dreambooth_lora_sd15_advanced.py
      
      * Update examples/advanced_diffusion_training/train_dreambooth_lora_sd15_advanced.py
      
      ---------
      Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
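
      As a hedged usage sketch for the output of the new sd15 script (the LoRA repo id is a placeholder), a LoRA trained this way would be loaded into a Stable Diffusion 1.5 pipeline like any other diffusers LoRA:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load an SD 1.5 pipeline and attach LoRA weights trained with
# train_dreambooth_lora_sd15_advanced.py ("your-username/your-lora" is a placeholder).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("your-username/your-lora")

image = pipe("a photo of sks dog").images[0]
```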
  19. 17 Jan, 2024 2 commits
  20. 16 Jan, 2024 1 commit
  21. 05 Jan, 2024 2 commits
  22. 02 Jan, 2024 1 commit
  23. 30 Dec, 2023 1 commit
  24. 27 Dec, 2023 2 commits
  25. 26 Dec, 2023 1 commit
    • [Training] Add `datasets` version of LCM LoRA SDXL (#5778) · 6683f979
      Sayak Paul authored
      * add: script to train lcm lora for sdxl with 🤗 datasets
      
      * suit up the args.
      
      * remove comments.
      
      * fix num_update_steps
      
      * fix batch unmarshalling
      
      * fix num_update_steps_per_epoch
      
      * fix: dataloading.
      
      * fix microconditions.
      
      * unconditional predictions debug
      
      * fix batch size.
      
      * no need to use use_auth_token
      
      * Apply suggestions from code review
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * make vae encoding batch size an arg
      
      * final serialization in kohya
      
      * style
      
      * state dict rejigging
      
      * feat: no separate teacher unet.
      
      * debug
      
      * fix state dict serialization
      
      * debug
      
      * debug
      
      * debug
      
      * remove prints.
      
      * remove kohya utility and make style
      
      * fix serialization
      
      * fix
      
      * add test
      
      * add peft dependency.
      
      * add: peft
      
      * remove peft
      
      * autocast device determination from accelerator
      
      * autocast
      
      * reduce lora rank.
      
      * remove unneeded space
      
      * Apply suggestions from code review
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * style
      
      * remove prompt dropout.
      
      * also save in native diffusers ckpt format.
      
      * debug
      
      * debug
      
      * debug
      
      * better formation of the null embeddings.
      
      * remove space.
      
      * autocast fixes.
      
      * autocast fix.
      
      * hacky
      
      * remove lora_sayak
      
      * Apply suggestions from code review
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      
      * style
      
      * make log validation leaner.
      
      * move back enabled in.
      
      * fix: log_validation call.
      
      * add: checkpointing tests
      
      * taking my chances to see if disabling autocasting has any effect?
      
      * start debugging
      
      * name
      
      * name
      
      * name
      
      * more debug
      
      * more debug
      
      * index
      
      * remove index.
      
      * print length
      
      * print length
      
      * print length
      
      * move unet.train() after add_adapter()
      
      * disable some prints.
      
      * enable_adapters() manually.
      
      * remove prints.
      
      * some changes.
      
      * fix params_to_optimize
      
      * more fixes
      
      * debug
      
      * debug
      
      * remove print
      
      * disable grad for certain contexts.
      
      * Add support for IPAdapterFull (#5911)
      
      * Add support for IPAdapterFull
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Fix a bug in `add_noise` function  (#6085)
      
      * fix
      
      * copies
      
      ---------
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
      
      * [Advanced Diffusion Script] Add Widget default text (#6100)
      
      add widget
      
      * [Advanced Training Script] Fix pipe example (#6106)
      
      * IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901)
      
      * adapter for StableDiffusionControlNetImg2ImgPipeline
      
      * fix-copies
      
      * fix-copies
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * IP adapter support for most pipelines (#5900)
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
      
      * update tests
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
      
      * support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py
      
      * support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
      
      * revert changes to sd_attend_and_excite and sd_upscale
      
      * make style
      
      * fix broken tests
      
      * update ip-adapter implementation to latest
      
      * apply suggestions from review
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * fix: lora_alpha
      
      * make vae casting conditional
      
      * param upcasting
      
      * propagate comments from https://github.com/huggingface/diffusers/pull/6145
      
      Co-authored-by: dg845 <dgu8957@gmail.com>
      
      * [Peft] fix saving / loading when unet is not "unet" (#6046)
      
      * [Peft] fix saving / loading when unet is not "unet"
      
      * Update src/diffusers/loaders/lora.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * undo stablediffusion-xl changes
      
      * use unet_name to get unet for lora helpers
      
      * use unet_name
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * [Wuerstchen] fix fp16 training and correct lora args (#6245)
      
      fix fp16 training
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * [docs] fix: animatediff docs (#6339)
      
      fix: animatediff docs
      
      * add: note about the new script in readme_sdxl.
      
      * Revert "[Peft] fix saving / loading when unet is not "unet" (#6046)"
      
      This reverts commit 4c7e983bb5929320bab08d70333eeb93f047de40.
      
      * Revert "[Wuerstchen] fix fp16 training and correct lora args (#6245)"
      
      This reverts commit 0bb9cf0216e501632677895de6574532092282b5.
      
      * Revert "[docs] fix: animatediff docs (#6339)"
      
      This reverts commit 11659a6f74b5187f601eeeeeb6f824dda73d0627.
      
      * remove tokenize_prompt().
      
      * assistive comments around enable_adapters() and disable_adapters().
      
      ---------
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: Fabio Rigano <57982783+fabiorigano@users.noreply.github.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
      Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
      Co-authored-by: Charchit Sharma <charchitsharma11@gmail.com>
      Co-authored-by: Aryan V S <contact.aryanvs@gmail.com>
      Co-authored-by: dg845 <dgu8957@gmail.com>
      Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
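
      To illustrate what this training script produces, here is a hedged inference sketch for an LCM LoRA on SDXL (the LoRA repo id is a placeholder); LCM distillation lets the pipeline run in very few steps with classifier-free guidance effectively disabled:

```python
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Swap in the LCM scheduler and load the distilled LoRA weights
# ("your-username/lcm-lora-sdxl" is a placeholder repo id).
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("your-username/lcm-lora-sdxl")

# LCM inference: a handful of steps, guidance_scale near 1.0.
image = pipe("a cat", num_inference_steps=4, guidance_scale=1.0).images[0]
```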
  26. 14 Dec, 2023 1 commit
  27. 08 Dec, 2023 2 commits
  28. 06 Dec, 2023 1 commit
  29. 05 Dec, 2023 1 commit
  30. 04 Dec, 2023 3 commits
  31. 01 Dec, 2023 1 commit