1. 16 Sep, 2024 1 commit
  2. 14 Aug, 2024 1 commit
  3. 24 Jun, 2024 1 commit
  4. 13 Jun, 2024 1 commit
  5. 05 Jun, 2024 1 commit
    • Errata (#8322) · 98730c5d
      Tolga Cangöz authored
      * Fix typos
      
      * Trim trailing whitespaces
      
      * Remove a trailing whitespace
      
      * chore: Update MarigoldDepthPipeline checkpoint to prs-eth/marigold-lcm-v1-0
      
      * Revert "chore: Update MarigoldDepthPipeline checkpoint to prs-eth/marigold-lcm-v1-0"
      
      This reverts commit fd742b30b4258106008a6af4d0dd4664904f8595.
      
      * pokemon -> naruto
      
      * `DPMSolverMultistep` -> `DPMSolverMultistepScheduler`
      
      * Improve Markdown stylization
      
      * Improve style
      
      * Improve style
      
      * Refactor pipeline variable names for consistency
      
      * up style
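      One fix above corrects a class name in the docs: diffusers exposes `DPMSolverMultistepScheduler`, not `DPMSolverMultistep`. A minimal sketch of the corrected usage (the checkpoint ID is illustrative, not taken from the commit):

      ```python
      from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

      # Load a Stable Diffusion checkpoint; the model ID here is illustrative.
      pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

      # Swap in the multistep DPM-Solver scheduler, reusing the pipeline's existing config.
      pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
      ```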
  6. 29 May, 2024 1 commit
  7. 16 May, 2024 1 commit
  8. 10 May, 2024 1 commit
    • #7535 Update FloatTensor type hints to Tensor (#7883) · be4afa0b
      Mark Van Aken authored
      * find & replace all FloatTensors to Tensor
      
      * apply formatting
      
      * Update torch.FloatTensor to torch.Tensor in the remaining files
      
      * formatting
      
      * Fix the rest of the places where FloatTensor is used as well as in documentation
      
      * formatting
      
      * Update new file from FloatTensor to Tensor
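      The rename is mechanical but meaningful: `torch.FloatTensor` pins a type hint to float32, while `torch.Tensor` also admits the fp16/bf16 tensors that pipelines actually pass around. A hedged sketch of the kind of signature the PR touches; `scale_latents` is an illustrative helper, not a diffusers API:

      ```python
      import torch

      # Before: `latents: torch.FloatTensor` promised float32 specifically.
      # After: `torch.Tensor` covers any dtype/device, matching real usage.
      def scale_latents(latents: torch.Tensor, factor: float = 0.18215) -> torch.Tensor:
          """Scale VAE latents by a constant factor (illustrative helper)."""
          return latents * factor
      ```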
  9. 07 May, 2024 1 commit
  10. 11 Apr, 2024 1 commit
  11. 02 Apr, 2024 1 commit
    • 7529 do not disable autocast for cuda devices (#7530) · 8e963d1c
      Bagheera authored
      
      
      * 7529 do not disable autocast for cuda devices
      
      * Remove typecasting error check for non-mps platforms, as a correct autocast implementation makes it a non-issue
      
      * add autocast fix to other training examples
      
      * disable native_amp for dreambooth (sdxl)
      
      * disable native_amp for pix2pix (sdxl)
      
      * remove tests from remaining files
      
      * disable native_amp on huggingface accelerator for every training example that uses it
      
      * convert more usages of autocast to nullcontext, make style fixes
      
      * make style fixes
      
      * style.
      
      * Empty-Commit
      
      ---------
      Co-authored-by: bghira <bghira@users.github.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
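      The recurring pattern in these commits is to choose the context once, up front: `torch.autocast` when mixed precision is active, `contextlib.nullcontext()` when it is not, rather than force-disabling autocast on CUDA. A sketch with assumed argument names:

      ```python
      import contextlib

      import torch

      def autocast_ctx(device_type: str, mixed_precision: str):
          # With mixed precision off (or handled natively by the framework,
          # e.g. Accelerate's native AMP), a no-op context is the right choice.
          if mixed_precision == "no":
              return contextlib.nullcontext()
          dtype = torch.float16 if mixed_precision == "fp16" else torch.bfloat16
          return torch.autocast(device_type, dtype=dtype)

      # e.g. around validation inference in a training script:
      with autocast_ctx("cuda", "fp16"):
          ...  # forward pass / pipeline call
      ```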
  12. 18 Mar, 2024 1 commit
  13. 13 Mar, 2024 1 commit
  14. 09 Feb, 2024 2 commits
  15. 08 Feb, 2024 2 commits
  16. 15 Jan, 2024 1 commit
  17. 11 Jan, 2024 1 commit
  18. 05 Jan, 2024 2 commits
    • 0.25.0 post release (#6358) · 9d945b2b
      Sayak Paul authored
      
      
      * post release
      
      * style
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
    • Improve LCM(-LoRA) Distillation Scripts (#6420) · f3d1333e
      dg845 authored
      * Make WDS pipeline interpolation type configurable.
      
      * Make the VAE encoding batch size configurable.
      
      * Make lora_alpha and lora_dropout configurable for LCM LoRA scripts.
      
      * Generalize scalings_for_boundary_conditions function and make the timestep scaling configurable.
      
      * Make LoRA target modules configurable for LCM-LoRA scripts.
      
      * Move resolve_interpolation_mode to src/diffusers/training_utils.py and make interpolation type configurable in non-WDS script.
      
      * apply suggestions from review
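      For context on the `scalings_for_boundary_conditions` change: consistency distillation needs c_skip → 1 and c_out → 0 at the boundary so the student reduces to the identity at timestep 0, and the commit makes the timestep scaling a parameter rather than a hard-coded constant. A sketch of what the generalized function plausibly looks like (signature assumed from the commit message):

      ```python
      def scalings_for_boundary_conditions(timestep, sigma_data=0.5, timestep_scaling=10.0):
          # At timestep 0 this yields c_skip = 1 and c_out = 0, i.e. the identity,
          # which is the boundary condition consistency models must satisfy.
          scaled_t = timestep_scaling * timestep
          c_skip = sigma_data**2 / (scaled_t**2 + sigma_data**2)
          c_out = scaled_t / (scaled_t**2 + sigma_data**2) ** 0.5
          return c_skip, c_out
      ```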
  19. 28 Dec, 2023 1 commit
  20. 27 Dec, 2023 2 commits
  21. 26 Dec, 2023 1 commit
    • [Training] Add `datasets` version of LCM LoRA SDXL (#5778) · 6683f979
      Sayak Paul authored
      * add: script to train lcm lora for sdxl with 🤗 datasets
      
      * suit up the args.
      
      * remove comments.
      
      * fix num_update_steps
      
      * fix batch unmarshalling
      
      * fix num_update_steps_per_epoch
      
      * fix: dataloading.
      
      * fix microconditions.
      
      * unconditional predictions debug
      
      * fix batch size.
      
      * no need to use use_auth_token
      
      * Apply suggestions from code review
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * make vae encoding batch size an arg
      
      * final serialization in kohya
      
      * style
      
      * state dict rejigging
      
      * feat: no separate teacher unet.
      
      * debug
      
      * fix state dict serialization
      
      * debug
      
      * debug
      
      * debug
      
      * remove prints.
      
      * remove kohya utility and make style
      
      * fix serialization
      
      * fix
      
      * add test
      
      * add peft dependency.
      
      * add: peft
      
      * remove peft
      
      * autocast device determination from accelerator
      
      * autocast
      
      * reduce lora rank.
      
      * remove unneeded space
      
      * Apply suggestions from code review
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * style
      
      * remove prompt dropout.
      
      * also save in native diffusers ckpt format.
      
      * debug
      
      * debug
      
      * debug
      
      * better formulation of the null embeddings.
      
      * remove space.
      
      * autocast fixes.
      
      * autocast fix.
      
      * hacky
      
      * remove lora_sayak
      
      * Apply suggestions from code review
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      
      * style
      
      * make log validation leaner.
      
      * move back enabled in.
      
      * fix: log_validation call.
      
      * add: checkpointing tests
      
      * taking my chances to see if disabling autocasting has any effect?
      
      * start debugging
      
      * name
      
      * name
      
      * name
      
      * more debug
      
      * more debug
      
      * index
      
      * remove index.
      
      * print length
      
      * print length
      
      * print length
      
      * move unet.train() after add_adapter()
      
      * disable some prints.
      
      * enable_adapters() manually.
      
      * remove prints.
      
      * some changes.
      
      * fix params_to_optimize
      
      * more fixes
      
      * debug
      
      * debug
      
      * remove print
      
      * disable grad for certain contexts.
      
      * Add support for IPAdapterFull (#5911)
      
      * Add support for IPAdapterFull
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Fix a bug in `add_noise` function  (#6085)
      
      * fix
      
      * copies
      
      ---------
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
      
      * [Advanced Diffusion Script] Add Widget default text (#6100)
      
      add widget
      
      * [Advanced Training Script] Fix pipe example (#6106)
      
      * IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901)
      
      * adapter for StableDiffusionControlNetImg2ImgPipeline
      
      * fix-copies
      
      * fix-copies
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * IP adapter support for most pipelines (#5900)
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
      
      * update tests
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
      
      * support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py
      
      * support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py
      
      * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
      
      * revert changes to sd_attend_and_excite and sd_upscale
      
      * make style
      
      * fix broken tests
      
      * update ip-adapter implementation to latest
      
      * apply suggestions from review
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * fix: lora_alpha
      
      * make vae casting conditional.
      
      * param upcasting
      
      * propagate comments from https://github.com/huggingface/diffusers/pull/6145
      
      Co-authored-by: dg845 <dgu8957@gmail.com>
      
      * [Peft] fix saving / loading when unet is not "unet" (#6046)
      
      * [Peft] fix saving / loading when unet is not "unet"
      
      * Update src/diffusers/loaders/lora.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * undo stablediffusion-xl changes
      
      * use unet_name to get unet for lora helpers
      
      * use unet_name
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * [Wuerstchen] fix fp16 training and correct lora args (#6245)
      
      fix fp16 training
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * [docs] fix: animatediff docs (#6339)
      
      fix: animatediff docs
      
      * add: note about the new script in readme_sdxl.
      
      * Revert "[Peft] fix saving / loading when unet is not "unet" (#6046)"
      
      This reverts commit 4c7e983bb5929320bab08d70333eeb93f047de40.
      
      * Revert "[Wuerstchen] fix fp16 training and correct lora args (#6245)"
      
      This reverts commit 0bb9cf0216e501632677895de6574532092282b5.
      
      * Revert "[docs] fix: animatediff docs (#6339)"
      
      This reverts commit 11659a6f74b5187f601eeeeeb6f824dda73d0627.
      
      * remove tokenize_prompt().
      
      * assistive comments around enable_adapters() and disable_adapters().
      
      ---------
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: Fabio Rigano <57982783+fabiorigano@users.noreply.github.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
      Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
      Co-authored-by: Charchit Sharma <charchitsharma11@gmail.com>
      Co-authored-by: Aryan V S <contact.aryanvs@gmail.com>
      Co-authored-by: dg845 <dgu8957@gmail.com>
      Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
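      The practical difference from the WDS variant is the data path: image–caption pairs load straight from the Hub via 🤗 datasets instead of WebDataset shards. A minimal sketch of that loading step; the dataset ID follows the pokemon → naruto rename noted in the errata above, and the column names are assumptions:

      ```python
      from datasets import load_dataset

      # Load an image–caption dataset from the Hugging Face Hub.
      dataset = load_dataset("lambdalabs/naruto-blip-captions", split="train")

      example = dataset[0]
      image, caption = example["image"], example["text"]  # column names assumed
      ```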
  22. 25 Dec, 2023 1 commit
  23. 15 Dec, 2023 1 commit
  24. 06 Dec, 2023 2 commits
  25. 01 Dec, 2023 1 commit
  26. 27 Nov, 2023 1 commit
  27. 09 Nov, 2023 1 commit