    [Training] Add `datasets` version of LCM LoRA SDXL (#5778) · 6683f979
    Sayak Paul authored
    * add: script to train lcm lora for sdxl with 🤗 datasets
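
    A minimal sketch of the 🤗 `datasets`-based loading this script switches to; the dataset name and column names below are illustrative, and in the script they come in via CLI arguments:

```python
from datasets import load_dataset

# Illustrative dataset: any image-caption dataset works; the actual dataset
# name and the image/caption column names are passed on the command line.
dataset = load_dataset("lambdalabs/naruto-blip-captions", split="train")
print(dataset.column_names)
```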
    
    * suit up the args.
    
    * remove comments.
    
    * fix num_update_steps
    
    * fix batch unmarshalling
    
    * fix num_update_steps_per_epoch
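
    The two step-count fixes come down to rounding the per-epoch update count correctly under gradient accumulation; a small self-contained sketch (the helper name is ours, not the script's):

```python
import math

def compute_step_counts(num_batches: int, grad_accum_steps: int, num_epochs: int):
    # One optimizer update happens every `grad_accum_steps` batches, so the
    # per-epoch update count must round up; max_train_steps follows from it.
    num_update_steps_per_epoch = math.ceil(num_batches / grad_accum_steps)
    max_train_steps = num_epochs * num_update_steps_per_epoch
    return num_update_steps_per_epoch, max_train_steps

print(compute_step_counts(num_batches=1000, grad_accum_steps=4, num_epochs=3))  # (250, 750)
```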
    
    * fix: dataloading.
    
    * fix microconditions.
    
    * unconditional predictions debug
    
    * fix batch size.
    
    * no need to use use_auth_token
    
    * Apply suggestions from code review
    Co-authored-by: Suraj Patil <surajp815@gmail.com>
    
    * make vae encoding batch size an arg
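
    Turning the VAE encoding batch size into an argument lets the encoding pass be chunked; a sketch, with the parameter name as our guess rather than the script's exact flag:

```python
import torch

def encode_in_chunks(vae, pixel_values: torch.Tensor, vae_encode_batch_size: int) -> torch.Tensor:
    # Encode images in smaller chunks so the VAE forward pass fits in memory
    # even when the training batch is large.
    latents = []
    for i in range(0, pixel_values.shape[0], vae_encode_batch_size):
        chunk = pixel_values[i : i + vae_encode_batch_size]
        latents.append(vae.encode(chunk).latent_dist.sample())
    return torch.cat(latents, dim=0) * vae.config.scaling_factor
```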
    
    * final serialization in kohya
    
    * style
    
    * state dict rejigging
    
    * feat: no separate teacher unet.
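
    With no separate teacher UNet, the same LoRA-equipped model can act as its own frozen teacher by toggling the adapters off for the teacher pass; a rough sketch (the forward-call argument names follow the usual SDXL UNet signature, not necessarily the script's variables):

```python
import torch

def teacher_prediction(unet, noisy_latents, timesteps, encoder_hidden_states, added_cond_kwargs):
    # Disable the LoRA adapters so only the frozen base weights are used,
    # run the teacher pass without gradients, then re-enable the adapters
    # for the student pass.
    unet.disable_adapters()
    with torch.no_grad():
        pred = unet(
            noisy_latents,
            timesteps,
            encoder_hidden_states=encoder_hidden_states,
            added_cond_kwargs=added_cond_kwargs,
        ).sample
    unet.enable_adapters()
    return pred
```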
    
    * debug
    
    * fix state dict serialization
    
    * debug
    
    * debug
    
    * debug
    
    * remove prints.
    
    * remove kohya utility and make style
    
    * fix serialization
    
    * fix
    
    * add test
    
    * add peft dependency.
    
    * add: peft
    
    * remove peft
    
    * autocast device determination from accelerator
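
    Deriving the autocast device type from the accelerator instead of hard-coding "cuda" looks roughly like this (a sketch, not the script verbatim):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
# `accelerator.device.type` is "cuda", "cpu", etc., so the same context works
# across backends; autocast is only enabled for fp16 mixed precision here.
with torch.autocast(accelerator.device.type, enabled=accelerator.mixed_precision == "fp16"):
    pass  # model forward passes go here
```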
    
    * autocast
    
    * reduce lora rank.
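
    The rank reduction, the later `lora_alpha` fix, and the `unet.train()`-after-`add_adapter()` ordering all live in the adapter setup; a sketch with illustrative rank and target modules:

```python
from peft import LoraConfig

def add_lcm_lora(unet, rank: int = 64):
    # Freeze the base weights, attach a LoRA adapter to the attention
    # projections, and only then switch to train mode so the freshly added
    # adapter layers are included.
    unet.requires_grad_(False)
    lora_config = LoraConfig(
        r=rank,
        lora_alpha=rank,  # keep alpha tied to the rank
        init_lora_weights="gaussian",
        target_modules=["to_q", "to_k", "to_v", "to_out.0"],
    )
    unet.add_adapter(lora_config)
    unet.train()
    return unet
```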
    
    * remove unneeded space
    
    * Apply suggestions from code review
    Co-authored-by: Suraj Patil <surajp815@gmail.com>
    
    * style
    
    * remove prompt dropout.
    
    * also save in native diffusers ckpt format.
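
    Saving in the native diffusers LoRA format typically goes through the pipeline's `save_lora_weights` helper rather than a kohya-style converter; a sketch:

```python
from diffusers import StableDiffusionXLPipeline
from diffusers.utils import convert_state_dict_to_diffusers
from peft.utils import get_peft_model_state_dict

def save_unet_lora(unet, output_dir: str):
    # Convert the PEFT-format state dict to diffusers' key naming and save it
    # so it can be reloaded later with `pipe.load_lora_weights(output_dir)`.
    unet_lora_state_dict = convert_state_dict_to_diffusers(get_peft_model_state_dict(unet))
    StableDiffusionXLPipeline.save_lora_weights(output_dir, unet_lora_layers=unet_lora_state_dict)
```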
    
    * debug
    
    * debug
    
    * debug
    
    * better formation of the null embeddings.
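
    One plausible shape for the null (empty-prompt) embeddings used for unconditional teacher predictions: encode the empty prompt once and repeat it to batch size. The encoder callable below is a placeholder, not a function from the script:

```python
def make_null_conditioning(encode_prompt_fn, batch_size: int):
    # `encode_prompt_fn` stands in for the script's SDXL prompt encoder and is
    # assumed to return (prompt_embeds, pooled_prompt_embeds) for a list of
    # prompts; encoding "" once and repeating avoids re-encoding per sample.
    prompt_embeds, pooled_prompt_embeds = encode_prompt_fn([""])
    return prompt_embeds.repeat(batch_size, 1, 1), pooled_prompt_embeds.repeat(batch_size, 1)
```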
    
    * remove space.
    
    * autocast fixes.
    
    * autocast fix.
    
    * hacky
    
    * remove lora_sayak
    
    * Apply suggestions from code review
    Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
    
    * style
    
    * make log validation leaner.
    
    * move `enabled` back in.
    
    * fix: log_validation call.
    
    * add: checkpointing tests
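
    The checkpointing under test follows the usual accelerate pattern of saving and resuming full training state; a condensed sketch with illustrative paths and step values:

```python
import os
from accelerate import Accelerator

accelerator = Accelerator()
output_dir, global_step = "lcm-lora-sdxl", 500  # illustrative values

# Save everything needed to resume (prepared models, optimizer, RNG state, ...).
save_path = os.path.join(output_dir, f"checkpoint-{global_step}")
accelerator.save_state(save_path)

# Later, resume training from that directory.
accelerator.load_state(save_path)
```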
    
    * taking my chances to see if disabling autocasting has any effect?
    
    * start debugging
    
    * name
    
    * name
    
    * name
    
    * more debug
    
    * more debug
    
    * index
    
    * remove index.
    
    * print length
    
    * print length
    
    * print length
    
    * move unet.train() after add_adapter()
    
    * disable some prints.
    
    * enable_adapters() manually.
    
    * remove prints.
    
    * some changes.
    
    * fix params_to_optimize
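
    The `params_to_optimize` fix amounts to handing the optimizer only the trainable LoRA parameters left after freezing the base UNet; a minimal sketch:

```python
import torch

def build_optimizer(unet, lr: float = 1e-4):
    # After add_adapter(), only the LoRA parameters have requires_grad=True,
    # so filter on that instead of passing every UNet weight to the optimizer.
    params_to_optimize = [p for p in unet.parameters() if p.requires_grad]
    return torch.optim.AdamW(params_to_optimize, lr=lr)
```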
    
    * more fixes
    
    * debug
    
    * debug
    
    * remove print
    
    * disable grad for certain contexts.
    
    * Add support for IPAdapterFull (#5911)
    
    * Add support for IPAdapterFull
    Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
    
    ---------
    Co-authored-by: YiYi Xu <yixu310@gmail.com>
    Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
    
    * Fix a bug in `add_noise` function  (#6085)
    
    * fix
    
    * copies
    
    ---------
    Co-authored-by: yiyixuxu <yixu310@gmail.com>
    
    * [Advanced Diffusion Script] Add Widget default text (#6100)
    
    add widget
    
    * [Advanced Training Script] Fix pipe example (#6106)
    
    * IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901)
    
    * adapter for StableDiffusionControlNetImg2ImgPipeline
    
    * fix-copies
    
    * fix-copies
    
    ---------
    Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
    
    * IP adapter support for most pipelines (#5900)
    
    * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
    
    * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
    
    * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
    
    * update tests
    
    * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
    
    * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
    
    * support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
    
    * support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py
    
    * support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py
    
    * support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
    
    * revert changes to sd_attend_and_excite and sd_upscale
    
    * make style
    
    * fix broken tests
    
    * update ip-adapter implementation to latest
    
    * apply suggestions from review
    
    ---------
    Co-authored-by: YiYi Xu <yixu310@gmail.com>
    Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
    
    * fix: lora_alpha
    
    * make vae casting conditional.
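
    The conditional cast keeps the stock SDXL VAE in fp32 (it is numerically unstable in fp16) and only downcasts when an fp16-safe VAE is supplied; a sketch with illustrative names:

```python
import torch

def cast_vae(vae, device, weight_dtype: torch.dtype, has_fp16_safe_vae: bool):
    # Only downcast when a dedicated fp16-safe VAE checkpoint was supplied;
    # otherwise keep the VAE in float32 to avoid NaNs during encoding.
    vae.to(device, dtype=weight_dtype if has_fp16_safe_vae else torch.float32)
    return vae
```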
    
    * param upcasting
    
    * propagate comments from https://github.com/huggingface/diffusers/pull/6145
    
    Co-authored-by: dg845 <dgu8957@gmail.com>
    
    * [Peft] fix saving / loading when unet is not "unet" (#6046)
    
    * [Peft] fix saving / loading when unet is not "unet"
    
    * Update src/diffusers/loaders/lora.py
    Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
    
    * undo stablediffusion-xl changes
    
    * use unet_name to get unet for lora helpers
    
    * use unet_name
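
    The gist of the `unet_name` change: the LoRA helpers look the denoiser up by its registered attribute name instead of assuming it is literally `unet`; a simplified sketch:

```python
def get_denoiser(pipeline, unet_name: str = "unet"):
    # Resolve the UNet via its attribute name so pipelines that expose the
    # denoiser under a different attribute still work with the LoRA helpers.
    return getattr(pipeline, unet_name)
```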
    
    ---------
    Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
    
    * [Wuerstchen] fix fp16 training and correct lora args (#6245)
    
    fix fp16 training
    Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
    
    * [docs] fix: animatediff docs (#6339)
    
    fix: animatediff docs
    
    * add: note about the new script in readme_sdxl.
    
    * Revert "[Peft] fix saving / loading when unet is not "unet" (#6046)"
    
    This reverts commit 4c7e983bb5929320bab08d70333eeb93f047de40.
    
    * Revert "[Wuerstchen] fix fp16 training and correct lora args (#6245)"
    
    This reverts commit 0bb9cf0216e501632677895de6574532092282b5.
    
    * Revert "[docs] fix: animatediff docs (#6339)"
    
    This reverts commit 11659a6f74b5187f601eeeeeb6f824dda73d0627.
    
    * remove tokenize_prompt().
    
    * assistive comments around enable_adapters() and disable_adapters().
    
    ---------
    Co-authored-by: Suraj Patil <surajp815@gmail.com>
    Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
    Co-authored-by: Fabio Rigano <57982783+fabiorigano@users.noreply.github.com>
    Co-authored-by: YiYi Xu <yixu310@gmail.com>
    Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
    Co-authored-by: yiyixuxu <yixu310@gmail.com>
    Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
    Co-authored-by: Charchit Sharma <charchitsharma11@gmail.com>
    Co-authored-by: Aryan V S <contact.aryanvs@gmail.com>
    Co-authored-by: dg845 <dgu8957@gmail.com>
    Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>