- 24 Jun, 2024 1 commit
Tolga Cangöz authored
* Fix typos & improve contributing page * `make style && make quality` * fix typos * Fix typo --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 13 Jun, 2024 1 commit
Sayak Paul authored
post release
-
- 29 May, 2024 2 commits
Tolga Cangöz authored
* Fix copying mechanism typos * fix copying mechanism * Revert, since they are in TODO * Fix copying mechanism
-
Sayak Paul authored
* post release v0.28.0 * style
-
- 20 May, 2024 1 commit
Sai-Suraj-27 authored
Fixed a few docstrings according to the Google Style Guide.
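For reference, this is the Google-style shape those docstrings were aligned to (a hypothetical function, shown only to illustrate the convention):

```python
def scale_latents(latents, scaling_factor):
    """Scales a batch of latents by the VAE scaling factor.

    Args:
        latents (`torch.Tensor`): The latents to scale.
        scaling_factor (`float`): The factor to multiply the latents by.

    Returns:
        `torch.Tensor`: The scaled latents.
    """
    return latents * scaling_factor
```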
-
- 30 Apr, 2024 1 commit
Linoy Tsaban authored
* add blora
* add blora
* add blora
* add blora
* little changes
* little changes
* remove redundancies
* fixes
* add B LoRA to readme
* style
* inference
* defaults + path to loras + generation
* minor changes
* style
* minor changes
* minor changes
* blora arg
* added --lora_unet_blocks
* style
* Update examples/advanced_diffusion_training/README.md Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* add commit hash to B-LoRA repo cloning
* change inference, remove cloning
* change inference, remove cloning; add section about configurable unet blocks
* change inference, remove cloning; add section about configurable unet blocks
* Apply suggestions from code review
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
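For context, a minimal sketch of the idea behind `--lora_unet_blocks`: restrict the LoRA adapter to named UNet blocks (B-LoRA trains only two SDXL blocks, roughly one for content and one for style). The block names and filtering below are illustrative, not the script's exact logic:

```python
from diffusers import UNet2DConditionModel
from peft import LoraConfig

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)

# restrict the adapter to chosen blocks (illustrative names)
blocks = ["up_blocks.0.attentions.0", "up_blocks.0.attentions.1"]
target_modules = [
    name
    for name, _ in unet.named_modules()
    if name.endswith(("to_k", "to_q", "to_v", "to_out.0"))
    and any(name.startswith(block) for block in blocks)
]

unet.add_adapter(LoraConfig(r=64, lora_alpha=64, target_modules=target_modules))
```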
-
- 02 Apr, 2024 1 commit
Bagheera authored
* 7529 do not disable autocast for cuda devices
* Remove typecasting error check for non-mps platforms, as a correct autocast implementation makes it a non-issue
* add autocast fix to other training examples
* disable native_amp for dreambooth (sdxl)
* disable native_amp for pix2pix (sdxl)
* remove tests from remaining files
* disable native_amp on huggingface accelerator for every training example that uses it
* convert more usages of autocast to nullcontext, make style fixes
* make style fixes
* style.
* Empty-Commit
---------
Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
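The pattern these commits converge on, sketched below; the function name and the commented usage are illustrative, not the scripts' exact code:

```python
import contextlib
import torch

def inference_ctx(device_type: str):
    # On MPS, skip autocast entirely and use a no-op context;
    # elsewhere a correct autocast is safe to keep enabled.
    if device_type == "mps":
        return contextlib.nullcontext()
    return torch.autocast(device_type)

# usage during validation (pipeline/prompt assumed from the script):
# with inference_ctx(accelerator.device.type):
#     images = pipeline(prompt, num_inference_steps=25).images
```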
-
- 27 Mar, 2024 1 commit
Thomas Liang authored
-
- 26 Mar, 2024 1 commit
Ernie Chu authored
You cannot specify `type=bool` and `action="store_true"` at the same time; remove the excessive and buggy `type=bool`. Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
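A minimal reproduction of the argparse pitfall (the flag name is illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
# parser.add_argument("--train_text_encoder", type=bool, action="store_true")
# ^ raises TypeError: _StoreTrueAction.__init__() got an unexpected keyword argument 'type'
parser.add_argument("--train_text_encoder", action="store_true")  # correct

args = parser.parse_args(["--train_text_encoder"])
print(args.train_text_encoder)  # True
```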
-
- 18 Mar, 2024 1 commit
Sayak Paul authored
* post-release * quality
-
- 14 Mar, 2024 1 commit
Linoy Tsaban authored
* add edm style training * style * finish adding edm training feature * import fix * fix latents mean * minor adjustments * add edm to readme * style * fix autocast and scheduler config issues when using edm * style --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
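For context, "EDM-style training" refers to the preconditioning of Karras et al. (2022); a sketch of the scalings, with `sigma_data` as in the paper:

```python
import torch

def edm_scalings(sigma: torch.Tensor, sigma_data: float = 0.5):
    # Preconditioning coefficients from the EDM paper (Karras et al., 2022):
    c_skip = sigma_data**2 / (sigma**2 + sigma_data**2)
    c_out = sigma * sigma_data / (sigma**2 + sigma_data**2) ** 0.5
    c_in = 1 / (sigma**2 + sigma_data**2) ** 0.5
    return c_skip, c_out, c_in

# denoised = c_skip * noisy_sample + c_out * model(c_in * noisy_sample, sigma)
```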
-
- 13 Mar, 2024 1 commit
Sayak Paul authored
switch to logger.warning
-
- 06 Mar, 2024 1 commit
Nate Landman authored
Adding the type gives you `TypeError: _StoreTrueAction.__init__() got an unexpected keyword argument 'type'`.
-
- 04 Mar, 2024 2 commits
Linoy Tsaban authored
* add tags for diffusers training * add tags for diffusers training * add tags for diffusers training * add tags for diffusers training * add tags for diffusers training * add tags for diffusers training * add dora tags for dreambooth lora scripts * style
-
Linoy Tsaban authored
* add is_dora arg * style * add dora training feature to sd 1.5 script * added notes about DoRA training --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
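What the `is_dora` switch maps to, sketched with PEFT (the rank and target modules here are illustrative):

```python
from peft import LoraConfig

unet_lora_config = LoraConfig(
    r=8,
    lora_alpha=8,
    use_dora=True,  # weight-decomposed low-rank adaptation; requires peft >= 0.9.0
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
```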
-
- 15 Feb, 2024 1 commit
Linoy Tsaban authored
* fix bug in micro-conditioning of class images * fix bug in micro-conditioning of class images * style
-
- 09 Feb, 2024 2 commits
Sayak Paul authored
* post release * style * Empty-Commit --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Sayak Paul authored
-
- 08 Feb, 2024 1 commit
Sayak Paul authored
change to 2024
-
- 03 Feb, 2024 1 commit
Linoy Tsaban authored
* add noise_offset param
* micro conditioning - wip
* image processing adjusted and moved to support micro conditioning
* change time ids to be computed inside train loop
* change time ids to be computed inside train loop
* change time ids to be computed inside train loop
* time ids shape fix
* move token replacement of validation prompt to the same section as instance prompt and class prompt
* add offset noise to sd15 advanced script
* fix token loading during validation
* fix token loading during validation in sdxl script
* a little clean
* style
* a little clean
* style
* sdxl script - a little clean + minor path fix; sd 1.5 script - change default resolution value
* sd 1.5 script - minor path fix
* fix missing comma in code example in model card
* clean up commented lines
* style
* remove time ids computed outside training loop - no longer used now that we utilize micro-conditioning, as all time ids are now computed inside the training loop
* style
* [WIP] - added draft readme, building off of examples/dreambooth/README.md
* readme
* readme
* readme
* readme
* readme
* readme
* readme
* readme
* removed --crops_coords_top_left from CLI args
* style
* fix missing shape bug due to missing RGB if statement
* add blog mention at the start of the readme as well
* Update examples/advanced_diffusion_training/README.md Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* change note to render nicely as well
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
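A runnable sketch of the two mechanisms this PR touches, noise offset and the SDXL micro-conditioning time ids now built inside the training loop; tensor shapes and values are illustrative:

```python
import torch

model_input = torch.randn(2, 4, 128, 128)  # stand-in for the VAE latents
noise_offset = 0.05

# Noise offset: shift the per-channel mean of the sampled noise.
noise = torch.randn_like(model_input)
noise += noise_offset * torch.randn(model_input.shape[0], model_input.shape[1], 1, 1)

# SDXL micro-conditioning time ids: original size, crop coordinates, target size.
original_size, crops_coords_top_left, target_size = (1024, 1024), (0, 0), (1024, 1024)
add_time_ids = torch.tensor([list(original_size + crops_coords_top_left + target_size)])
print(add_time_ids)  # tensor([[1024, 1024, 0, 0, 1024, 1024]])
```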
-
- 24 Jan, 2024 1 commit
Brandon Strong authored
* sd1.5 support in separate script: a quick adaptation to support people interested in using this method on 1.5 models.
* sd15 prompt text encoding and unet conversions as per @linoytsaban's recommendations. Testing would be appreciated.
* Readability and quality improvements: removed some mentions of SDXL and some arguments that don't apply to sd 1.5, and cleaned up some comments.
* make style/quality commands
* tracker rename and run-it doc
* Update examples/advanced_diffusion_training/train_dreambooth_lora_sd15_advanced.py
* Update examples/advanced_diffusion_training/train_dreambooth_lora_sd15_advanced.py
---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
- 17 Jan, 2024 2 commits
Linoy Tsaban authored
* fixes bugs: 1. redundant retraction, 2. param clone, 3. stopping optimization of text encoder params * param upscaling * style
-
Steve Rhoades authored
resolve conflicts
-
- 16 Jan, 2024 1 commit
Steve Rhoades authored
* Fixes #6418 Advanced Dreambooth LoRA Training * change order of import to fix nit * fix nit, use cast_training_params * remove torch.compile fix, will move to a new PR * remove unnecessary import
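The `cast_training_params` fix in miniature, using a stand-in module instead of the real LoRA layers: with mixed precision, parameters that require grad should be kept in fp32.

```python
import torch
from diffusers.training_utils import cast_training_params

model = torch.nn.Linear(4, 4).to(torch.float16)   # as if prepared for fp16 training
cast_training_params(model, dtype=torch.float32)  # trainable params go back to fp32
print(model.weight.dtype)  # torch.float32
```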
-
- 05 Jan, 2024 2 commits
Sayak Paul authored
* post release * style --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Linoy Tsaban authored
* unwrap text encoder when saving hook only for full text encoder tuning * unwrap text encoder when saving hook only for full text encoder tuning * save embeddings in each checkpoint as well * save embeddings in each checkpoint as well * save embeddings in each checkpoint as well * Update examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 02 Jan, 2024 1 commit
Linoy Tsaban authored
[bug fix] using snr gamma and prior preservation loss in the dreambooth lora sdxl training scripts (#6356) * change timesteps used to calculate snr when --with_prior_preservation is enabled * change timesteps used to calculate snr when --with_prior_preservation is enabled (canonical script) * style * revert canonical script to before snr gamma change --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
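For context, a sketch of the min-SNR weighting involved; the fix is to compute it from timesteps that still line up with the predictions it is applied to (with prior preservation, predictions and targets get chunked into instance/class halves). The `snr_gamma` value is illustrative:

```python
import torch
from diffusers import DDPMScheduler
from diffusers.training_utils import compute_snr

noise_scheduler = DDPMScheduler()
timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (4,))

snr = compute_snr(noise_scheduler, timesteps)
snr_gamma = 5.0
# weights must correspond one-to-one with the loss terms they scale
mse_loss_weights = (
    torch.stack([snr, snr_gamma * torch.ones_like(snr)], dim=1).min(dim=1)[0] / snr
)
```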
-
- 30 Dec, 2023 1 commit
apolinário authored
* Add WebUI format support to Advanced Training Script * style --------- Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
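A hedged sketch of the conversion this adds: diffusers exposes a state-dict converter to the Kohya/WebUI naming scheme; the state dict variable and the filename below are placeholders:

```python
from diffusers.utils import convert_state_dict_to_kohya
from safetensors.torch import save_file

# roughly what the script does after training (names illustrative):
# kohya_state_dict = convert_state_dict_to_kohya(unet_lora_state_dict)
# save_file(kohya_state_dict, "pytorch_lora_weights_webui.safetensors")
```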
-
- 27 Dec, 2023 2 commits
apolinário authored
fix keys for lora format on advanced training scripts
-
apolinário authored
* Fix ProdigyOPT in SDXL Dreambooth script * style * style * Add PEFT to Advanced Training Script * style * style * ✨ style ✨ * change order for logic operation * add lora alpha * style * Align PEFT to new format * Update train_dreambooth_lora_sdxl_advanced.py Apply #6355 fix --------- Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
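The Prodigy optimizer being fixed here, instantiated in miniature (hyperparameters are illustrative; the scripts expose them as `--prodigy_*` flags):

```python
import torch
from prodigyopt import Prodigy

model = torch.nn.Linear(4, 4)
optimizer = Prodigy(
    model.parameters(),
    lr=1.0,  # Prodigy adapts the effective step size on its own
    betas=(0.9, 0.99),
    weight_decay=1e-2,
    use_bias_correction=True,
    safeguard_warmup=True,
)
```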
-
- 26 Dec, 2023 1 commit
Sayak Paul authored
* add: script to train lcm lora for sdxl with 🤗 datasets
* suit up the args.
* remove comments.
* fix num_update_steps
* fix batch unmarshalling
* fix num_update_steps_per_epoch
* fix; dataloading.
* fix microconditions.
* unconditional predictions debug
* fix batch size.
* no need to use use_auth_token
* Apply suggestions from code review Co-authored-by: Suraj Patil <surajp815@gmail.com>
* make vae encoding batch size an arg
* final serialization in kohya
* style
* state dict rejigging
* feat: no separate teacher unet.
* debug
* fix state dict serialization
* debug
* debug
* debug
* remove prints.
* remove kohya utility and make style
* fix serialization
* fix
* add test
* add peft dependency.
* add: peft
* remove peft
* autocast device determination from accelerator
* autocast
* reduce lora rank.
* remove unneeded space
* Apply suggestions from code review Co-authored-by: Suraj Patil <surajp815@gmail.com>
* style
* remove prompt dropout.
* also save in native diffusers ckpt format.
* debug
* debug
* debug
* better formation of the null embeddings.
* remove space.
* autocast fixes.
* autocast fix.
* hacky
* remove lora_sayak
* Apply suggestions from code review Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* style
* make log validation leaner.
* move back enabled in.
* fix: log_validation call.
* add: checkpointing tests
* taking my chances to see if disabling autocasting has any effect?
* start debugging
* name
* name
* name
* more debug
* more debug
* index
* remove index.
* print length
* print length
* print length
* move unet.train() after add_adapter()
* disable some prints.
* enable_adapters() manually.
* remove prints.
* some changes.
* fix params_to_optimize
* more fixes
* debug
* debug
* remove print
* disable grad for certain contexts.
* Add support for IPAdapterFull (#5911)
* Add support for IPAdapterFull Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> --------- Co-authored-by: YiYi Xu <yixu310@gmail.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix a bug in `add_noise` function (#6085)
* fix
* copies --------- Co-authored-by: yiyixuxu <yixu310@gmail.com>
* [Advanced Diffusion Script] Add Widget default text (#6100) add widget
* [Advanced Training Script] Fix pipe example (#6106)
* IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901)
* adapter for StableDiffusionControlNetImg2ImgPipeline
* fix-copies
* fix-copies --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* IP adapter support for most pipelines (#5900)
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
* update tests
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py
* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
* revert changes to sd_attend_and_excite and sd_upscale
* make style
* fix broken tests
* update ip-adapter implementation to latest
* apply suggestions from review --------- Co-authored-by: YiYi Xu <yixu310@gmail.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix: lora_alpha
* make vae casting conditional
* param upcasting
* propagate comments from https://github.com/huggingface/diffusers/pull/6145 Co-authored-by: dg845 <dgu8957@gmail.com>
* [Peft] fix saving / loading when unet is not "unet" (#6046)
* [Peft] fix saving / loading when unet is not "unet"
* Update src/diffusers/loaders/lora.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* undo stablediffusion-xl changes
* use unet_name to get unet for lora helpers
* use unet_name --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* [Wuerstchen] fix fp16 training and correct lora args (#6245) fix fp16 training Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* [docs] fix: animatediff docs (#6339) fix: animatediff docs
* add: note about the new script in readme_sdxl.
* Revert "[Peft] fix saving / loading when unet is not "unet" (#6046)" This reverts commit 4c7e983bb5929320bab08d70333eeb93f047de40.
* Revert "[Wuerstchen] fix fp16 training and correct lora args (#6245)" This reverts commit 0bb9cf0216e501632677895de6574532092282b5.
* Revert "[docs] fix: animatediff docs (#6339)" This reverts commit 11659a6f74b5187f601eeeeeb6f824dda73d0627.
* remove tokenize_prompt().
* assistive comments around enable_adapters() and disable_adapters().
---------
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Fabio Rigano <57982783+fabiorigano@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
Co-authored-by: Charchit Sharma <charchitsharma11@gmail.com>
Co-authored-by: Aryan V S <contact.aryanvs@gmail.com>
Co-authored-by: dg845 <dgu8957@gmail.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
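A sketch of how a LoRA produced by the new LCM-LoRA SDXL script can be used at inference (the checkpoint path is a placeholder):

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("path/to/trained/lcm-lora")  # placeholder path

# LCM-LoRA enables few-step sampling with little or no guidance
image = pipe(
    "a photo of an astronaut", num_inference_steps=4, guidance_scale=1.0
).images[0]
```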
-
- 14 Dec, 2023 1 commit
Linoy Tsaban authored
[advanced dreambooth lora sdxl training script] load pipeline for inference only if validation prompt is used (#6171) * load pipeline for inference only if validation prompt is used * move things outside * load pipeline for inference only if validation prompt is used * fix readme when validation prompt is used --------- Co-authored-by: linoytsaban <linoy@huggingface.co> Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
-
- 08 Dec, 2023 2 commits
apolinário authored
-
apolinário authored
add widget
-
- 06 Dec, 2023 1 commit
apolinário authored
* add cache latents * style
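The latent-caching idea, sketched; `train_dataloader` and `vae` are assumed from the training script. Encoding all images through the VAE once up front lets the VAE be freed for the rest of training:

```python
import torch

latents_cache = []
for batch in train_dataloader:
    with torch.no_grad():
        pixel_values = batch["pixel_values"].to(vae.device, dtype=vae.dtype)
        latents_cache.append(vae.encode(pixel_values).latent_dist)
# the VAE can then be deleted or moved off-GPU for the rest of training
```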
-
- 05 Dec, 2023 1 commit
apolinário authored
* Update train_dreambooth_lora_sdxl_advanced.py * remove global function args from DreamBoothDataset class * style * style --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 04 Dec, 2023 3 commits
Linoy Tsaban authored
* improve help tags * style fix * changes token_abstraction type to string. support multiple concepts for pivotal tuning using a comma-separated string. * style fixup * changed logger to warning (not yet available) * moved the token_abstraction parsing to be in the same block as where we create the mapping of identifier to token --------- Co-authored-by: Linoy <linoy@huggingface.co>
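A sketch of the comma-separated `token_abstraction` parsing described above, mapping each concept identifier to its newly inserted tokens (variable names and token format are illustrative):

```python
token_abstraction_arg = "TOK1,TOK2"  # e.g. --token_abstraction="TOK1,TOK2"
token_abstraction_list = [t.strip() for t in token_abstraction_arg.split(",")]

# map each concept identifier to its newly inserted trained tokens
num_new_tokens_per_abstraction = 2
token_abstraction_dict = {}
token_idx = 0
for abstraction in token_abstraction_list:
    token_abstraction_dict[abstraction] = [
        f"<s{token_idx + i}>" for i in range(num_new_tokens_per_abstraction)
    ]
    token_idx += num_new_tokens_per_abstraction

print(token_abstraction_dict)  # {'TOK1': ['<s0>', '<s1>'], 'TOK2': ['<s2>', '<s3>']}
```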
-
Levi McCallum authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Linoy Tsaban authored
* improve help tags * style fix --------- Co-authored-by: Linoy <linoy@huggingface.co>
-
- 01 Dec, 2023 1 commit
Patrick von Platen authored
* Post Release: v0.24.0 * postpone deprecation * postpone deprecation * Add model_index.json
-