- 22 Apr, 2025 1 commit
-
-
Linoy Tsaban authored
* initial commit
* Update examples/dreambooth/train_dreambooth_lora_hidream.py (Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>)
* move prompt embeds, pooled embeds outside
* Update examples/dreambooth/train_dreambooth_lora_hidream.py (Co-authored-by: hlky <hlky@hlky.ac>)
* fix import
* fix import and tokenizer 4, text encoder 4 loading
* te
* prompt embeds
* fix naming
* shapes
* initial commit to add HiDreamImageLoraLoaderMixin
* fix init
* add tests
* loader
* fix model input
* add code example to readme
* fix default max length of text encoders
* prints
* nullify training cond in unpatchify for temp fix to incompatible shaping of transformer output during training
* smol fix
* unpatchify
* fix validation
* flip pred and loss
* fix shift!!!
* revert unpatchify changes (for now)
* smol fix
* Apply style fixes
* workaround moe training
* remove prints
* to reduce some memory, keep vae in `weight_dtype` same as we have for flux (as it's the same vae) https://github.com/huggingface/diffusers/blob/bbd0c161b55ba2234304f1e6325832dd69c60565/examples/dreambooth/train_dreambooth_lora_flux.py#L1207
* refactor to align with HiDream refactor
* add support for cpu offloading of text encoders
* Apply style fixes
* adjust lr and rank for train example
* fix copies
* Apply style fixes
* update README
* fix license
* keep prompt2,3,4 as None in validation
* remove reverse ode comment
* Update examples/dreambooth/train_dreambooth_lora_hidream.py (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* vae offload change
* fix text encoder offloading
* Apply style fixes
* cleaner to_kwargs
* fix module name in copied from
* add requirements
* fix offloading
* update transformers version in reqs
* try AutoTokenizer
* Apply style fixes
* empty commit
* Delete tests/lora/test_lora_layers_hidream.py
* change tokenizer_4 to load with AutoTokenizer as well
* make text_encoder_four and tokenizer_four configurable
* save model card
* revert T5
* fix test
* remove non diffusers lumina2 conversion
---------
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
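The memory notes above (keeping the VAE in `weight_dtype` and offloading the text encoders once prompt embeddings are cached) boil down to a pattern like the sketch below; the helper and variable names are illustrative, not the exact ones used in train_dreambooth_lora_hidream.py.

```python
# Hedged sketch of the memory-saving pattern referenced above; names are illustrative.
import torch


def cache_prompt_embeds_and_offload(vae, text_encoders, encode_fn, device,
                                    weight_dtype=torch.bfloat16, offload=True):
    """Keep the VAE in the training dtype and free text-encoder memory after caching embeds.

    `encode_fn` is a hypothetical callable returning (prompt_embeds, pooled_prompt_embeds).
    """
    # Same VAE as Flux, so it can live in the lower-precision training dtype.
    vae.to(device, dtype=weight_dtype)

    # Prompt embeddings are computed once, outside the training loop.
    with torch.no_grad():
        prompt_embeds, pooled_prompt_embeds = encode_fn()

    # Once cached, the (large) text encoders can be moved off the accelerator.
    if offload:
        for text_encoder in text_encoders:
            text_encoder.to("cpu")
        if torch.cuda.is_available():
            torch.cuda.empty_cache()

    return prompt_embeds, pooled_prompt_embeds
```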
-
- 21 Apr, 2025 2 commits
-
-
Kenneth Gerald Hamilton authored
[train_dreambooth_lora_sdxl.py] Fix the LR Schedulers when num_train_epochs is passed in a distributed training env (#11240)
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
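The core of this fix: when only `--num_train_epochs` is given, the total step count handed to `get_scheduler` has to account for how the dataloader is sharded across processes, otherwise the LR schedule ends too early or too late under multi-GPU training. A minimal sketch of that computation, with assumed argument names:

```python
# Hedged sketch of the scheduler-step computation; the helper name and arguments are assumptions.
import math


def scheduler_training_steps(num_examples, train_batch_size, gradient_accumulation_steps,
                             num_train_epochs, num_processes):
    # Batches seen by a single process after the dataloader is sharded across processes.
    batches_per_process = math.ceil(num_examples / (train_batch_size * num_processes))
    # Optimizer updates per epoch on that process.
    updates_per_epoch = math.ceil(batches_per_process / gradient_accumulation_steps)
    # get_scheduler() is stepped by every process, so scale the total back up.
    return num_train_epochs * updates_per_epoch * num_processes
```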
-
Linoy Tsaban authored
* add fix
* Apply style fixes
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 15 Apr, 2025 1 commit
-
-
Sayak Paul authored
* post release
* update
* fix deprecations
* remaining
* update
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 09 Apr, 2025 1 commit
-
-
Dhruv Nair authored
* update * update * update * update
-
- 08 Apr, 2025 2 commits
-
-
Linoy Tsaban authored
* remove custom scheduler
* update requirements.txt
* log_validation with mixed precision
* add intermediate embeddings saving when checkpointing is enabled
* remove comment
* fix validation
* add unwrap_model for accelerator, torch.no_grad context for validation, fix accelerator.accumulate call in advanced script
* revert unwrap_model change temp
* add .module to address distributed training bug + replace accelerator.unwrap_model with unwrap_model
* changes to align advanced script with canonical script
* make changes for distributed training + unify unwrap_model calls in advanced script
* add module.dtype fix to dreambooth script
* unify unwrap_model calls in dreambooth script
* fix condition in validation run
* mixed precision
* Update examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>)
* smol style change
* change autocast
* Apply style fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
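Several bullets above (the `.module` fix and the unified `unwrap_model` calls) reduce to one pattern: DDP and `torch.compile` wrap the model, so attributes such as `dtype` must be read from the underlying module. A rough sketch under assumed names:

```python
# Hedged sketch of the unwrapping pattern; `accelerator` is an accelerate.Accelerator
# and the helper names here are illustrative rather than the script's exact ones.
def unwrap_model(accelerator, model):
    model = accelerator.unwrap_model(model)
    # torch.compile keeps the original module on `_orig_mod`.
    return getattr(model, "_orig_mod", model)


def module_dtype(model):
    # Under DistributedDataParallel the real module sits behind `.module`.
    module = model.module if hasattr(model, "module") else model
    return next(module.parameters()).dtype
```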
-
Álvaro Somoza authored
* initial
* Update examples/dreambooth/train_dreambooth_lora_sdxl.py (Co-authored-by: hlky <hlky@hlky.ac>)
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
-
- 06 Mar, 2025 1 commit
-
-
Jun Yeop Na authored
[train_dreambooth_lora.py] Fix the LR Schedulers when `num_train_epochs` is passed in a distributed training env (#10973)
* updated train_dreambooth_lora to fix the LR schedulers for `num_train_epochs` in distributed training env
* fixed formatting
* remove trailing newlines
* fixed style error
-
- 04 Mar, 2025 1 commit
-
-
Alexey Zolotenkov authored
* Fix seed initialization to handle args.seed = 0 correctly
* Apply style fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
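The underlying bug class is a truthiness check that skips seeding when the seed is 0; a minimal sketch of the corrected guard (helper name assumed):

```python
# Hedged sketch: 0 is a valid seed, so compare against None instead of relying on truthiness.
from accelerate.utils import set_seed


def maybe_set_seed(seed):
    if seed is not None:  # `if seed:` would silently skip seed == 0
        set_seed(seed)
```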
-
- 24 Feb, 2025 1 commit
-
-
SahilCarterr authored
Fix fp16 bug
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 20 Feb, 2025 1 commit
-
-
Sayak Paul authored
* feat: lora support for Lumina2.
* fix-copies.
* updates
* docs.
* fix
* add: training script.
* tests
* updates
* major updates.
* updates
* fixes
* docs.
* updates
-
- 06 Feb, 2025 1 commit
-
-
Leo Jiang authored
* NPU Adaption for Sanna
* [bugfix] NPU Adaption for Sanna
---------
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 27 Jan, 2025 1 commit
-
-
hlky authored
* [training] Convert to ImageFolder script * make
-
- 24 Jan, 2025 1 commit
-
-
Leo Jiang authored
* NPU Adaption for Sanna
---------
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 21 Jan, 2025 3 commits
-
-
Muyang Li authored
Remove the FP32 Wrapper
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
jiqing-feng authored
* enable dreambooth_lora on other devices
* enable xpu
* check cuda device before empty cache
* fix comment
* import free_memory
---------
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
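"check cuda device before empty cache" points at making the cache-clearing step backend-aware so the script also runs on XPU and MPS machines; diffusers ships a `free_memory` utility for this, and the standalone sketch below only illustrates the idea:

```python
# Hedged sketch of backend-aware cache clearing; every call is guarded by an availability check.
import gc

import torch


def free_memory():
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()
    elif torch.backends.mps.is_available():
        torch.mps.empty_cache()
```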
-
Sayak Paul authored
change licensing to 2025 from 2024.
-
- 15 Jan, 2025 1 commit
-
-
Leo Jiang authored
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
-
- 30 Dec, 2024 1 commit
-
-
Sayak Paul authored
* add ds support to lora sd3. (Co-authored-by: leisuzz <jiangshuonb@gmail.com>)
* style.
---------
Co-authored-by: leisuzz <jiangshuonb@gmail.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
- 23 Dec, 2024 2 commits
-
-
Sayak Paul authored
* post release 0.32.0
* style
-
Sayak Paul authored
* sana lora training tests and misc.
* remove push to hub
* Update examples/dreambooth/train_dreambooth_lora_sana.py (Co-authored-by: Aryan <aryan@huggingface.co>)
---------
Co-authored-by: Aryan <aryan@huggingface.co>
-
- 19 Dec, 2024 1 commit
-
-
Sayak Paul authored
Update README_sana.md to update the default model
-
- 18 Dec, 2024 2 commits
-
-
Sayak Paul authored
fix: reamde -> readme
-
Sayak Paul authored
* feat: lora support for SANA.
* make fix-copies
* rename test class.
* attention_kwargs -> cross_attention_kwargs.
* Revert "attention_kwargs -> cross_attention_kwargs." This reverts commit 23433bf9bccc12e0f2f55df26bae58a894e8b43b.
* exhaust 119 max line limit
* sana lora fine-tuning script.
* readme
* add a note about the supported models.
* Apply suggestions from code review (Co-authored-by: Aryan <aryan@huggingface.co>)
* style
* docs for attention_kwargs.
* remove lora_scale from pag pipeline.
* copy fix
---------
Co-authored-by: Aryan <aryan@huggingface.co>
-
- 12 Dec, 2024 1 commit
-
-
Ethan Smith authored
* fix min-snr implementation https://github.com/kohya-ss/sd-scripts/blob/main/library/custom_train_functions.py#L66
* Update train_dreambooth.py: fix variable name mse_loss_weights
* fix divisor
* make style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
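For context, the min-SNR-gamma weighting this commit corrects scales each timestep's MSE loss by min(SNR, γ), divided by SNR for ε-prediction or by SNR + 1 for v-prediction (the "fix divisor" bullet). A hedged sketch, using `compute_snr` from diffusers.training_utils; the wrapper function itself is illustrative:

```python
# Hedged sketch of min-SNR-gamma loss weighting with the corrected divisor per prediction type.
import torch
from diffusers.training_utils import compute_snr


def min_snr_weights(noise_scheduler, timesteps, snr_gamma):
    snr = compute_snr(noise_scheduler, timesteps)
    # Clamp the SNR at gamma before dividing.
    weights = torch.stack([snr, snr_gamma * torch.ones_like(snr)], dim=1).min(dim=1)[0]
    if noise_scheduler.config.prediction_type == "v_prediction":
        # For v-prediction the divisor is SNR + 1, following the reference implementation.
        return weights / (snr + 1)
    return weights / snr
```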
-
- 25 Nov, 2024 1 commit
-
-
SkyCol authored
Add files via upload
-
- 24 Nov, 2024 1 commit
-
-
Linoy Tsaban authored
* smol change to fix checkpoint saving & resuming (as done in train_dreambooth_sd3.py) * style * modify comment to explain reasoning behind hidden size check
-
- 19 Nov, 2024 1 commit
-
-
Linoy Tsaban authored
* memory improvement as done here: https://github.com/huggingface/diffusers/pull/9829
* fix bug
* style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 08 Nov, 2024 1 commit
-
-
SahilCarterr authored
* fix use_dora
* fix style and quality
* fix use_dora with peft version
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
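The use_dora fix amounts to only forwarding the flag to `LoraConfig` when the installed peft version understands it; a rough sketch of that gate (the 0.9.0 threshold here is an assumption for illustration):

```python
# Hedged sketch: pass `use_dora` to LoraConfig only when the installed peft supports it.
# The 0.9.0 version threshold is an assumption, not taken from the script.
from packaging import version

import peft
from peft import LoraConfig


def make_lora_config(rank, target_modules, use_dora=False):
    kwargs = {}
    if use_dora:
        if version.parse(peft.__version__) >= version.parse("0.9.0"):
            kwargs["use_dora"] = True
        else:
            raise ValueError("use_dora requires a newer peft release; please upgrade peft.")
    return LoraConfig(r=rank, lora_alpha=rank, init_lora_weights="gaussian",
                      target_modules=target_modules, **kwargs)
```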
-
- 06 Nov, 2024 1 commit
-
-
SahilCarterr authored
* updated encode prompt and clip encode prompt
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 01 Nov, 2024 3 commits
-
-
Leo Jiang authored
* Improve NPU performance
* [bugfix] bugfix for npu free memory
* Reduce memory cost for flux training process
---------
Co-authored-by: 蒋硕 <jiangshuo9@h-partners.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Boseong Jeon authored
Handling mixed precision and add unwrap
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
Leo Jiang authored
* NPU implementation for FLUX
---------
Co-authored-by: 蒋硕 <jiangshuo9@h-partners.com>
-
- 31 Oct, 2024 1 commit
-
-
Sayak Paul authored
* use the lr when using 8bit adam.
* remove lr as we pack it in params_to_optimize.
---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
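These two bullets describe carrying the learning rate inside each entry of `params_to_optimize` so it is respected regardless of which optimizer class (including bitsandbytes' 8-bit AdamW) consumes the groups; a sketch with assumed names:

```python
# Hedged sketch: carry the lr inside each param group instead of relying on the optimizer's
# default, so 8-bit Adam and regular AdamW behave the same way.
import torch


def build_optimizer(unet_params, text_encoder_params, lr, text_encoder_lr, use_8bit_adam=False):
    params_to_optimize = [
        {"params": unet_params, "lr": lr},
        {"params": text_encoder_params, "lr": text_encoder_lr},
    ]
    if use_8bit_adam:
        import bitsandbytes as bnb  # optional dependency
        optimizer_cls = bnb.optim.AdamW8bit
    else:
        optimizer_cls = torch.optim.AdamW
    # No global lr argument: each group already carries its own.
    return optimizer_cls(params_to_optimize, betas=(0.9, 0.999), weight_decay=1e-2, eps=1e-8)
```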
-
- 28 Oct, 2024 3 commits
-
-
Linoy Tsaban authored
* make lora target modules configurable and change the default
* style
* make lora target modules configurable and change the default
* fix bug when using prodigy and training te
* fix mixed precision training as proposed in https://github.com/huggingface/diffusers/pull/9565 for full dreambooth as well
* add test and notes
* style
* address sayak's comments
* style
* fix test
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
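"make lora target modules configurable" corresponds to a CLI flag whose comma-separated value becomes `target_modules` of the `LoraConfig`; in the sketch below the flag name and the default module list are assumptions, not the script's exact values:

```python
# Hedged sketch of a configurable --lora_layers flag feeding LoraConfig.target_modules.
import argparse

from peft import LoraConfig

parser = argparse.ArgumentParser()
parser.add_argument(
    "--lora_layers",
    type=str,
    default=None,
    help="Comma-separated module names to apply LoRA to, e.g. 'to_k,to_q,to_v,to_out.0'.",
)
args = parser.parse_args()

if args.lora_layers is not None:
    target_modules = [layer.strip() for layer in args.lora_layers.split(",")]
else:
    target_modules = ["to_k", "to_q", "to_v", "to_out.0"]  # assumed default list

lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    init_lora_weights="gaussian",
    target_modules=target_modules,
)
```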
-
Linoy Tsaban authored
* configurable layers
* update README
* style
* add test
* style
* add layer test, update readme, add nargs
* readme
* test style
* remove print, change nargs
* test arg change
* style
* revert nargs 2/2
* address sayak's comments
* style
* address sayak's comments
-
Biswaroop authored
[Fix] remove setting lr for T5 text encoder when using prodigy in flux dreambooth lora script (#9473)
* fix: removed setting of text encoder lr for T5 as it's not being tuned
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
- 25 Oct, 2024 1 commit
-
-
Ina authored
* flux pipeline: readability enhancement.
-
- 23 Oct, 2024 1 commit
-
-
Linoy Tsaban authored
* improve readme
* style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 22 Oct, 2024 1 commit
-
-
Sayak Paul authored
* post-release * style
-