- 05 May, 2025 3 commits
Sayak Paul authored
* feat: enable quantization for hidream lora training
* better handle compute dtype
* finalize
* fix dtype
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
Evan Han authored
[train_dreambooth_lora_lumina2] Add LANCZOS as the default interpolation mode for image resizing (#11491)
MinJu-Ha authored
[train_dreambooth_lora_sdxl] Add --image_interpolation_mode option for image resizing, defaulting to lanczos (#11490)
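The two commits above both route image resizing through a configurable interpolation mode with LANCZOS as the default. A minimal sketch of how such a flag can be wired up (the `build_parser` helper and `SUPPORTED_MODES` list are illustrative assumptions, not the actual diff; the parsed string would then be mapped to a real filter, e.g. `transforms.InterpolationMode[mode.upper()]` in torchvision):

```python
import argparse

# Hypothetical sketch: expose an --image_interpolation_mode flag that
# defaults to lanczos, as the scripts in the commits above now do.
SUPPORTED_MODES = ["lanczos", "bilinear", "bicubic", "nearest"]

def build_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--image_interpolation_mode",
        type=str,
        default="lanczos",
        choices=SUPPORTED_MODES,
        help="Resampling filter used when resizing training images.",
    )
    return parser

# With no flag given, parsing falls back to the lanczos default.
default_args = build_parser().parse_args([])
```

Restricting the value with `choices` makes an unsupported filter fail at argument parsing rather than deep inside the training loop.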
- 01 May, 2025 1 commit
co63oc authored
* Fix typos in docs and comments
* Apply style fixes
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 28 Apr, 2025 1 commit
Linoy Tsaban authored
remove unnecessary pipeline moving to cpu in validation
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 26 Apr, 2025 1 commit
Mert Erbak authored
[train_dreambooth_lora.py] Set LANCZOS as default interpolation mode for resizing
- 24 Apr, 2025 2 commits
co63oc authored
Linoy Tsaban authored
* add pre-computation of prompt embeddings when custom prompts are used as well
* save model card even if model is not pushed to hub
* remove scheduler initialization from code example - no longer necessary (it's now in the base model's config)
* add skip_final_inference - allows running with validation but skipping the final loading of the pipeline with the lora weights, to reduce memory requirements
* pre-encode validation prompt as well
* change default trained modules
* address comments + change encoding of validation prompt (before, it was only pre-encoded if custom prompts were provided, but it should be pre-encoded either way)
* fix validation_embeddings definition
* fix final inference condition
* fix pipeline deletion in last inference
* layers
* remove readme remarks on only pre-computing when instance prompt is provided and change example to 3d icons
* Apply style fixes
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 23 Apr, 2025 2 commits
Ishan Dutta authored
Ameer Azam authored
Small change: requirements_sana.txt to requirements_hidream.txt
- 22 Apr, 2025 1 commit
Linoy Tsaban authored
* initial commit
* move prompt embeds, pooled embeds outside
* fix import and tokenizer 4, text encoder 4 loading
* fix naming and shapes
* initial commit to add HiDreamImageLoraLoaderMixin
* fix init, add tests, loader
* fix model input
* add code example to readme
* fix default max length of text encoders
* nullify training cond in unpatchify as a temp fix for incompatible shaping of transformer output during training
* fix validation
* flip pred and loss
* fix shift
* revert unpatchify changes (for now)
* workaround moe training
* remove prints
* to reduce some memory, keep vae in `weight_dtype`, same as for flux (it's the same vae): https://github.com/huggingface/diffusers/blob/bbd0c161b55ba2234304f1e6325832dd69c60565/examples/dreambooth/train_dreambooth_lora_flux.py#L1207
* refactor to align with HiDream refactor
* add support for cpu offloading of text encoders
* adjust lr and rank for train example
* fix copies
* update README
* fix license
* keep prompt2,3,4 as None in validation
* remove reverse ode comment
* vae offload change
* fix text encoder offloading
* cleaner to_kwargs
* fix module name in copied from
* add requirements
* fix offloading
* update transformers version in reqs
* try AutoTokenizer
* Delete tests/lora/test_lora_layers_hidream.py
* change tokenizer_4 to load with AutoTokenizer as well
* make text_encoder_four and tokenizer_four configurable
* save model card
* revert T5
* fix test
* remove non-diffusers lumina2 conversion
* Apply style fixes
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 21 Apr, 2025 2 commits
Kenneth Gerald Hamilton authored
[train_dreambooth_lora_sdxl.py] Fix the LR Schedulers when num_train_epochs is passed in a distributed training env (#11240)
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
Linoy Tsaban authored
* add fix
* Apply style fixes
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 15 Apr, 2025 1 commit
Sayak Paul authored
* post release
* fix deprecations
* remaining updates
Co-authored-by: YiYi Xu <yixu310@gmail.com>
- 09 Apr, 2025 1 commit
Dhruv Nair authored
update
- 08 Apr, 2025 2 commits
Linoy Tsaban authored
* remove custom scheduler
* update requirements.txt
* log_validation with mixed precision
* add intermediate embeddings saving when checkpointing is enabled
* fix validation
* add unwrap_model for accelerator, torch.no_grad context for validation, fix accelerator.accumulate call in advanced script
* add .module to address distributed training bug + replace accelerator.unwrap_model with unwrap_model
* changes to align advanced script with canonical script
* make changes for distributed training + unify unwrap_model calls in advanced script
* add module.dtype fix to dreambooth script
* unify unwrap_model calls in dreambooth script
* fix condition in validation run
* mixed precision and autocast changes
* Apply style fixes
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Álvaro Somoza authored
* initial
* update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
- 06 Mar, 2025 1 commit
Jun Yeop Na authored
[train_dreambooth_lora.py] Fix the LR Schedulers when `num_train_epochs` is passed in a distributed training env (#10973)
* updated train_dreambooth_lora to fix the LR schedulers for `num_train_epochs` in distributed training env
* fixed formatting, removed trailing newlines, fixed style error
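This LR-scheduler fix (and the matching SDXL fix above, #11240) concerns deriving the scheduler's total step count from `num_train_epochs` in a multi-process run: each process steps the scheduler, so the totals must be scaled by the world size or the schedule finishes too early. A minimal sketch of the arithmetic, assuming hypothetical names (`scheduler_total_steps` is not a function in the training scripts):

```python
import math

def scheduler_total_steps(num_train_epochs, num_batches_per_process,
                          gradient_accumulation_steps, num_processes):
    # Optimizer (and scheduler) steps per epoch on one process: batches are
    # grouped by gradient accumulation before each step.
    steps_per_epoch = math.ceil(num_batches_per_process / gradient_accumulation_steps)
    max_train_steps = num_train_epochs * steps_per_epoch
    # Every process advances the LR scheduler, so the scheduler must be
    # constructed with world-size-scaled totals to decay at the right rate.
    return max_train_steps * num_processes
```

For example, 2 epochs of 10 batches per process with no accumulation on 4 GPUs yields 80 scheduler steps, not 20.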
- 04 Mar, 2025 1 commit
Alexey Zolotenkov authored
* Fix seed initialization to handle args.seed = 0 correctly
* Apply style fixes
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
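The seed bug fixed here is a classic truthiness trap: `if args.seed:` silently skips seeding when the user passes `--seed 0`, because `0` is falsy. A sketch of the correct pattern (the `apply_seed` helper and its `set_seed` parameter are illustrative, not the script's actual code):

```python
def apply_seed(seed, set_seed):
    """Seed the run iff a seed was provided, treating 0 as a valid seed.

    Buggy pattern:   `if seed:`          -> skips seed == 0
    Correct pattern: `if seed is not None:` -> only skips an absent seed
    """
    if seed is not None:
        set_seed(seed)  # e.g. accelerate.utils.set_seed in the real script
        return True
    return False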
- 24 Feb, 2025 1 commit
SahilCarterr authored
Fix fp16 bug
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 20 Feb, 2025 1 commit
Sayak Paul authored
* feat: lora support for Lumina2
* fix-copies
* add: training script
* tests, docs, major updates and fixes
- 06 Feb, 2025 1 commit
Leo Jiang authored
* NPU adaption for Sana
* [bugfix] NPU adaption for Sana
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 27 Jan, 2025 1 commit
hlky authored
* [training] Convert to ImageFolder script
* make
- 24 Jan, 2025 1 commit
Leo Jiang authored
NPU adaption for Sana
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 21 Jan, 2025 3 commits
Muyang Li authored
Remove the FP32 Wrapper
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
jiqing-feng authored
* enable dreambooth_lora on other devices
* enable xpu
* check cuda device before empty cache
* fix comment
* import free_memory
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
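The "check cuda device before empty cache" change above reflects a general pattern: clear an accelerator's cache only when that backend is actually present, instead of unconditionally calling `torch.cuda.empty_cache()`. A generic sketch of the idea, with the `free_memory` name and the injected `backends` mapping both hypothetical (the real utility lives in diffusers' training helpers and queries torch directly):

```python
import gc

def free_memory(backends):
    """Garbage-collect, then clear the cache of each *available* accelerator.

    `backends` maps a device name to a pair of callables
    (is_available, empty_cache), so CUDA is no longer assumed and an XPU
    (or other) backend can be registered alongside it.
    """
    gc.collect()
    cleared = []
    for name, (is_available, empty_cache) in backends.items():
        if is_available():  # skip backends that are not present on this host
            empty_cache()
            cleared.append(name)
    return cleared
```

On an XPU-only machine this would call the XPU cache-clearing hook and skip the CUDA one, which is exactly the failure mode the commit guards against.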
Sayak Paul authored
change licensing to 2025 from 2024
- 15 Jan, 2025 1 commit
Leo Jiang authored
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
- 30 Dec, 2024 1 commit
Sayak Paul authored
* add ds support to lora sd3
* style
Co-authored-by: leisuzz <jiangshuonb@gmail.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
- 23 Dec, 2024 2 commits
Sayak Paul authored
* post release 0.32.0
* style
Sayak Paul authored
* sana lora training tests and misc.
* remove push to hub
Co-authored-by: Aryan <aryan@huggingface.co>
- 19 Dec, 2024 1 commit
Sayak Paul authored
Update README_sana.md to update the default model
- 18 Dec, 2024 2 commits
Sayak Paul authored
fix: reamde -> readme
Sayak Paul authored
* feat: lora support for SANA
* make fix-copies
* rename test class
* attention_kwargs -> cross_attention_kwargs
* Revert "attention_kwargs -> cross_attention_kwargs." (reverts commit 23433bf9bccc12e0f2f55df26bae58a894e8b43b)
* exhaust 119 max line limit
* sana lora fine-tuning script
* readme
* add a note about the supported models
* docs for attention_kwargs
* remove lora_scale from pag pipeline
* copy fix
Co-authored-by: Aryan <aryan@huggingface.co>
- 12 Dec, 2024 1 commit
Ethan Smith authored
* fix min-snr implementation (following https://github.com/kohya-ss/sd-scripts/blob/main/library/custom_train_functions.py#L66)
* fix variable name mse_loss_weights in train_dreambooth.py
* fix divisor
* make style
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
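The min-SNR weighting this commit corrects clamps each timestep's signal-to-noise ratio at a cap gamma before dividing by the SNR, so low-noise timesteps (huge SNR) stop dominating the MSE loss. A scalar sketch of the epsilon-prediction case (the real code operates on tensors of per-timestep SNRs; the function name here is illustrative):

```python
def min_snr_weight(snr, gamma=5.0):
    """Min-SNR-gamma loss weight for epsilon prediction.

    weight = min(SNR_t, gamma) / SNR_t
    High-SNR timesteps get weight gamma / SNR_t < 1; timesteps with
    SNR_t <= gamma keep the plain MSE weight of 1.
    """
    if snr <= 0:
        raise ValueError("SNR must be positive")
    return min(snr, gamma) / snr
```

With the default gamma of 5, a timestep with SNR 10 is down-weighted to 0.5 while one with SNR 2 keeps full weight, matching the referenced kohya-ss implementation's shape for epsilon prediction (v-prediction uses a different divisor, which is what the "fix divisor" bullet concerns).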
- 25 Nov, 2024 1 commit
SkyCol authored
Add files via upload
- 24 Nov, 2024 1 commit
Linoy Tsaban authored
* smol change to fix checkpoint saving & resuming (as done in train_dreambooth_sd3.py)
* style
* modify comment to explain reasoning behind hidden size check
- 19 Nov, 2024 1 commit
Linoy Tsaban authored
* memory improvement as done in https://github.com/huggingface/diffusers/pull/9829
* fix bug
* style
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 08 Nov, 2024 1 commit
SahilCarterr authored
* fix use_dora
* fix style and quality
* fix use_dora with peft version
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 06 Nov, 2024 1 commit
SahilCarterr authored
* updated encode_prompt and CLIP encode_prompt
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>