- 08 May, 2025 1 commit
-
-
Linoy Tsaban authored
* add lora_alpha and lora_dropout
* Apply style fixes
* add lora_alpha and lora_dropout
* Apply style fixes
* revert lora_alpha until #11324 is merged
* Apply style fixes
* empty commit
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
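As a rough sketch of what these flags map to, the peft `LoraConfig` below shows how `lora_alpha` and `lora_dropout` typically enter a DreamBooth LoRA setup; the values and target modules are illustrative, not the script's literal code:

```python
from peft import LoraConfig

# Illustrative values; the real script reads these from its CLI flags.
rank, lora_alpha, lora_dropout = 16, 16, 0.0

transformer_lora_config = LoraConfig(
    r=rank,
    lora_alpha=lora_alpha,      # effective LoRA scaling is lora_alpha / r
    lora_dropout=lora_dropout,  # dropout applied to the LoRA branch during training
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
# transformer.add_adapter(transformer_lora_config)
```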
-
- 05 May, 2025 1 commit
-
-
Sayak Paul authored
* feat: enable quantization for hidream lora training. * better handle compute dtype. * finalize. * fix dtype. --------- Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
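A minimal sketch of what enabling quantization here can look like — loading the transformer in 4-bit NF4 with a bfloat16 compute dtype. The model id and exact wiring are assumptions, not the script's code:

```python
import torch
from diffusers import BitsAndBytesConfig, HiDreamImageTransformer2DModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # the "compute dtype" the message refers to
)
transformer = HiDreamImageTransformer2DModel.from_pretrained(
    "HiDream-ai/HiDream-I1-Full",   # illustrative checkpoint id
    subfolder="transformer",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)
```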
-
- 01 May, 2025 1 commit
-
-
co63oc authored
* Fix typos in docs and comments * Apply style fixes --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 28 Apr, 2025 1 commit
-
-
Linoy Tsaban authored
remove unnecessary moving of the pipeline to CPU in validation Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 24 Apr, 2025 1 commit
-
-
Linoy Tsaban authored
1. add pre-computation of prompt embeddings when custom prompts are used as well
2. save model card even if model is not pushed to hub
3. remove scheduler initialization from code example - not necessary anymore (it's now in the base model's config)
4. add skip_final_inference - to allow running with validation but skipping the final loading of the pipeline with the lora weights, to reduce memory requirements
* pre-encode validation prompt as well
* Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* pre-encode validation prompt as well
* Apply style fixes
* empty commit
* change default trained modules
* empty commit
* address comments + change encoding of validation prompt (before, it was only pre-encoded if custom prompts were provided, but it should be pre-encoded either way)
* Apply style fixes
* empty commit
* fix validation_embeddings definition
* fix final inference condition
* fix pipeline deletion in last inference
* Apply style fixes
* empty commit
* layers
* remove readme remarks on only pre-computing when instance prompt is provided and change example to 3d icons
* smol fix
* empty commit
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
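A minimal sketch of the pre-computation idea: encode the instance and validation prompts once up front, then free the text encoders before training. The `compute_text_embeddings` helper and the four-encoder setup are stand-ins for the script's own code:

```python
import gc
import torch

def precompute_prompt_embeddings(args, compute_text_embeddings, text_encoders):
    """Encode prompts once so the (large) text encoders can be freed before training."""
    with torch.no_grad():
        instance_embeds = compute_text_embeddings(args.instance_prompt)
        validation_embeds = (
            compute_text_embeddings(args.validation_prompt)
            if args.validation_prompt is not None
            else None
        )
    # Embeddings are cached; drop the encoder references so they can be collected.
    text_encoders.clear()
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    return instance_embeds, validation_embeds
```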
-
- 22 Apr, 2025 1 commit
-
-
Linoy Tsaban authored
* initial commit
* initial commit
* initial commit
* initial commit
* initial commit
* initial commit
* Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
* move prompt embeds, pooled embeds outside
* Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by: hlky <hlky@hlky.ac>
* Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by: hlky <hlky@hlky.ac>
* fix import
* fix import and tokenizer 4, text encoder 4 loading
* te
* prompt embeds
* fix naming
* shapes
* initial commit to add HiDreamImageLoraLoaderMixin
* fix init
* add tests
* loader
* fix model input
* add code example to readme
* fix default max length of text encoders
* prints
* nullify training cond in unpatchify for temp fix to incompatible shaping of transformer output during training
* smol fix
* unpatchify
* unpatchify
* fix validation
* flip pred and loss
* fix shift!!!
* revert unpatchify changes (for now)
* smol fix
* Apply style fixes
* workaround moe training
* workaround moe training
* remove prints
* to reduce some memory, keep vae in `weight_dtype`, same as we have for flux (as it's the same vae) https://github.com/huggingface/diffusers/blob/bbd0c161b55ba2234304f1e6325832dd69c60565/examples/dreambooth/train_dreambooth_lora_flux.py#L1207
* refactor to align with HiDream refactor
* refactor to align with HiDream refactor
* refactor to align with HiDream refactor
* add support for cpu offloading of text encoders
* Apply style fixes
* adjust lr and rank for train example
* fix copies
* Apply style fixes
* update README
* update README
* update README
* fix license
* keep prompt2,3,4 as None in validation
* remove reverse ode comment
* Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/train_dreambooth_lora_hidream.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* vae offload change
* fix text encoder offloading
* Apply style fixes
* cleaner to_kwargs
* fix module name in copied from
* add requirements
* fix offloading
* fix offloading
* fix offloading
* update transformers version in reqs
* try AutoTokenizer
* try AutoTokenizer
* Apply style fixes
* empty commit
* Delete tests/lora/test_lora_layers_hidream.py
* change tokenizer_4 to load with AutoTokenizer as well
* make text_encoder_four and tokenizer_four configurable
* save model card
* save model card
* revert T5
* fix test
* remove non diffusers lumina2 conversion
---------
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
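A small sketch of the memory-related choices mentioned above (VAE kept in `weight_dtype`, text encoders optionally left on CPU and moved to the accelerator only while encoding prompts). The `offload` flag and variable names are assumptions, not the script's exact code:

```python
import torch

def place_models(args, accelerator, vae, text_encoders, weight_dtype=torch.bfloat16):
    """Keep the VAE in weight_dtype and optionally leave the text encoders on CPU."""
    vae.to(accelerator.device, dtype=weight_dtype)
    for encoder in text_encoders:
        if args.offload:
            # Encoders stay on CPU; they only visit the GPU during prompt encoding.
            encoder.to("cpu")
        else:
            encoder.to(accelerator.device, dtype=weight_dtype)
```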
-
- 15 Apr, 2025 1 commit
-
-
Sayak Paul authored
* post release * update * fix deprecations * remaining * update --------- Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 09 Apr, 2025 1 commit
-
-
Dhruv Nair authored
* update * update * update * update
-
- 04 Mar, 2025 1 commit
-
-
Alexey Zolotenkov authored
* Fix seed initialization to handle args.seed = 0 correctly * Apply style fixes --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
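The fix boils down to checking the seed against `None` instead of relying on truthiness; a minimal sketch:

```python
from accelerate.utils import set_seed

def maybe_set_seed(seed):
    # A truthiness check ("if args.seed:") silently skips seeding when --seed 0
    # is passed, because 0 is falsy; comparing against None handles 0 correctly.
    if seed is not None:
        set_seed(seed)
```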
-
- 20 Feb, 2025 1 commit
-
-
Sayak Paul authored
* feat: lora support for Lumina2. * fix-copies. * updates * updates * docs. * fix * add: training script. * tests * updates * updates * major updates. * updates * fixes * docs. * updates * updates
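A hedged usage sketch of loading a LoRA trained with the new Lumina2 script; the repo id and LoRA path are illustrative:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", torch_dtype=torch.bfloat16  # illustrative base model
).to("cuda")
pipe.load_lora_weights("your-username/lumina2-dreambooth-lora")  # illustrative LoRA path
image = pipe("a photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("lumina2_lora.png")
```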
-
- 06 Feb, 2025 1 commit
-
-
Leo Jiang authored
* NPU adaptation for Sana * [bugfix] NPU adaptation for Sana --------- Co-authored-by: J石页 <jiangshuo9@h-partners.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
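A sketch of the kind of NPU adaptation these commits make — detecting the Ascend backend and flushing its cache instead of CUDA's. It assumes the `is_torch_npu_available` helper from diffusers and is not the literal patch:

```python
import gc
import torch
from diffusers.utils import is_torch_npu_available

def free_memory():
    """Flush accelerator caches on Ascend NPUs as well as CUDA GPUs."""
    gc.collect()
    if is_torch_npu_available():
        import torch_npu  # Ascend backend; provides the NPU cache API
        torch_npu.npu.empty_cache()
    elif torch.cuda.is_available():
        torch.cuda.empty_cache()
```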
-
- 24 Jan, 2025 1 commit
-
-
Leo Jiang authored
* NPU adaptation for Sana --------- Co-authored-by: J石页 <jiangshuo9@h-partners.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 21 Jan, 2025 1 commit
-
-
Sayak Paul authored
change licensing to 2025 from 2024.
-
- 15 Jan, 2025 1 commit
-
-
Leo Jiang authored
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
-
- 23 Dec, 2024 2 commits
-
-
Sayak Paul authored
* post release 0.32.0 * style
-
Sayak Paul authored
* sana lora training tests and misc. * remove push to hub * Update examples/dreambooth/train_dreambooth_lora_sana.py Co-authored-by: Aryan <aryan@huggingface.co> --------- Co-authored-by: Aryan <aryan@huggingface.co>
-
- 18 Dec, 2024 1 commit
-
-
Sayak Paul authored
* feat: lora support for SANA.
* make fix-copies
* rename test class.
* attention_kwargs -> cross_attention_kwargs.
* Revert "attention_kwargs -> cross_attention_kwargs." This reverts commit 23433bf9bccc12e0f2f55df26bae58a894e8b43b.
* exhaust 119 max line limit
* sana lora fine-tuning script.
* readme
* add a note about the supported models.
* Apply suggestions from code review Co-authored-by: Aryan <aryan@huggingface.co>
* style
* docs for attention_kwargs.
* remove lora_scale from pag pipeline.
* copy fix
---------
Co-authored-by: Aryan <aryan@huggingface.co>
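A hedged usage sketch of the SANA LoRA support, with the LoRA scale routed through `attention_kwargs` as the message describes; the checkpoint id, LoRA path, and scale value are illustrative:

```python
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",  # illustrative checkpoint
    torch_dtype=torch.bfloat16,
).to("cuda")
pipe.load_lora_weights("your-username/sana-dreambooth-lora")  # illustrative LoRA path
image = pipe(
    prompt="a photo of sks dog",
    attention_kwargs={"scale": 0.9},  # LoRA scale passed through attention_kwargs
).images[0]
```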
-
- 19 Nov, 2024 1 commit
-
-
Linoy Tsaban authored
* memory improvement as done here: https://github.com/huggingface/diffusers/pull/9829 * fix bug * fix bug * style --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 01 Nov, 2024 2 commits
-
-
Leo Jiang authored
* Improve NPU performance * [bugfix] bugfix for npu free memory * Reduce memory cost for flux training process --------- Co-authored-by: 蒋硕 <jiangshuo9@h-partners.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Boseong Jeon authored
Handle mixed precision and add unwrap Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
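A sketch of the mixed-precision handling and unwrapping, assuming `accelerator.unwrap_model` and the `cast_training_params` utility from diffusers; not the exact patch:

```python
import torch
from diffusers.training_utils import cast_training_params

def prepare_for_mixed_precision(accelerator, transformer, mixed_precision="fp16"):
    """Unwrap the accelerate-wrapped model and keep trainable LoRA params in fp32."""
    model = accelerator.unwrap_model(transformer)
    if mixed_precision == "fp16":
        # Only the LoRA parameters are trainable; upcast them for stable optimizer updates.
        cast_training_params([model], dtype=torch.float32)
    return model
```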
-
- 31 Oct, 2024 1 commit
-
-
Sayak Paul authored
* use the lr when using 8bit adam. * remove lr as we pack it in params_to_optimize. --------- Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
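A sketch of the 8-bit Adam fix: the learning rate travels inside each param group, so no separate `lr` argument is passed to the optimizer itself. Argument names mirror the scripts' usual flags but are assumptions here:

```python
import bitsandbytes as bnb

def build_optimizer(args, transformer_lora_params, text_lora_params=None):
    # Each group carries its own lr, so the optimizer gets no top-level lr.
    params_to_optimize = [{"params": transformer_lora_params, "lr": args.learning_rate}]
    if text_lora_params is not None:
        params_to_optimize.append({"params": text_lora_params, "lr": args.text_encoder_lr})
    return bnb.optim.AdamW8bit(
        params_to_optimize,
        betas=(args.adam_beta1, args.adam_beta2),
        weight_decay=args.adam_weight_decay,
        eps=args.adam_epsilon,
    )
```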
-
- 28 Oct, 2024 2 commits
-
-
Linoy Tsaban authored
* make lora target modules configurable and change the default * style * make lora target modules configurable and change the default * fix bug when using prodigy and training te * fix mixed precision training as proposed in https://github.com/huggingface/diffusers/pull/9565 for full dreambooth as well * add test and notes * style * address Sayak's comments * style * fix test --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
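A sketch of the configurable target modules, assuming a comma-separated `--lora_layers` flag as in the related scripts; the defaults shown are illustrative:

```python
from peft import LoraConfig

def build_lora_config(args):
    # A comma-separated --lora_layers flag overrides the default attention projections.
    if args.lora_layers is not None:
        target_modules = [layer.strip() for layer in args.lora_layers.split(",")]
    else:
        target_modules = ["to_k", "to_q", "to_v", "to_out.0"]
    return LoraConfig(
        r=args.rank,
        lora_alpha=args.rank,
        init_lora_weights="gaussian",
        target_modules=target_modules,
    )
```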
-
Biswaroop authored
[Fix] remove setting lr for T5 text encoder when using prodigy in flux dreambooth lora script (#9473) * fix: removed setting of text encoder lr for T5 as it's not being tuned * fix: removed setting of text encoder lr for T5 as it's not being tuned --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
- 25 Oct, 2024 1 commit
-
-
Ina authored
* flux pipeline: readability enhancement.
-
- 22 Oct, 2024 1 commit
-
-
Sayak Paul authored
* post-release * style
-
- 15 Oct, 2024 1 commit
-
-
0x名無し authored
* fixed issue #9350, Tensor is deprecated * ran make style
-
- 28 Sep, 2024 1 commit
-
-
Sayak Paul authored
* fix: retain memory utility. * fix * quality * free_memory.
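A minimal usage sketch of the `free_memory` utility as it exists in `diffusers.training_utils`; how this commit wires it into the scripts is not shown here:

```python
from diffusers.training_utils import free_memory

# Typical pattern after validation inference: drop the pipeline, then flush caches.
# del pipeline  # the validation pipeline built earlier in the script
free_memory()   # gc.collect() plus an empty_cache() on the active accelerator backend
```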
-
- 15 Sep, 2024 1 commit
-
-
Linoy Tsaban authored
* add ostris trainer to README & add cache latents of vae
* add ostris trainer to README & add cache latents of vae
* style
* readme
* add test for latent caching
* add ostris noise scheduler https://github.com/ostris/ai-toolkit/blob/9ee1ef2a0a2a9a02b92d114a95f21312e5906e54/toolkit/samplers/custom_flowmatch_sampler.py#L95
* style
* fix import
* style
* fix tests
* style
* --change upcasting of transformer?
* update readme according to main
* keep only latent caching
* add configurable param for final saving of trained layers - --upcast_before_saving
* style
* Update examples/dreambooth/README_flux.md Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/README_flux.md Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* use clear_objs_and_retain_memory from utilities
* style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
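A sketch of the latent-caching idea: encode every training image once so the VAE can be dropped for the rest of training. Variable names are stand-ins for the script's own objects:

```python
import torch

def cache_vae_latents(vae, train_dataloader, device, weight_dtype=torch.bfloat16):
    """Encode all training images once so the VAE can be freed during training."""
    latents_cache = []
    vae.to(device, dtype=weight_dtype)
    with torch.no_grad():
        for batch in train_dataloader:
            pixel_values = batch["pixel_values"].to(device, dtype=weight_dtype)
            latents_cache.append(vae.encode(pixel_values).latent_dist)
    return latents_cache  # the caller can now `del vae` and empty the cache
```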
-
- 14 Sep, 2024 1 commit
-
-
Leo Jiang authored
* Fix dtype error * [bugfix] Fixed the issue on sd3 dreambooth training * [bugfix] Fixed the issue on sd3 dreambooth training --------- Co-authored-by: 蒋硕 <jiangshuo9@h-partners.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 11 Sep, 2024 1 commit
-
-
Sayak Paul authored
fix some fast gpu tests.
-
- 14 Aug, 2024 1 commit
-
-
Álvaro Somoza authored
* post release * fix quality
-
- 12 Aug, 2024 1 commit
-
-
Linoy Tsaban authored
* add requirements + fix link to bghira's guide
* text encoder training fixes
* text encoder training fixes
* text encoder training fixes
* text encoder training fixes
* style
* add tests
* fix encode_prompt call
* style
* unpack_latents test
* fix lora saving
* remove default val for max_sequence_length in encode_prompt
* remove default val for max_sequence_length in encode_prompt
* style
* testing
* style
* testing
* testing
* style
* fix sizing issue
* style
* revert scaling
* style
* style
* scaling test
* style
* scaling test
* remove model pred operation left from pre-conditioning
* remove model pred operation left from pre-conditioning
* fix trainable params
* remove te2 from casting
* transformer to accelerator
* remove prints
* empty commit
-
- 09 Aug, 2024 1 commit
-
-
Linoy Tsaban authored
* initial commit - dreambooth for flux
* update transformer to be FluxTransformer2DModel
* update training loop and validation inference
* fix sd3->flux docs
* add guidance handling, not sure if it makes sense(?)
* initial dreambooth lora commit
* fix text_ids in compute_text_embeddings
* fix imports of static methods
* fix pipeline loading in readme, remove auto1111 docs for now
* fix pipeline loading in readme, remove auto1111 docs for now, remove some irrelevant text_encoder_3 refs
* Update examples/dreambooth/train_dreambooth_flux.py Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
* fix te2 loading and remove te2 refs from text encoder training
* fix tokenizer_2 initialization
* remove text_encoder training refs from lora script (for now)
* try with vae in bfloat16, fix model hook save
* fix tokenization
* fix static imports
* fix CLIP import
* remove text_encoder training refs (for now) from lora script
* fix minor bug in encode_prompt, add guidance def in lora script, ...
* fix unpack_latents args
* fix license in readme
* add "none" to weighting_scheme options for uniform sampling
* style
* adapt model saving - remove text encoder refs
* adapt model loading - remove text encoder refs
* initial commit for readme
* Update examples/dreambooth/train_dreambooth_lora_flux.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/train_dreambooth_lora_flux.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix vae casting
* remove precondition_outputs
* readme
* readme
* style
* readme
* readme
* update weighting scheme default & docs
* style
* add text_encoder training to lora script, change vae_scale_factor value in both
* style
* text encoder training fixes
* style
* update readme
* minor fixes
* fix te params
* fix te params
---------
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
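A sketch of the guidance handling for Flux-dev (the transformer is guidance-distilled and expects a per-sample guidance value, while checkpoints with `guidance_embeds=False` take `None`). The helper name and wiring are assumptions:

```python
import torch

def get_guidance(transformer, model_input, guidance_scale):
    # Flux-dev: feed a constant guidance value expanded to the batch size.
    if getattr(transformer.config, "guidance_embeds", False):
        return torch.full(
            (model_input.shape[0],), guidance_scale, device=model_input.device
        )
    # Flux-schnell and similar checkpoints take no guidance embedding.
    return None
```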
-
- 21 Jul, 2024 1 commit
-
-
Sayak Paul authored
* SD3 training fixes Co-authored-by: bghira <59658056+bghira@users.noreply.github.com> * rewrite noise addition part to respect the eqn. * style * Update examples/dreambooth/README_sd3.md Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com> --------- Co-authored-by: bghira <59658056+bghira@users.noreply.github.com> Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
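The equation in question is the rectified-flow interpolation between clean latents and noise; a self-contained sketch:

```python
import torch

def add_noise(model_input, noise, sigmas):
    # x_t = (1 - sigma_t) * x_0 + sigma_t * eps  (rectified-flow interpolation)
    return (1.0 - sigmas) * model_input + sigmas * noise

x0 = torch.randn(2, 16, 64, 64)         # clean latents
eps = torch.randn_like(x0)              # noise
sigmas = torch.rand(2, 1, 1, 1)         # per-sample sigma, broadcast over latent dims
noisy = add_noise(x0, eps, sigmas)
```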
-
- 05 Jul, 2024 1 commit
-
-
apolinário authored
* Improve trainer model cards * Update train_dreambooth_sd3.py * Update train_dreambooth_lora_sd3.py * add link to adapters loading doc * Update train_dreambooth_lora_sd3.py --------- Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
- 24 Jun, 2024 1 commit
-
-
Tolga Cangöz authored
* Fix typos & improve contributing page * `make style && make quality` * fix typos * Fix typo --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 20 Jun, 2024 1 commit
-
-
satani99 authored
* Update train_dreambooth_lora_sd3.py * Update train_dreambooth_lora_sd3.py * Update train_dreambooth_sd3.py --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 19 Jun, 2024 1 commit
-
-
Sayak Paul authored
* change to logit_normal as the weighting scheme * sensible default note
-
- 18 Jun, 2024 2 commits
-
-
Sayak Paul authored
refactor the density and weighting utilities.
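A usage sketch of these utilities as they exist in `diffusers.training_utils` today, using the logit_normal scheme adopted in the commit above; the concrete values are illustrative:

```python
import torch
from diffusers.training_utils import (
    compute_density_for_timestep_sampling,
    compute_loss_weighting_for_sd3,
)

bsz = 4
# Sample the timestep density with the logit_normal weighting scheme.
u = compute_density_for_timestep_sampling(
    weighting_scheme="logit_normal",
    batch_size=bsz,
    logit_mean=0.0,
    logit_std=1.0,
    mode_scale=1.29,
)
# Per-sample loss weighting for the chosen scheme (ones for logit_normal).
sigmas = torch.rand(bsz)
weighting = compute_loss_weighting_for_sd3(weighting_scheme="logit_normal", sigmas=sigmas)
```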
-
Bagheera authored
Co-authored-by: bghira <bghira@users.github.com> Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
-