- 25 Nov, 2025 1 commit
-
-
Sayak Paul authored
* add vae
* Initial commit for Flux 2 Transformer implementation
* add pipeline part
* small edits to the pipeline and conversion
* update conversion script
* fix
* up up
* finish pipeline
* Remove Flux IP Adapter logic for now
* Remove deprecated 3D id logic
* Remove ControlNet logic for now
* Add link to ViT-22B paper as reference for parallel transformer blocks such as the Flux 2 single stream block
* update pipeline
* Don't use biases for input projs and output AdaNorm
* up
* Remove bias for double stream block text QKV projections
* Add script to convert Flux 2 transformer to diffusers
* make style and make quality
* fix a few things.
* allow sft files to go.
* fix image processor
* fix batch
* style a bit
* Fix some bugs in Flux 2 transformer implementation
* Fix dummy input preparation and fix some test bugs
* fix dtype casting in timestep guidance module.
* resolve conflicts.
* remove ip adapter stuff.
* Fix Flux 2 transformer consistency test
* Fix bug in Flux2TransformerBlock (double stream block)
* Get remaining Flux 2 transformer tests passing
* make style; make quality; make fix-copies
* remove stuff.
* fix type annotation.
* remove unneeded stuff from tests
* tests
* up
* add sf support
* Remove unused IP Adapter and ControlNet logic from transformer (#9)
* copied from
* Apply suggestions from code review

Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>

* up
* Refactor Flux2Attention into separate classes for double stream and single stream attention
* Add _supports_qkv_fusion to AttentionModuleMixin to allow subclasses to disable QKV fusion
* Have Flux2ParallelSelfAttention inherit from AttentionModuleMixin with _supports_qkv_fusion=False
* Log debug message when calling fuse_projections on an AttentionModuleMixin subclass that does not support QKV fusion
* Address review comments
* Update src/diffusers/pipelines/flux2/pipeline_flux2.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* up
* Remove maybe_allow_in_graph decorators for Flux 2 transformer blocks (#12)
* up
* support ostris loras. (#13)
* up
* update schedule
* up
* up (#17)
* add training scripts (#16)

Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>

* model cpu offload in validation.
* add flux.2 readme
* add img2img and tests
* cpu offload in log validation
* Apply suggestions from code review
* fix
* up
* fixes
* remove i2i training tests for now.

---------

Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
Co-authored-by: linoytsaban <linoy@huggingface.co>

* up

---------

Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Daniel Gu <dgu8957@gmail.com>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-10-53-87-203.ec2.internal>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
Co-authored-by: linoytsaban <linoy@huggingface.co>
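For context, a minimal usage sketch of the pipeline this PR adds. `Flux2Pipeline` matches the `pipelines/flux2/pipeline_flux2.py` module referenced above; the checkpoint id and call arguments are assumptions, not confirmed defaults.

```python
import torch
from diffusers import Flux2Pipeline  # module added in this PR

# Checkpoint id and call arguments are assumptions, not confirmed defaults.
pipe = Flux2Pipeline.from_pretrained(
    "black-forest-labs/FLUX.2-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # matches the "model cpu offload" commits

image = pipe(
    prompt="a photo of a red panda reading a book",
    num_inference_steps=28,
    guidance_scale=4.0,
).images[0]
image.save("flux2_sample.png")
```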
-
- 07 Oct, 2025 1 commit
-
-
Linoy Tsaban authored
* fix bug when offload and cache_latents both enabled
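A sketch of the failure mode being fixed, assuming the usual diffusers training-script conventions (`vae`, `accelerator`, `args`): with `--offload`, the VAE sits on CPU between uses, so building the latent cache has to move it to the accelerator first and hand the memory back afterwards.

```python
# Sketch only: guard VAE placement when --offload and --cache_latents are
# both enabled. Variable names follow the diffusers training scripts.
import torch

latents_cache = []
if args.cache_latents:
    vae.to(accelerator.device)  # VAE may be parked on CPU when offloading
    with torch.no_grad():
        for batch in train_dataloader:
            pixel_values = batch["pixel_values"].to(
                accelerator.device, dtype=vae.dtype
            )
            latents_cache.append(vae.encode(pixel_values).latent_dist)
    if args.offload:
        vae.to("cpu")  # return the memory once the cache is built
        torch.cuda.empty_cache()
```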
-
- 03 Oct, 2025 1 commit
-
-
Linoy Tsaban authored
* make qwen and kontext uv compatible * add torchvision * add datasets, bitsandbytes, prodigyopt --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
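uv compatibility for a standalone example script is typically a PEP 723 inline-metadata header; a hypothetical sketch with the dependencies named in this commit (the exact pins in the repo may differ):

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "diffusers",
#     "transformers",
#     "accelerate",
#     "torchvision",
#     "datasets",
#     "bitsandbytes",
#     "prodigyopt",
# ]
# ///
# With a PEP 723 header like this at the top of an example script,
# `uv run <script>.py` resolves its dependencies automatically.
```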
-
- 19 Aug, 2025 1 commit
-
-
Sayak Paul authored
* post release v0.35.0 * quality
-
- 05 Aug, 2025 1 commit
-
-
Sayak Paul authored
* feat: support lora in qwen image and training script * up * add lora tests * fix * add tests * fix * reviewer feedback * up * Apply suggestions from code review Co-authored-by: Aryan <aryan@huggingface.co> --------- Co-authored-by: Aryan <aryan@huggingface.co>
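A hedged sketch of consuming a LoRA produced by the new training script. `load_lora_weights` is the standard diffusers entry point; the model id and adapter repo below are placeholders.

```python
import torch
from diffusers import DiffusionPipeline

# Model id and LoRA repo are placeholders for illustration.
pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("your-hub-id/qwen-image-lora")  # trained adapter
image = pipe("a sks dog in a bucket", num_inference_steps=30).images[0]
image.save("qwen_lora_sample.png")
```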
-
- 29 Jul, 2025 1 commit
-
-
Álvaro Somoza authored
* login * more logins * uploads * missed login * another missed login * downloads * examples and more logins * fix * setup * Apply style fixes * fix * Apply style fixes
-
- 16 Jul, 2025 1 commit
-
-
Sayak Paul authored
* add an offload utility that can be used as a context manager. * update --------- Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
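A minimal sketch of what such an offload context manager can look like; the helper name and signature here are hypothetical, not the exact utility added in this commit.

```python
import contextlib
import torch

@contextlib.contextmanager
def offload_after_use(*modules, device="cuda"):
    """Hypothetical helper: keep modules on the accelerator only for the
    duration of the block, then park them on CPU and free the cache."""
    for m in modules:
        m.to(device)
    try:
        yield
    finally:
        for m in modules:
            m.to("cpu")
        torch.cuda.empty_cache()

# usage: encode prompts, then immediately release the text encoders
# with offload_after_use(text_encoder_one, text_encoder_two):
#     prompt_embeds = encode_prompt(...)
```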
-
- 26 Jun, 2025 1 commit
-
-
Sayak Paul authored
* post release v0.34.0 * code quality --------- Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 18 Jun, 2025 1 commit
-
-
Leo Jiang authored
* [training] add ds support to lora hidream * Apply style fixes --------- Co-authored-by: J石页 <jiangshuo9@h-partners.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
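DeepSpeed in the diffusers examples is driven through accelerate. A sketch of enabling ZeRO stage 2 programmatically (most users set the same options via `accelerate config`); the values are illustrative.

```python
# Illustrative values; most users configure this via `accelerate config`.
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin

ds_plugin = DeepSpeedPlugin(
    zero_stage=2,
    gradient_accumulation_steps=4,
    gradient_clipping=1.0,
)
accelerator = Accelerator(mixed_precision="bf16", deepspeed_plugin=ds_plugin)
```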
-
- 08 May, 2025 1 commit
-
-
Linoy Tsaban authored
* add lora_alpha and lora_dropout * Apply style fixes * revert lora_alpha until #11324 is merged * Apply style fixes * empty commit --------- Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
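These flags map onto peft's `LoraConfig`, which the dreambooth scripts use to wrap the transformer. A sketch with illustrative `target_modules`:

```python
from peft import LoraConfig

transformer_lora_config = LoraConfig(
    r=args.rank,
    lora_alpha=args.lora_alpha,    # effective scaling = lora_alpha / r
    lora_dropout=args.lora_dropout,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],  # illustrative
)
transformer.add_adapter(transformer_lora_config)
```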
-
- 05 May, 2025 1 commit
-
-
Sayak Paul authored
* feat: enable quantization for hidream lora training. * better handle compute dtype. * finalize. * fix dtype. --------- Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
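A sketch of the pattern, assuming the diffusers bitsandbytes integration: quantize the frozen base weights to NF4 with an explicit compute dtype, then train the LoRA on top. The class and checkpoint names are assumptions.

```python
import torch
from diffusers import BitsAndBytesConfig, HiDreamImageTransformer2DModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # the "compute dtype" handled here
)
transformer = HiDreamImageTransformer2DModel.from_pretrained(
    "HiDream-ai/HiDream-I1-Dev",  # assumed checkpoint id
    subfolder="transformer",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)
# LoRA adapters are then added on top of the frozen, quantized base.
```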
-
- 01 May, 2025 1 commit
-
-
co63oc authored
* Fix typos in docs and comments * Apply style fixes --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 28 Apr, 2025 1 commit
-
-
Linoy Tsaban authored
remove unnecessary pipeline moving to cpu in validation Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 24 Apr, 2025 1 commit
-
-
Linoy Tsaban authored
* add pre-computation of prompt embeddings when custom prompts are used as well
* save model card even if model is not pushed to hub
* remove scheduler initialization from code example - not necessary anymore (it's now in the base model's config)
* add skip_final_inference - to allow running with validation but skipping the final loading of the pipeline with the lora weights, to reduce memory reqs
* pre encode validation prompt as well
* Update examples/dreambooth/train_dreambooth_lora_hidream.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* pre encode validation prompt as well
* Apply style fixes
* empty commit
* change default trained modules
* empty commit
* address comments + change encoding of validation prompt (before, it was only pre-encoded if custom prompts were provided, but it should be pre-encoded either way)
* Apply style fixes
* empty commit
* fix validation_embeddings definition
* fix final inference condition
* fix pipeline deletion in last inference
* Apply style fixes
* empty commit
* layers
* remove readme remarks on only pre-computing when instance prompt is provided and change example to 3d icons
* smol fix
* empty commit

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
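A sketch of the pre-computation pattern this commit extends: encode the instance and validation prompts once up front, then drop the text encoders for the rest of training. `encode_prompt` and `free_memory` stand in for the script's helpers.

```python
import torch

# Encode once, before the training loop; helper names are stand-ins.
with torch.no_grad():
    instance_embeds = encode_prompt(text_encoders, tokenizers, args.instance_prompt)
    validation_embeds = None
    if args.validation_prompt is not None:
        validation_embeds = encode_prompt(
            text_encoders, tokenizers, args.validation_prompt
        )

# The text encoders are no longer needed during training.
del text_encoders, tokenizers
free_memory()  # gc.collect() + empty_cache(), as in the training utilities
```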
-
- 22 Apr, 2025 1 commit
-
-
Linoy Tsaban authored
* initial commit
* Update examples/dreambooth/train_dreambooth_lora_hidream.py

Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>

* move prompt embeds, pooled embeds outside
* Update examples/dreambooth/train_dreambooth_lora_hidream.py

Co-authored-by: hlky <hlky@hlky.ac>

* fix import
* fix import and tokenizer 4, text encoder 4 loading
* te
* prompt embeds
* fix naming
* shapes
* initial commit to add HiDreamImageLoraLoaderMixin
* fix init
* add tests
* loader
* fix model input
* add code example to readme
* fix default max length of text encoders
* prints
* nullify training cond in unpatchify for temp fix to incompatible shaping of transformer output during training
* smol fix
* unpatchify
* fix validation
* flip pred and loss
* fix shift!!!
* revert unpatchify changes (for now)
* smol fix
* Apply style fixes
* workaround moe training
* remove prints
* to reduce some memory, keep vae in `weight_dtype` same as we have for flux (as it's the same vae) https://github.com/huggingface/diffusers/blob/bbd0c161b55ba2234304f1e6325832dd69c60565/examples/dreambooth/train_dreambooth_lora_flux.py#L1207
* refactor to align with HiDream refactor
* add support for cpu offloading of text encoders
* Apply style fixes
* adjust lr and rank for train example
* fix copies
* Apply style fixes
* update README
* fix license
* keep prompt2,3,4 as None in validation
* remove reverse ode comment
* Update examples/dreambooth/train_dreambooth_lora_hidream.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* vae offload change
* fix text encoder offloading
* Apply style fixes
* cleaner to_kwargs
* fix module name in copied from
* add requirements
* fix offloading
* update transformers version in reqs
* try AutoTokenizer
* Apply style fixes
* empty commit
* Delete tests/lora/test_lora_layers_hidream.py
* change tokenizer_4 to load with AutoTokenizer as well
* make text_encoder_four and tokenizer_four configurable
* save model card
* revert T5
* fix test
* remove non diffusers lumina2 conversion

---------

Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 15 Apr, 2025 1 commit
-
-
Sayak Paul authored
* post release * update * fix deprecations * remaining * update --------- Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 09 Apr, 2025 1 commit
-
-
Dhruv Nair authored
* update
-
- 04 Mar, 2025 1 commit
-
-
Alexey Zolotenkov authored
* Fix seed initialization to handle args.seed = 0 correctly * Apply style fixes --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
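The underlying bug is plain truthiness; `set_seed` is accelerate's real helper, the rest mirrors the training scripts:

```python
from accelerate.utils import set_seed

# before (broken): 0 is falsy, so `--seed 0` silently ran unseeded
if args.seed:
    set_seed(args.seed)

# after (correct): only None means "no seed given"
if args.seed is not None:
    set_seed(args.seed)
```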
-
- 20 Feb, 2025 1 commit
-
-
Sayak Paul authored
* feat: lora support for Lumina2. * fix-copies. * updates * docs. * fix * add: training script. * tests * updates * major updates. * updates * fixes * docs. * updates
-
- 06 Feb, 2025 1 commit
-
-
Leo Jiang authored
* NPU Adaptation for Sana * [bugfix] NPU Adaptation for Sana --------- Co-authored-by: J石页 <jiangshuo9@h-partners.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 24 Jan, 2025 1 commit
-
-
Leo Jiang authored
* NPU Adaptation for Sana --------- Co-authored-by: J石页 <jiangshuo9@h-partners.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 21 Jan, 2025 1 commit
-
-
Sayak Paul authored
change licensing to 2025 from 2024.
-
- 15 Jan, 2025 1 commit
-
-
Leo Jiang authored
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
-
- 23 Dec, 2024 2 commits
-
-
Sayak Paul authored
* post release 0.32.0 * style
-
Sayak Paul authored
* sana lora training tests and misc. * remove push to hub * Update examples/dreambooth/train_dreambooth_lora_sana.py Co-authored-by: Aryan <aryan@huggingface.co> --------- Co-authored-by: Aryan <aryan@huggingface.co>
-
- 18 Dec, 2024 1 commit
-
-
Sayak Paul authored
* feat: lora support for SANA. * make fix-copies * rename test class. * attention_kwargs -> cross_attention_kwargs. * Revert "attention_kwargs -> cross_attention_kwargs." This reverts commit 23433bf9bccc12e0f2f55df26bae58a894e8b43b. * exhaust 119 max line limit * sana lora fine-tuning script. * readme * add a note about the supported models. * Apply suggestions from code review Co-authored-by: Aryan <aryan@huggingface.co> * style * docs for attention_kwargs. * remove lora_scale from pag pipeline. * copy fix --------- Co-authored-by: Aryan <aryan@huggingface.co>
-
- 19 Nov, 2024 1 commit
-
-
Linoy Tsaban authored
* memory improvement as done here: https://github.com/huggingface/diffusers/pull/9829 * fix bug * style --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 01 Nov, 2024 2 commits
-
-
Leo Jiang authored
* Improve NPU performance * [bugfix] bugfix for npu free memory * Reduce memory cost for flux training process --------- Co-authored-by: 蒋硕 <jiangshuo9@h-partners.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Boseong Jeon authored
Handle mixed precision and add unwrap Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
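"unwrap" refers to accelerate's `unwrap_model`: the DDP/compile wrapper has to be peeled off before extracting the LoRA state dict, and fp16-trained params are upcast before saving. A sketch in the style of the dreambooth scripts; variable names are assumptions.

```python
import torch
from diffusers.utils import convert_state_dict_to_diffusers
from peft.utils import get_peft_model_state_dict

transformer = accelerator.unwrap_model(transformer)  # peel off DDP wrapper
if args.mixed_precision == "fp16":
    transformer = transformer.to(torch.float32)  # upcast before saving
transformer_lora_layers = convert_state_dict_to_diffusers(
    get_peft_model_state_dict(transformer)
)
```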
-
- 31 Oct, 2024 1 commit
-
-
Sayak Paul authored
* use the lr when using 8bit adam. * remove lr as we pack it in params_to_optimize. --------- Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
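The point of the fix: once each param group carries its own `"lr"`, no separate global learning rate needs to be passed to the optimizer, and the 8-bit Adam branch (which previously missed the lr) gets it for free. A sketch with bitsandbytes:

```python
import bitsandbytes as bnb

# Each group carries its own lr, so no global lr argument is passed.
params_to_optimize = [
    {"params": transformer_lora_parameters, "lr": args.learning_rate},
    {"params": text_lora_parameters, "lr": args.text_encoder_lr},
]
optimizer = bnb.optim.AdamW8bit(
    params_to_optimize,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)
```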
-
- 28 Oct, 2024 2 commits
-
-
Linoy Tsaban authored
* make lora target modules configurable and change the default * style * make lora target modules configurable and change the default * fix bug when using prodigy and training te * fix mixed precision training as proposed in https://github.com/huggingface/diffusers/pull/9565 for full dreambooth as well * add test and notes * style * address Sayak's comments * style * fix test --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Biswaroop authored
[Fix] remove setting lr for T5 text encoder when using prodigy in flux dreambooth lora script (#9473) * fix: removed setting of text encoder lr for T5 as it's not being tuned --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
- 25 Oct, 2024 1 commit
-
-
Ina authored
* flux pipeline: readability enhancement.
-
- 22 Oct, 2024 1 commit
-
-
Sayak Paul authored
* post-release * style
-
- 15 Oct, 2024 1 commit
-
-
0x名無し authored
* fixed issue #9350, Tensor is deprecated * ran make style
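The deprecation in question is the `torch.Tensor` type constructor; the factory functions are preferred. The exact call sites fixed in #9350 may differ; this is illustrative:

```python
import torch

# deprecated pattern: the type constructor, dtype implicitly float32
sigmas = torch.Tensor([0.5, 0.25])

# preferred: explicit factory function with an explicit dtype
sigmas = torch.tensor([0.5, 0.25], dtype=torch.float32)
```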
-
- 28 Sep, 2024 1 commit
-
-
Sayak Paul authored
* fix: retain memory utility. * fix * quality * free_memory.
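`free_memory` consolidates the usual cleanup dance. A sketch of the pattern (the helper in diffusers' training utilities may differ in detail):

```python
import gc
import torch

def free_memory():
    """Release Python references and return cached blocks to the device."""
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        torch.cuda.ipc_collect()

# typical use after dropping large models:
# del text_encoder
# free_memory()
```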
-
- 15 Sep, 2024 1 commit
-
-
Linoy Tsaban authored
* add ostris trainer to README & add cache latents of vae * style * readme * add test for latent caching * add ostris noise scheduler https://github.com/ostris/ai-toolkit/blob/9ee1ef2a0a2a9a02b92d114a95f21312e5906e54/toolkit/samplers/custom_flowmatch_sampler.py#L95 * style * fix import * style * fix tests * style * --change upcasting of transformer? * update readme according to main * keep only latent caching * add configurable param for final saving of trained layers - --upcast_before_saving * style * Update examples/dreambooth/README_flux.md Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * use clear_objs_and_retain_memory from utilities * style --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
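How cached latents are consumed in the loop, following the Flux dreambooth script conventions (variable names are assumptions): each cached `latent_dist` is re-sampled per step, skipping the VAE forward entirely.

```python
# Sketch only; variable names follow the Flux dreambooth script.
if args.cache_latents:
    model_input = latents_cache[step].sample()
else:
    pixel_values = batch["pixel_values"].to(dtype=vae.dtype)
    model_input = vae.encode(pixel_values).latent_dist.sample()
# Flux-style VAE normalization: shift, then scale
model_input = (model_input - vae.config.shift_factor) * vae.config.scaling_factor
model_input = model_input.to(dtype=weight_dtype)
```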
-
- 14 Sep, 2024 1 commit
-
-
Leo Jiang authored
* Fix dtype error * [bugfix] Fixed the issue on sd3 dreambooth training --------- Co-authored-by: 蒋硕 <jiangshuo9@h-partners.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 11 Sep, 2024 1 commit
-
-
Sayak Paul authored
fix some fast gpu tests.
-
- 14 Aug, 2024 1 commit
-
-
Álvaro Somoza authored
* post release * fix quality
-