1. 25 Nov, 2025 1 commit
    • let's go Flux2 🚀 (#12711) · 5ffb73d4
      Sayak Paul authored
      
      
      * add vae
      
      * Initial commit for Flux 2 Transformer implementation
      
      * add pipeline part
      
      * small edits to the pipeline and conversion
      
      * update conversion script
      
      * fix
      
      * up up
      
      * finish pipeline
      
      * Remove Flux IP Adapter logic for now
      
      * Remove deprecated 3D id logic
      
      * Remove ControlNet logic for now
      
      * Add link to ViT-22B paper as reference for parallel transformer blocks such as the Flux 2 single stream block
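In a parallel transformer block (the ViT-22B design this commit references), the attention and MLP branches read the same normalized input and their outputs are summed, instead of running sequentially as in a standard pre-norm block. A minimal sketch of the idea, with hypothetical module names rather than the actual Flux 2 code:

```python
import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    """Attention and MLP share one LayerNorm and run in parallel (ViT-22B style)."""

    def __init__(self, dim: int, num_heads: int = 8, mlp_ratio: float = 4.0):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(dim, int(dim * mlp_ratio)),
            nn.GELU(),
            nn.Linear(int(dim * mlp_ratio), dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)                  # single shared norm
        attn_out, _ = self.attn(h, h, h)  # attention branch
        mlp_out = self.mlp(h)             # MLP branch sees the same input
        return x + attn_out + mlp_out     # one residual sum over both branches
```

Because both branches consume the same projection input, their linear layers can be fused and computed in one matmul, which is the efficiency argument made in the ViT-22B paper.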
      
      * update pipeline
      
      * Don't use biases for input projs and output AdaNorm
      
      * up
      
      * Remove bias for double stream block text QKV projections
      
      * Add script to convert Flux 2 transformer to diffusers
      
      * make style and make quality
      
      * fix a few things.
      
      * allow sft files to go.
      
      * fix image processor
      
      * fix batch
      
      * style a bit
      
      * Fix some bugs in Flux 2 transformer implementation
      
      * Fix dummy input preparation and fix some test bugs
      
      * fix dtype casting in timestep guidance module.
      
      * resolve conflicts.
      
      * remove ip adapter stuff.
      
      * Fix Flux 2 transformer consistency test
      
      * Fix bug in Flux2TransformerBlock (double stream block)
      
      * Get remaining Flux 2 transformer tests passing
      
      * make style; make quality; make fix-copies
      
      * remove stuff.
      
      * fix type annotation.
      
      * remove unneeded stuff from tests
      
      * tests
      
      * up
      
      * up
      
      * add sf support
      
      * Remove unused IP Adapter and ControlNet logic from transformer (#9)
      
      * copied from
      
      * Apply suggestions from code review
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
      
      * up
      
      * up
      
      * up
      
      * up
      
      * up
      
      * Refactor Flux2Attention into separate classes for double stream and single stream attention
      
      * Add _supports_qkv_fusion to AttentionModuleMixin to allow subclasses to disable QKV fusion
      
      * Have Flux2ParallelSelfAttention inherit from AttentionModuleMixin with _supports_qkv_fusion=False
      
      * Log debug message when calling fuse_projections on a AttentionModuleMixin subclass that does not support QKV fusion
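The opt-out mechanism described in the commits above can be sketched as a class-level flag checked at fusion time; names follow the commit messages, but the actual diffusers implementation may differ in detail:

```python
import logging

logger = logging.getLogger(__name__)

class AttentionModuleMixin:
    # Subclasses whose projection layout cannot be fused set this to False.
    _supports_qkv_fusion = True

    def fuse_projections(self):
        if not self._supports_qkv_fusion:
            # Fusion is an optimization, not a requirement, so skip quietly.
            logger.debug(
                "%s does not support QKV fusion; skipping.", type(self).__name__
            )
            return
        self._fuse_qkv()

    def _fuse_qkv(self):
        self.fused = True  # stand-in for concatenating the Q/K/V weights

class Flux2ParallelSelfAttention(AttentionModuleMixin):
    _supports_qkv_fusion = False
```

Calling `fuse_projections()` on `Flux2ParallelSelfAttention` then becomes a logged no-op instead of an error, so generic optimization passes can run over all attention modules without special-casing.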
      
      * Address review comments
      
      * Update src/diffusers/pipelines/flux2/pipeline_flux2.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * up
      
      * Remove maybe_allow_in_graph decorators for Flux 2 transformer blocks (#12)
      
      * up
      
      * support ostris loras. (#13)
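Supporting LoRAs from a third-party trainer like ostris's ai-toolkit is typically a state-dict key remap so the checkpoint's naming matches diffusers' module paths. A hypothetical sketch of that pattern (the prefixes shown are illustrative; the real converter may handle more cases):

```python
def convert_trainer_lora_keys(state_dict: dict) -> dict:
    # Hypothetical remap: strip a trainer-side "diffusion_model." prefix so
    # keys line up with diffusers' "transformer." module naming.
    return {
        key.replace("diffusion_model.", "transformer.", 1): value
        for key, value in state_dict.items()
    }
```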
      
      * up
      
      * update schedule
      
      * up
      
      * up (#17)
      
      * add training scripts (#16)
      
      * add training scripts
      Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
      
      * model cpu offload in validation.
      
      * add flux.2 readme
      
      * add img2img and tests
      
      * cpu offload in log validation
      
      * Apply suggestions from code review
      
      * fix
      
      * up
      
      * fixes
      
      * remove i2i training tests for now.
      
      ---------
      Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
      Co-authored-by: linoytsaban <linoy@huggingface.co>
      
      * up
      
      ---------
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
      Co-authored-by: Daniel Gu <dgu8957@gmail.com>
      Co-authored-by: yiyi@huggingface.co <yiyi@ip-10-53-87-203.ec2.internal>
      Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
      Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
      Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
      Co-authored-by: linoytsaban <linoy@huggingface.co>
  2. 12 Nov, 2025 1 commit
  3. 29 Jul, 2025 1 commit
  4. 08 Jul, 2025 1 commit
  5. 26 Jun, 2025 1 commit
  6. 17 Jun, 2025 1 commit
  7. 27 May, 2025 1 commit
  8. 19 May, 2025 1 commit
  9. 01 May, 2025 1 commit
  10. 25 Nov, 2024 1 commit
  11. 28 Oct, 2024 1 commit
  12. 15 Sep, 2024 1 commit
  13. 05 Sep, 2024 1 commit
  14. 18 Aug, 2024 1 commit
  15. 12 Aug, 2024 1 commit
    • [Flux Dreambooth LoRA] - te bug fixes & updates (#9139) · 413ca29b
      Linoy Tsaban authored
      * add requirements + fix link to bghira's guide
      
      * text encoder training fixes
      
      * text encoder training fixes
      
      * text encoder training fixes
      
      * text encoder training fixes
      
      * style
      
      * add tests
      
      * fix encode_prompt call
      
      * style
      
      * unpack_latents test
      
      * fix lora saving
      
      * remove default val for max_sequence_length in encode_prompt
      
      * remove default val for max_sequence_length in encode_prompt
      
      * style
      
      * testing
      
      * style
      
      * testing
      
      * testing
      
      * style
      
      * fix sizing issue
      
      * style
      
      * revert scaling
      
      * style
      
      * style
      
      * scaling test
      
      * style
      
      * scaling test
      
      * remove model pred operation left from pre-conditioning
      
      * remove model pred operation left from pre-conditioning
      
      * fix trainable params
      
      * remove te2 from casting
      
      * transformer to accelerator
      
      * remove prints
      
      * empty commit
  16. 09 Aug, 2024 1 commit
    • [Flux] Dreambooth LoRA training scripts (#9086) · 65e30907
      Linoy Tsaban authored
      
      
      * initial commit - dreambooth for flux
      
      * update transformer to be FluxTransformer2DModel
      
      * update training loop and validation inference
      
      * fix sd3->flux docs
      
      * add guidance handling, not sure if it makes sense(?)
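Flux's guidance-distilled transformer takes the guidance scale as a per-sample tensor input (embedded like a timestep) rather than running two forward passes for classifier-free guidance. Preparing that input in a training or inference loop looks roughly like this sketch, not the script's exact code:

```python
import torch

def prepare_guidance(guidance_scale: float, batch_size: int,
                     device: str = "cpu") -> torch.Tensor:
    # One scalar per sample; the transformer embeds it alongside the timestep.
    return torch.full(
        (batch_size,), guidance_scale, device=device, dtype=torch.float32
    )
```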
      
      * initial dreambooth lora commit
      
      * fix text_ids in compute_text_embeddings
      
      * fix imports of static methods
      
      * fix pipeline loading in readme, remove auto1111 docs for now
      
      * fix pipeline loading in readme, remove auto1111 docs for now, remove some irrelevant text_encoder_3 refs
      
      * Update examples/dreambooth/train_dreambooth_flux.py
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      
      * fix te2 loading and remove te2 refs from text encoder training
      
      * fix tokenizer_2 initialization
      
      * remove text_encoder training refs from lora script (for now)
      
      * try with vae in bfloat16, fix model hook save
      
      * fix tokenization
      
      * fix static imports
      
      * fix CLIP import
      
      * remove text_encoder training refs (for now) from lora script
      
      * fix minor bug in encode_prompt, add guidance def in lora script, ...
      
      * fix unpack_latents args
      
      * fix license in readme
      
      * add "none" to weighting_scheme options for uniform sampling
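With `weighting_scheme="none"`, the density value `u` used to pick timesteps is drawn uniformly instead of from a logit-normal distribution. A sketch of that dispatch, mirroring (but not reproducing) the training scripts' density-sampling helper:

```python
import torch

def sample_density(weighting_scheme: str, batch_size: int,
                   logit_mean: float = 0.0, logit_std: float = 1.0) -> torch.Tensor:
    """Return u in [0, 1], later mapped onto the noise schedule's timesteps."""
    if weighting_scheme == "logit_normal":
        # Biases sampling toward mid-schedule timesteps.
        u = torch.sigmoid(torch.randn(batch_size) * logit_std + logit_mean)
    elif weighting_scheme == "none":
        # Uniform sampling over the whole schedule.
        u = torch.rand(batch_size)
    else:
        raise ValueError(f"unknown weighting_scheme: {weighting_scheme}")
    return u
```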
      
      * style
      
      * adapt model saving - remove text encoder refs
      
      * adapt model loading - remove text encoder refs
      
      * initial commit for readme
      
      * Update examples/dreambooth/train_dreambooth_lora_flux.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update examples/dreambooth/train_dreambooth_lora_flux.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * fix vae casting
      
      * remove precondition_outputs
      
      * readme
      
      * readme
      
      * style
      
      * readme
      
      * readme
      
      * update weighting scheme default & docs
      
      * style
      
      * add text_encoder training to lora script, change vae_scale_factor value in both
      
      * style
      
      * text encoder training fixes
      
      * style
      
      * update readme
      
      * minor fixes
      
      * fix te params
      
      * fix te params
      
      ---------
      Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
  17. 03 Aug, 2024 1 commit
  18. 21 Jul, 2024 1 commit
  19. 25 Jun, 2024 1 commit
  20. 24 Jun, 2024 1 commit
  21. 19 Jun, 2024 1 commit
  22. 17 Jun, 2024 1 commit
  23. 12 Jun, 2024 2 commits