1. 28 Nov, 2025 2 commits
  2. 27 Nov, 2025 2 commits
  3. 26 Nov, 2025 6 commits
• Support unittest for Z-image (#12715) · e6d46123
      Jerry Wu authored
      
      
      * Add Support for Z-Image.
      
      * Reformatting with make style, black & isort.
      
      * Remove init, Modify import utils, Merge forward in transformers block, Remove once func in pipeline.
      
      * modified main model forward, freqs_cis left
      
      * refactored to add B dim
      
      * fixed stack issue
      
      * fixed modulation bug
      
      * fixed modulation bug
      
      * fix bug
      
      * remove value_from_time_aware_config
      
      * styling
      
* Fix neg embed and divide (/) bug; Reuse pad zero tensor; Turn cat -> repeat; Add hint for attn processor.
      
      * Replace padding with pad_sequence; Add gradient checkpointing.
      
      * Fix flash_attn3 in dispatch attn backend by _flash_attn_forward, replace its origin implement; Add DocString in pipeline for that.
      
      * Fix Docstring and Make Style.
      
      * Revert "Fix flash_attn3 in dispatch attn backend by _flash_attn_forward, replace its origin implement; Add DocString in pipeline for that."
      
      This reverts commit fbf26b7ed11d55146103c97740bad4a5f91744e0.
      
      * update z-image docstring
      
      * Revert attention dispatcher
      
      * update z-image docstring
      
      * styling
      
* Restore attention_dispatch.py to its original implementation; a later commit will add fa3 compatibility.
      
* Fix previous bug, and support passing pre-encoded prompt_embeds as a List of torch Tensors.
      
* Remove einops dependency.
      
      * remove redundant imports & make fix-copies
      
      * fix import
      
      * Support for num_images_per_prompt>1; Remove redundant unquote variables.
      
      * Fix bugs for num_images_per_prompt with actual batch.
      
      * Add unit tests for Z-Image.
      
* Refine unit tests and skip cases that need a separate test env; Fix unit-test compatibility in model, mostly precision formatting.
      
* Add clean env for separate test_save_load_float16 test; Add note; Styling.
      
      * Update dtype mentioned by yiyi.
      
      ---------
Co-authored-by: liudongyang <liudongyang0114@gmail.com>
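One bullet in the commit above replaces manual padding with `pad_sequence`. The Z-Image code itself is not shown in this log, but the general PyTorch pattern for batching variable-length prompt embeddings (shapes here are illustrative) looks like this:

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# two "prompt embedding" sequences of different lengths, each (seq_len, dim)
embeds = [torch.ones(3, 4), torch.ones(5, 4)]

# pad to the longest sequence and stack into a (batch, max_len, dim) tensor
batch = pad_sequence(embeds, batch_first=True, padding_value=0.0)
print(batch.shape)  # torch.Size([2, 5, 4])
```

The padded positions (rows 3-4 of the first sequence) are filled with `padding_value`, which is why the commit can reuse a single zero tensor for padding.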
• Improve docstrings and type hints in scheduling_dpmsolver_multistep.py (#12710) · a88a7b4f
      David El Malih authored
      * Improve docstrings and type hints in multiple diffusion schedulers
      
      * docs: update Imagen Video paper link to Hugging Face Papers.
• [docs] put autopipeline after overview and hunyuanimage in images (#12548) · c8656ed7
      Sayak Paul authored
      put autopipeline after overview and hunyuanimage in images
• [docs] Correct flux2 links (#12716) · 94c9613f
      Sayak Paul authored
      * fix links
      
      * up
• [lora]: Fix Flux2 LoRA NaN test (#12714) · b91e8c0d
      Sayak Paul authored
      
      
      * up
      
      * Update tests/lora/test_lora_layers_flux2.py
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
      
      ---------
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
• Andrei Filatov
  4. 25 Nov, 2025 3 commits
• let's go Flux2 🚀 (#12711) · 5ffb73d4
      Sayak Paul authored
      
      
      * add vae
      
      * Initial commit for Flux 2 Transformer implementation
      
      * add pipeline part
      
      * small edits to the pipeline and conversion
      
      * update conversion script
      
      * fix
      
      * up up
      
      * finish pipeline
      
      * Remove Flux IP Adapter logic for now
      
      * Remove deprecated 3D id logic
      
      * Remove ControlNet logic for now
      
      * Add link to ViT-22B paper as reference for parallel transformer blocks such as the Flux 2 single stream block
      
      * update pipeline
      
      * Don't use biases for input projs and output AdaNorm
      
      * up
      
      * Remove bias for double stream block text QKV projections
      
      * Add script to convert Flux 2 transformer to diffusers
      
      * make style and make quality
      
      * fix a few things.
      
      * allow sft files to go.
      
      * fix image processor
      
      * fix batch
      
      * style a bit
      
      * Fix some bugs in Flux 2 transformer implementation
      
      * Fix dummy input preparation and fix some test bugs
      
      * fix dtype casting in timestep guidance module.
      
* resolve conflicts.
      
      * remove ip adapter stuff.
      
      * Fix Flux 2 transformer consistency test
      
      * Fix bug in Flux2TransformerBlock (double stream block)
      
      * Get remaining Flux 2 transformer tests passing
      
      * make style; make quality; make fix-copies
      
      * remove stuff.
      
* fix type annotation.
      
      * remove unneeded stuff from tests
      
      * tests
      
      * up
      
      * up
      
      * add sf support
      
      * Remove unused IP Adapter and ControlNet logic from transformer (#9)
      
      * copied from
      
      * Apply suggestions from code review
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
      
      * up
      
      * up
      
      * up
      
      * up
      
      * up
      
      * Refactor Flux2Attention into separate classes for double stream and single stream attention
      
      * Add _supports_qkv_fusion to AttentionModuleMixin to allow subclasses to disable QKV fusion
      
      * Have Flux2ParallelSelfAttention inherit from AttentionModuleMixin with _supports_qkv_fusion=False
      
* Log debug message when calling fuse_projections on an AttentionModuleMixin subclass that does not support QKV fusion
      
      * Address review comments
      
      * Update src/diffusers/pipelines/flux2/pipeline_flux2.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * up
      
      * Remove maybe_allow_in_graph decorators for Flux 2 transformer blocks (#12)
      
      * up
      
      * support ostris loras. (#13)
      
      * up
      
* update schedule
      
      * up
      
      * up (#17)
      
      * add training scripts (#16)
      
      * add training scripts
Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
      
      * model cpu offload in validation.
      
      * add flux.2 readme
      
      * add img2img and tests
      
      * cpu offload in log validation
      
      * Apply suggestions from code review
      
      * fix
      
      * up
      
      * fixes
      
      * remove i2i training tests for now.
      
      ---------
Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
Co-authored-by: linoytsaban <linoy@huggingface.co>
      
      * up
      
      ---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Daniel Gu <dgu8957@gmail.com>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-10-53-87-203.ec2.internal>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
Co-authored-by: linoytsaban <linoy@huggingface.co>
• Add Support for Z-Image Series (#12703) · 4088e8a8
      Jerry Wu authored
      
      
      * Add Support for Z-Image.
      
      * Reformatting with make style, black & isort.
      
      * Remove init, Modify import utils, Merge forward in transformers block, Remove once func in pipeline.
      
      * modified main model forward, freqs_cis left
      
      * refactored to add B dim
      
      * fixed stack issue
      
      * fixed modulation bug
      
      * fixed modulation bug
      
      * fix bug
      
      * remove value_from_time_aware_config
      
      * styling
      
* Fix neg embed and divide (/) bug; Reuse pad zero tensor; Turn cat -> repeat; Add hint for attn processor.
      
      * Replace padding with pad_sequence; Add gradient checkpointing.
      
      * Fix flash_attn3 in dispatch attn backend by _flash_attn_forward, replace its origin implement; Add DocString in pipeline for that.
      
      * Fix Docstring and Make Style.
      
      * Revert "Fix flash_attn3 in dispatch attn backend by _flash_attn_forward, replace its origin implement; Add DocString in pipeline for that."
      
      This reverts commit fbf26b7ed11d55146103c97740bad4a5f91744e0.
      
      * update z-image docstring
      
      * Revert attention dispatcher
      
      * update z-image docstring
      
      * styling
      
* Restore attention_dispatch.py to its original implementation; a later commit will add fa3 compatibility.
      
* Fix previous bug, and support passing pre-encoded prompt_embeds as a List of torch Tensors.
      
* Remove einops dependency.
      
      * remove redundant imports & make fix-copies
      
      * fix import
      
      ---------
Co-authored-by: liudongyang <liudongyang0114@gmail.com>
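The commit above also adds gradient checkpointing. As a generic illustration (not the Z-Image module itself), PyTorch's `torch.utils.checkpoint` trades activation memory for recompute during backward:

```python
import torch
from torch.utils.checkpoint import checkpoint

layer = torch.nn.Linear(8, 8)
x = torch.randn(2, 8, requires_grad=True)

# activations inside `layer` are recomputed during backward instead of stored
y = checkpoint(layer, x, use_reentrant=False)
y.sum().backward()
print(x.grad.shape)  # torch.Size([2, 8])
```

Diffusers models typically expose this via `enable_gradient_checkpointing()`, which wraps each transformer block in a call like the one above.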
• fix typo in docs (#12675) · d33d9f67
      Junsong Chen authored
      
      
      * fix typo in docs
      
      * Update docs/source/en/api/pipelines/sana_video.md
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
      
      ---------
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
  5. 24 Nov, 2025 5 commits
• Fix variable naming typos in community FluxControlNetFillInpaintPipeline (#12701) · dde8754b
      sq authored
      - Fixed variable naming typos (maskkk -> mask_fill, mask_imagee -> mask_image_fill, masked_imagee -> masked_image_fill, masked_image_latentsss -> masked_latents_fill)
      
      These changes improve code readability without affecting functionality.
• [i18n-pt] Fix grammar and expand Portuguese documentation (#12598) · fbcd3ba6
      cdutr authored
      * Updates Portuguese documentation for Diffusers library
      
      Enhances the Portuguese documentation with:
      - Restructured table of contents for improved navigation
      - Added placeholder page for in-translation content
      - Refined language and improved readability in existing pages
      - Introduced a new page on basic Stable Diffusion performance guidance
      
      Improves overall documentation structure and user experience for Portuguese-speaking users
      
      * Removes untranslated sections from Portuguese documentation
      
      Cleans up the Portuguese documentation table of contents by removing placeholder sections marked as "Em tradução" (In translation)
      
      Removes the in_translation.md file and associated table of contents entries for sections that are not yet translated, improving documentation clarity
• [core] support sage attention + FA2 through `kernels` (#12439) · d176f61f
      Sayak Paul authored
      * up
      
      * support automatic dispatch.
      
* disable compile support for now.
      
      * up
      
      * flash too.
      
      * document.
      
      * up
      
      * up
      
      * up
      
      * up
• bugfix: fix chrono-edit context parallel (#12660) · 354d35ad
      DefTruth authored
      
      
      * bugfix: fix chrono-edit context parallel
      
      * bugfix: fix chrono-edit context parallel
      
      * Update src/diffusers/models/transformers/transformer_chronoedit.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Update src/diffusers/models/transformers/transformer_chronoedit.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Clean up comments in transformer_chronoedit.py
      
      Removed unnecessary comments regarding parallelization in cross-attention.
      
      * fix style
      
      * fix qc
      
      ---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
• Add FluxLoraLoaderMixin to Fibo pipeline (#12688) · 544ba677
      SwayStar123 authored
      Update pipeline_bria_fibo.py
  6. 21 Nov, 2025 1 commit
• Improve docstrings and type hints in scheduling_lms_discrete.py (#12678) · 6f1042e3
      David El Malih authored
      * Enhance type hints and docstrings in LMSDiscreteScheduler class
      
      Updated type hints for function parameters and return types to improve code clarity and maintainability. Enhanced docstrings for several methods, providing clearer descriptions of their functionality and expected arguments. Notable changes include specifying Literal types for certain parameters and ensuring consistent return type annotations across the class.
      
      * docs: Add specific paper reference to `_convert_to_karras` docstring.
      
      * Refactor `_convert_to_karras` docstring in DPMSolverSDEScheduler to include detailed descriptions and a specific paper reference, enhancing clarity and documentation consistency.
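The commit above adds a paper reference to the `_convert_to_karras` docstring. That method implements the noise schedule from Karras et al. (2022), which interpolates linearly in sigma^(1/rho) space; a minimal standalone sketch of the formula (this is not the diffusers code itself):

```python
import numpy as np

def karras_sigmas(sigma_min: float, sigma_max: float, num_steps: int, rho: float = 7.0):
    # Karras et al. (2022): interpolate between sigma_max and sigma_min
    # in sigma^(1/rho) space, then raise back to the rho-th power
    ramp = np.linspace(0, 1, num_steps)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    return (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho

sigmas = karras_sigmas(0.1, 10.0, 5)
# descending schedule from sigma_max down to sigma_min
```

With rho=7 (the paper's default for image models), the schedule concentrates steps at low noise levels, which is why schedulers expose `use_karras_sigmas` as an option.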
  7. 19 Nov, 2025 5 commits
  8. 18 Nov, 2025 1 commit
  9. 17 Nov, 2025 4 commits
  10. 15 Nov, 2025 2 commits
  11. 14 Nov, 2025 2 commits
  12. 13 Nov, 2025 7 commits