1. 30 Oct, 2025 1 commit
  2. 28 Oct, 2025 4 commits
    • galbria's avatar
      Bria fibo (#12545) · 84e16575
      galbria authored
      
      
      * Bria FIBO pipeline
      
* style fixes
      
      * fix CR
      
      * Refactor BriaFibo classes and update pipeline parameters
      
      - Updated BriaFiboAttnProcessor and BriaFiboAttention classes to reflect changes from Flux equivalents.
      - Modified the _unpack_latents method in BriaFiboPipeline to improve clarity.
      - Increased the default max_sequence_length to 3000 and added a new optional parameter do_patching.
      - Cleaned up test_pipeline_bria_fibo.py by removing unused imports and skipping unsupported tests.
      
      * edit the docs of FIBO
      
      * Remove unused BriaFibo imports and update CPU offload method in BriaFiboPipeline
      
      * Refactor FIBO classes to BriaFibo naming convention
      
      - Updated class names from FIBO to BriaFibo for consistency across the module.
      - Modified instances of FIBOEmbedND, FIBOTimesteps, TextProjection, and TimestepProjEmbeddings to reflect the new naming.
      - Ensured all references in the BriaFiboTransformer2DModel are updated accordingly.
      
      * Add BriaFiboTransformer2DModel import to transformers module
      
      * Remove unused BriaFibo imports from modular pipelines and add BriaFiboTransformer2DModel and BriaFiboPipeline classes to dummy objects for enhanced compatibility with torch and transformers.
      
      * Update BriaFibo classes with copied documentation and fix import typo in pipeline module
      
      - Added documentation comments indicating the source of copied code in BriaFiboTransformerBlock and _pack_latents methods.
      - Corrected the import statement for BriaFiboPipeline in the pipelines module.
      
      * Remove unused BriaFibo imports from __init__.py to streamline modular pipelines.
      
      * Refactor documentation comments in BriaFibo classes to indicate inspiration from existing implementations
      
      - Updated comments in BriaFiboAttnProcessor, BriaFiboAttention, and BriaFiboPipeline to reflect that the code is inspired by other modules rather than copied.
      - Enhanced clarity on the origins of the methods to maintain proper attribution.
      
      * change Inspired by to Based on
      
      * add reference link and fix trailing whitespace
      
      * Add BriaFiboTransformer2DModel documentation and update comments in BriaFibo classes
      
      - Introduced a new documentation file for BriaFiboTransformer2DModel.
      - Updated comments in BriaFiboAttnProcessor, BriaFiboAttention, and BriaFiboPipeline to clarify the origins of the code, indicating copied sources for better attribution.
      
      ---------
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
      84e16575
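The `_pack_latents`/`_unpack_latents` methods touched above follow the Flux convention of folding each 2×2 spatial patch of the latent grid into one token. A minimal sketch of that round trip with plain nested lists (illustrative only; the actual diffusers methods operate on batched torch tensors):

```python
def pack_latents(latents):
    """Fold 2x2 spatial patches of a (C, H, W) latent grid into a
    (H//2 * W//2, C*4) token sequence, Flux-style."""
    c, h, w = len(latents), len(latents[0]), len(latents[0][0])
    tokens = []
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            token = []
            for ch in range(c):
                # flatten the 2x2 patch for this channel
                token += [latents[ch][i][j], latents[ch][i][j + 1],
                          latents[ch][i + 1][j], latents[ch][i + 1][j + 1]]
            tokens.append(token)
    return tokens


def unpack_latents(tokens, c, h, w):
    """Inverse of pack_latents: rebuild the (C, H, W) grid."""
    latents = [[[0] * w for _ in range(h)] for _ in range(c)]
    idx = 0
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            token = tokens[idx]
            idx += 1
            for ch in range(c):
                p = token[ch * 4:(ch + 1) * 4]
                latents[ch][i][j], latents[ch][i][j + 1] = p[0], p[1]
                latents[ch][i + 1][j], latents[ch][i + 1][j + 1] = p[2], p[3]
    return latents
```

The round trip is lossless, which is what makes the pack/unpack pair safe to refactor for clarity, as the commit above does.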
    • Wang, Yi's avatar
      fix crash if tiling mode is enabled (#12521) · dc622a95
      Wang, Yi authored
      
      
* fix crash if tiling mode is enabled
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
      
      * fmt
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
      
      ---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      dc622a95
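Tiled decoding of the kind this fix touches splits a large latent into overlapping tiles so each one fits in memory; crashes typically come from the last tile running past the edge. A hedged sketch of the tile-span computation (names are illustrative, not the diffusers internals):

```python
def tile_spans(size, tile, overlap):
    """Return (start, end) spans covering `size` with tiles of width
    `tile` that overlap by `overlap`; the last tile is clamped so it
    never runs past the end (a common source of tiling crashes)."""
    assert 0 <= overlap < tile
    if tile >= size:
        return [(0, size)]  # one tile is enough; nothing to split
    stride = tile - overlap
    spans = []
    start = 0
    while True:
        end = min(start + tile, size)
        spans.append((start, end))
        if end == size:
            break
        start += stride
        if start + tile > size:
            start = size - tile  # clamp the final tile inside bounds
    return spans
```

In diffusers, tiled VAE decoding is switched on via `vae.enable_tiling()`; the sketch above only shows why the edge tile needs clamping.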
    • G.O.D's avatar
      Improve pos embed for Flux.1 inference on Ascend NPU (#12534) · 303efd2b
      G.O.D authored
      
      
      improve pos embed for ascend npu
Co-authored-by: felix01.yu <felix01.yu@vipshop.com>
      303efd2b
    • Lev Novitskiy's avatar
Kandinsky 5 10 sec (NABLA support) (#12520) · 5afbcce1
      Lev Novitskiy authored
      
      
      * add transformer pipeline first version
      
      * updates
      
      * fix 5sec generation
      
      * rewrite Kandinsky5T2VPipeline to diffusers style
      
      * add multiprompt support
      
      * remove prints in pipeline
      
      * add nabla attention
      
      * Wrap Transformer in Diffusers style
      
      * fix license
      
      * fix prompt type
      
      * add gradient checkpointing and peft support
      
      * add usage example
      
* Update src/diffusers/pipelines/kandinsky5/pipeline_kandinsky.py (4 review-suggestion commits)
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
      
* Update src/diffusers/models/transformers/transformer_kandinsky.py
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
      
      * remove unused imports
      
      * add 10 second models support
      
* Update src/diffusers/pipelines/kandinsky5/pipeline_kandinsky.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * remove no_grad and simplified prompt paddings
      
* Update src/diffusers/pipelines/kandinsky5/pipeline_kandinsky.py (2 review-suggestion commits)
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * moved template to __init__
      
* Update src/diffusers/pipelines/kandinsky5/pipeline_kandinsky.py (2 review-suggestion commits)
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
* Update src/diffusers/models/transformers/transformer_kandinsky.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * moved sdps inside processor
      
      * remove oneline function
      
      * remove reset_dtype methods
      
      * Transformer: move all methods to forward
      
      * separated prompt encoding
      
* Update src/diffusers/models/transformers/transformer_kandinsky.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * refactoring
      
* Update src/diffusers/models/transformers/transformer_kandinsky.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
* refactoring according to https://github.com/huggingface/diffusers/commit/acabbc0033d4b4933fc651766a4aa026db2e6dc1

* Update src/diffusers/models/transformers/transformer_kandinsky.py (9 review-suggestion commits)
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
* Update src/diffusers/pipelines/kandinsky5/pipeline_kandinsky.py (19 review-suggestion commits)
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * fixed
      
      * style +copies
      
* Update src/diffusers/models/transformers/transformer_kandinsky.py
Co-authored-by: Charles <charles@huggingface.co>
      
      * more
      
      * Apply suggestions from code review
      
      * add lora loader doc
      
      * add compiled Nabla Attention
      
      * all needed changes for 10 sec models are added!
      
      * add docs
      
      * Apply style fixes
      
      * update docs
      
      * add kandinsky5 to toctree
      
      * add tests
      
      * fix tests
      
      * Apply style fixes
      
      * update tests
      
      ---------
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Charles <charles@huggingface.co>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
      5afbcce1
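NABLA-style sparse attention, added in the commit above, restricts computation to a subset of query/key blocks instead of the full attention matrix. A toy illustration with a fixed block-banded neighborhood (the real NABLA criterion is adaptive and lives in the Kandinsky 5 transformer; this only shows the mask shape and the sparsity bookkeeping):

```python
def block_banded_mask(n_blocks, band):
    """1 where a query block may attend to a key block, 0 elsewhere:
    each block sees itself and `band` neighbors on either side."""
    return [[1 if abs(q - k) <= band else 0 for k in range(n_blocks)]
            for q in range(n_blocks)]


def sparsity(mask):
    """Fraction of block pairs the sparse kernel gets to skip."""
    total = sum(len(row) for row in mask)
    kept = sum(sum(row) for row in mask)
    return 1 - kept / total
```

For long videos (the 10-second models above) the number of latent blocks grows, so the skipped fraction, and hence the speedup, grows with it.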
  3. 27 Oct, 2025 2 commits
  4. 24 Oct, 2025 1 commit
  5. 23 Oct, 2025 1 commit
  6. 22 Oct, 2025 3 commits
  7. 21 Oct, 2025 1 commit
  8. 20 Oct, 2025 1 commit
  9. 18 Oct, 2025 1 commit
  10. 17 Oct, 2025 1 commit
  11. 15 Oct, 2025 1 commit
  12. 05 Oct, 2025 1 commit
  13. 02 Oct, 2025 1 commit
  14. 30 Sep, 2025 1 commit
  15. 25 Sep, 2025 1 commit
  16. 24 Sep, 2025 2 commits
    • Aryan's avatar
      Context Parallel w/ Ring & Ulysses & Unified Attention (#11941) · dcb6dd9b
      Aryan authored
      
      
      * update
      
      * update
      
      * add coauthor
Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>
      
      * improve test
      
      * handle ip adapter params correctly
      
      * fix chroma qkv fusion test
      
      * fix fastercache implementation
      
      * fix more tests
      
      * fight more tests
      
      * add back set_attention_backend
      
      * update
      
      * update
      
      * make style
      
      * make fix-copies
      
      * make ip adapter processor compatible with attention dispatcher
      
      * refactor chroma as well
      
      * remove rmsnorm assert
      
      * minify and deprecate npu/xla processors
      
      * update
      
      * refactor
      
      * refactor; support flash attention 2 with cp
      
      * fix
      
      * support sage attention with cp
      
      * make torch compile compatible
      
      * update
      
      * refactor
      
      * update
      
      * refactor
      
      * refactor
      
      * add ulysses backward
      
      * try to make dreambooth script work; accelerator backward not playing well
      
      * Revert "try to make dreambooth script work; accelerator backward not playing well"
      
      This reverts commit 768d0ea6fa6a305d12df1feda2afae3ec80aa449.
      
      * workaround compilation problems with triton when doing all-to-all
      
      * support wan
      
      * handle backward correctly
      
      * support qwen
      
      * support ltx
      
      * make fix-copies
      
      * Update src/diffusers/models/modeling_utils.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * apply review suggestions
      
      * update docs
      
      * add explanation
      
      * make fix-copies
      
      * add docstrings
      
      * support passing parallel_config to from_pretrained
      
      * apply review suggestions
      
      * make style
      
      * update
      
      * Update docs/source/en/api/parallel.md
Co-authored-by: Aryan <aryan@huggingface.co>
      
      * up
      
      ---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
      dcb6dd9b
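In Ulysses-style context parallelism, each rank starts with a sequence shard but all attention heads; an all-to-all re-shards so each rank holds the full sequence for a subset of heads, which is what lets attention run unmodified. A pure-Python sketch of that exchange (real implementations use `torch.distributed` all-to-all collectives; these names are illustrative):

```python
def ulysses_all_to_all(shards):
    """shards[rank][head] -> list of tokens local to that rank.
    Returns out[rank][h] holding the FULL sequence for each head `h`
    assigned to that rank (world size must divide the head count)."""
    world = len(shards)
    heads = len(shards[0])
    assert heads % world == 0
    per = heads // world
    out = []
    for rank in range(world):
        my_heads = range(rank * per, (rank + 1) * per)
        # gather every source rank's token shard for this rank's heads,
        # concatenated in rank (i.e. sequence) order
        out.append([sum((shards[src][h] for src in range(world)), [])
                    for h in my_heads])
    return out
```

Ring attention makes the opposite trade: shards stay put and key/value blocks circulate between ranks, which is why the PR ships both and a unified dispatch over them.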
    • Dhruv Nair's avatar
      Fix Custom Code loading (#12378) · 7c54a7b3
      Dhruv Nair authored
      * update
      
      * update
      
      * update
      7c54a7b3
  17. 23 Sep, 2025 1 commit
  18. 22 Sep, 2025 2 commits
  19. 17 Sep, 2025 1 commit
    • DefTruth's avatar
      Fix many type hint errors (#12289) · efb7a299
      DefTruth authored
      * fix hidream type hint
      
      * fix hunyuan-video type hint
      
      * fix many type hint
      
      * fix many type hint errors
      
      * fix many type hint errors
      
      * fix many type hint errors
      
* make style & make quality
      efb7a299
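The most common type-hint error this kind of sweep fixes is a parameter annotated with a bare type while defaulting to `None`. A generic before/after sketch (illustrative, not a specific diffusers signature):

```python
from typing import Optional

# Wrong: the annotation claims an int is always provided,
# but the default is None; strict checkers flag this.
def resize_bad(height: int = None):
    return height or 512

# Fixed: Optional[...] makes the None default explicit.
def resize_ok(height: Optional[int] = None) -> int:
    return height if height is not None else 512
```

Both run identically; the fix only makes the signature honest to static checkers, which is why such PRs can touch many files without behavior changes.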
  20. 16 Sep, 2025 2 commits
  21. 03 Sep, 2025 2 commits
  22. 30 Aug, 2025 1 commit
  23. 26 Aug, 2025 3 commits
    • Sayak Paul's avatar
      Deprecate Flax support (#12151) · 532f41c9
      Sayak Paul authored
      
      
      * start removing flax stuff.
      
      * add deprecation warning.
      
      * add warning messages.
      
      * more warnings.
      
      * remove dockerfiles.
      
      * remove more.
      
      * Update src/diffusers/models/attention_flax.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * up
      
      ---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      532f41c9
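The Flax deprecation above follows the usual pattern of warning at call time for a release cycle before removal. A minimal sketch of such a warning (generic Python; the function name and message are placeholders, not the exact diffusers wording):

```python
import warnings

def load_flax_weights(path):
    """Hypothetical entry point that still works but warns."""
    warnings.warn(
        "Flax support is deprecated and will be removed in a future "
        "release; please migrate to the PyTorch implementation.",
        FutureWarning,
        stacklevel=2,  # point the warning at the caller, not here
    )
    return {"path": path}  # placeholder for the real loading logic
```

`FutureWarning` (rather than `DeprecationWarning`) is shown by default to end users, which suits a user-facing removal like this one.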
    • Tolga Cangöz's avatar
      Propose to update & upgrade SkyReels-V2 (#12167) · 5fcd5f56
      Tolga Cangöz authored
* fix: update SkyReels-V2 documentation and move it into the attn dispatcher
      
      * Refactors SkyReelsV2's attention implementation
      
      * style
      
      * up
      
      * Fixes formatting in SkyReels-V2 documentation
      
      Wraps the visual demonstration section in a Markdown code block.
      
      This change corrects the rendering of ASCII diagrams and examples, improving the overall readability of the document.
      
      * Docs: Condense example arrays in skyreels_v2 guide
      
      Improves the readability of the `step_matrix` examples by replacing long sequences of repeated numbers with a more compact `value×count` notation.
      
      This change makes the underlying data patterns in the examples easier to understand at a glance.
      
      * Add _repeated_blocks attribute to SkyReelsV2Transformer3DModel
      
      * Refactor rotary embedding calculations in SkyReelsV2 to separate cosine and sine frequencies
      
      * Enhance SkyReels-V2 documentation: update model loading for GPU support and remove outdated notes
      
      * up
      
      * up
      
      * Update model_id in SkyReels-V2 documentation
      
      * up
      
      * refactor: remove device_map parameter for model loading and add pipeline.to("cuda") for GPU allocation
      
      * fix: update copyright year to 2025 in skyreels_v2.md
      
      * docs: enhance parameter examples and formatting in skyreels_v2.md
      
      * docs: update example formatting and add notes on LoRA support in skyreels_v2.md
      
      * refactor: remove copied comments from transformer_wan in SkyReelsV2 classes
      
      * Clean up comments in skyreels_v2.md
      
      Removed comments about acceleration helpers and Flash Attention installation.
      
      * Add deprecation warning for `SkyReelsV2AttnProcessor2_0` class
      5fcd5f56
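The `value×count` notation used above to condense the `step_matrix` examples is plain run-length encoding. A small helper in the same spirit (the formatting choice is ours, not the SkyReels-V2 docs'):

```python
def run_length(values):
    """Collapse consecutive repeats into 'value×count' tokens."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1  # extend the current run
        else:
            out.append([v, 1])  # start a new run
    return ", ".join(f"{v}×{n}" if n > 1 else str(v) for v, n in out)
```

Applied to a diffusion step matrix row, long plateaus of identical step indices collapse to one token each, which is exactly what makes the doc examples scannable.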
    • Leo Jiang's avatar
      NPU attention refactor for FLUX (#12209) · 0fd7ee79
      Leo Jiang authored
      
      
      * NPU attention refactor for FLUX transformer
      
      * Apply style fixes
      
      ---------
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
      0fd7ee79
  24. 23 Aug, 2025 1 commit
  25. 22 Aug, 2025 2 commits
  26. 20 Aug, 2025 2 commits
    • Sayak Paul's avatar
      [chore] remove extra validation check in determine_device_map (#12176) · 4fcd0bc7
      Sayak Paul authored
      remove extra validation check in determine_device_map
      4fcd0bc7
    • galbria's avatar
      Bria 3 2 pipeline (#12010) · 7993be9e
      galbria authored
      
      
      * Add Bria model and pipeline to diffusers
      
      - Introduced `BriaTransformer2DModel` and `BriaPipeline` for enhanced image generation capabilities.
      - Updated import structures across various modules to include the new Bria components.
      - Added utility functions and output classes specific to the Bria pipeline.
      - Implemented tests for the Bria pipeline to ensure functionality and output integrity.
      
      * with working tests
      
      * style and quality pass
      
      * adding docs
      
      * add to overview
      
      * fixes from "make fix-copies"
      
      * Refactor transformer_bria.py and pipeline_bria.py: Introduce new EmbedND class for rotary position embedding, and enhance Timestep and TimestepProjEmbeddings classes. Add utility functions for handling negative prompts and generating original sigmas in pipeline_bria.py.
      
* remove redundant and duplicate tests and fix bf16 slow test
      
      * style fixes
      
      * small doc update
      
      * Enhance Bria 3.2 documentation and implementation
      
      - Updated the GitHub repository link for Bria 3.2.
      - Added usage instructions for the gated model access.
      - Introduced the BriaTransformerBlock and BriaAttention classes to the model architecture.
      - Refactored existing classes to integrate Bria-specific components, including BriaEmbedND and BriaPipeline.
      - Updated the pipeline output class to reflect Bria-specific functionality.
      - Adjusted test cases to align with the new Bria model structure.
      
      * Refactor Bria model components and update documentation
      
      - Removed outdated inference example from Bria 3.2 documentation.
      - Introduced the BriaTransformerBlock class to enhance model architecture.
      - Updated attention handling to use `attention_kwargs` instead of `joint_attention_kwargs`.
      - Improved import structure in the Bria pipeline to handle optional dependencies.
      - Adjusted test cases to reflect changes in model dtype assertions.
      
      * Update Bria model reference in documentation to reflect new file naming convention
      
      * Update docs/source/en/_toctree.yml
      
      * Refactor BriaPipeline to inherit from DiffusionPipeline instead of FluxPipeline, updating imports accordingly.
      
      * move the __call__ func to the end of file
      
      * Update BriaPipeline example to use bfloat16 for precision sensitivity for better result
      
* make style && make quality && make fix-copies
      
      ---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
      7993be9e
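The `EmbedND` class introduced above computes rotary position-embedding frequencies per axis and concatenates them. A trimmed-down sketch of the standard RoPE frequency table for a single axis (math only; the diffusers class works on torch tensors across multiple position axes):

```python
import math

def rope_freqs(pos, dim, theta=10000.0):
    """cos/sin tables for one position: dim must be even; pair i
    rotates at theta**(-2i/dim), the standard RoPE schedule."""
    assert dim % 2 == 0
    cos, sin = [], []
    for i in range(dim // 2):
        angle = pos / (theta ** (2 * i / dim))
        cos.append(math.cos(angle))
        sin.append(math.sin(angle))
    return cos, sin
```

An N-D embedder of this kind would call `rope_freqs` once per axis (e.g. height and width of the latent grid) with that axis's position and dimension slice, then concatenate the tables.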