1. 15 Oct, 2025 1 commit
  2. 14 Oct, 2025 2 commits
  3. 13 Oct, 2025 1 commit
  4. 11 Oct, 2025 1 commit
  5. 06 Oct, 2025 1 commit
  6. 30 Sep, 2025 2 commits
  7. 29 Sep, 2025 3 commits
  8. 26 Sep, 2025 1 commit
  9. 24 Sep, 2025 2 commits
    • Introduce cache-dit to community optimization (#12366) · 310fdaf5
      DefTruth authored
      * docs: introduce cache-dit to diffusers (×7)
      
      * misc: update examples link (×2)
      
      * docs: introduce cache-dit to diffusers (×5)
      
      * Refine documentation for CacheDiT features
      
      Updated the wording for clarity and consistency in the documentation. Adjusted sections on cache acceleration, automatic block adapter, patch functor, and hybrid cache configuration.
      310fdaf5
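The CacheDiT commit above is about reusing cached transformer-block outputs across denoising steps instead of recomputing every block at every step. A minimal, library-agnostic sketch of that step-level caching idea (all names here are illustrative, not cache-dit's actual API):

```python
# Hypothetical sketch of step-level block caching, the core idea behind
# cache accelerators for diffusion transformers: recompute a wrapped
# block only every `refresh_interval` denoising steps and reuse the
# cached output in between. Names are illustrative, not cache-dit's API.

class CachedBlock:
    def __init__(self, block, refresh_interval=2):
        self.block = block                    # the expensive callable to wrap
        self.refresh_interval = refresh_interval
        self._cache = None
        self._step = 0

    def __call__(self, x):
        # Recompute on the first step and on every Nth step thereafter;
        # otherwise serve the cached result.
        if self._cache is None or self._step % self.refresh_interval == 0:
            self._cache = self.block(x)
        self._step += 1
        return self._cache


calls = []
def expensive_block(x):
    calls.append(x)
    return x * 2

cached = CachedBlock(expensive_block, refresh_interval=2)
outputs = [cached(step) for step in range(4)]
# The block ran only on steps 0 and 2; steps 1 and 3 reused the cache.
print(outputs)     # [0, 0, 4, 4]
print(len(calls))  # 2
```

The trade-off the real feature tunes (hybrid cache configuration, per-block adapters) is how stale a cached output may get before quality degrades; this sketch only shows the fixed-interval case.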
    • Context Parallel w/ Ring & Ulysses & Unified Attention (#11941) · dcb6dd9b
      Aryan authored
      * update
      
      * update
      
      * add coauthor
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * improve test
      
      * handle ip adapter params correctly
      
      * fix chroma qkv fusion test
      
      * fix fastercache implementation
      
      * fix more tests
      
      * fight more tests
      
      * add back set_attention_backend
      
      * update
      
      * update
      
      * make style
      
      * make fix-copies
      
      * make ip adapter processor compatible with attention dispatcher
      
      * refactor chroma as well
      
      * remove rmsnorm assert
      
      * minify and deprecate npu/xla processors
      
      * update
      
      * refactor
      
      * refactor; support flash attention 2 with cp
      
      * fix
      
      * support sage attention with cp
      
      * make torch compile compatible
      
      * update
      
      * refactor
      
      * update
      
      * refactor
      
      * refactor
      
      * add ulysses backward
      
      * try to make dreambooth script work; accelerator backward not playing well
      
      * Revert "try to make dreambooth script work; accelerator backward not playing well"
      
      This reverts commit 768d0ea6fa6a305d12df1feda2afae3ec80aa449.
      
      * workaround compilation problems with triton when doing all-to-all
      
      * support wan
      
      * handle backward correctly
      
      * support qwen
      
      * support ltx
      
      * make fix-copies
      
      * Update src/diffusers/models/modeling_utils.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * apply review suggestions
      
      * update docs
      
      * add explanation
      
      * make fix-copies
      
      * add docstrings
      
      * support passing parallel_config to from_pretrained
      
      * apply review suggestions
      
      * make style
      
      * update
      
      * Update docs/source/en/api/parallel.md
      Co-authored-by: Aryan <aryan@huggingface.co>
      
      * up
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
      dcb6dd9b
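The context-parallel commit above combines Ring and Ulysses attention. The Ulysses half rests on a single collective: before the all-to-all, each rank holds a slice of the sequence for all attention heads; after it, each rank holds the full sequence for a slice of the heads, so ordinary per-head attention can run locally. A toy, single-process illustration of that layout swap (real implementations use torch.distributed collectives; everything here is a plain-Python stand-in):

```python
# Toy illustration of the Ulysses-style all-to-all used in sequence
# parallelism. Input layout: shards[rank][head] is the list of tokens
# rank holds for that head (sequence-sharded). Output layout: each rank
# holds the complete sequence for its assigned subset of heads
# (head-sharded), ready for ordinary attention.

def ulysses_all_to_all(shards):
    world_size = len(shards)
    num_heads = len(shards[0])
    heads_per_rank = num_heads // world_size
    out = []
    for rank in range(world_size):
        head_slice = range(rank * heads_per_rank, (rank + 1) * heads_per_rank)
        # Gather every rank's sequence slice for this rank's heads,
        # preserving sequence order across source ranks.
        out.append([
            [tok for src in range(world_size) for tok in shards[src][h]]
            for h in head_slice
        ])
    return out

# 2 ranks, 2 heads, a length-4 sequence split in half across ranks.
shards = [
    [[0, 1], [10, 11]],   # rank 0: tokens 0-1 of head 0 and head 1
    [[2, 3], [12, 13]],   # rank 1: tokens 2-3 of head 0 and head 1
]
print(ulysses_all_to_all(shards))
# [[[0, 1, 2, 3]], [[10, 11, 12, 13]]]
```

Ring attention takes the complementary approach, keeping queries local and rotating key/value shards around the ranks; the "unified" part of the PR title refers to composing both schemes.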
  10. 23 Sep, 2025 2 commits
  11. 22 Sep, 2025 1 commit
  12. 10 Sep, 2025 2 commits
  13. 08 Sep, 2025 1 commit
  14. 05 Sep, 2025 2 commits
  15. 04 Sep, 2025 1 commit
  16. 03 Sep, 2025 2 commits
    • [Quantization] Add TRT-ModelOpt as a Backend (#11173) · 4acbfbf1
      Ishan Modi authored
      * initial commit
      
      * update
      
      * updates
      
      * update (×6)
      
      * addressed PR comments
      
      * update
      
      * addressed PR comments
      
      * update (×6)
      
      * updates
      
      * update (×2)
      
      * addressed PR comments
      
      * updates
      
      * code formatting
      
      * update
      
      * addressed PR comments (×4)
      
      * fix docs and dependencies
      
      * fixed dependency test
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      4acbfbf1
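The quantization backend added above automates what is, at its core, a scale-and-round mapping from float weights to low-bit integers. A minimal sketch of symmetric per-tensor int8 quantization, the simplest instance of the technique (no real modelopt/TensorRT API is used or implied here):

```python
# The essence of post-training weight quantization that backends like
# TRT-ModelOpt automate: map float weights to int8 with a per-tensor
# scale chosen so the largest magnitude lands on 127, then dequantize
# at use time. Symmetric, per-tensor variant; real backends add
# per-channel scales, calibration, and fused kernels.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.27, 0.0, 1.27]
q, scale = quantize_int8(w)
print(q)                     # [50, -127, 0, 127]
print(dequantize(q, scale))  # values close to the originals
```

The round trip is lossy (everything between two integer steps collapses to the same value), which is why such backends ship accuracy-evaluation tooling alongside the conversion itself.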
    • [docs] AutoPipeline (#12160) · 6549b04e
      Steven Liu authored
      * refresh
      
      * feedback
      
      * feedback
      
      * supported models
      
      * fix
      6549b04e
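The AutoPipeline docs refreshed above describe classes that pick the right task-specific pipeline for a checkpoint automatically. A toy sketch of the dispatch mechanism involved: read the pipeline class declared by the checkpoint and map it to the variant for the requested task (in diffusers the metadata comes from the repo's model_index.json; the mapping contents below are illustrative):

```python
# Toy mechanism behind an "Auto" pipeline class: look up the
# checkpoint's declared pipeline class and dispatch to the
# text-to-image variant of the same model family. Mapping entries
# here are illustrative stand-ins, not the real registry.

AUTO_TEXT2IMAGE_MAPPING = {
    "StableDiffusionPipeline": "StableDiffusionPipeline",
    "StableDiffusionImg2ImgPipeline": "StableDiffusionPipeline",
    "FluxPipeline": "FluxPipeline",
}

class AutoPipelineForText2Image:
    @classmethod
    def resolve(cls, model_index):
        # model_index mimics the checkpoint metadata shipped with a repo.
        name = model_index["_class_name"]
        try:
            return AUTO_TEXT2IMAGE_MAPPING[name]
        except KeyError:
            raise ValueError(f"{name} has no text-to-image variant")

# A checkpoint saved from an img2img pipeline still resolves to the
# family's text-to-image class:
print(AutoPipelineForText2Image.resolve(
    {"_class_name": "StableDiffusionImg2ImgPipeline"}
))
# StableDiffusionPipeline
```

This is why the same checkpoint can be loaded through AutoPipelineForText2Image, AutoPipelineForImage2Image, etc.: each Auto class keeps its own task mapping over the shared family names.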
  17. 02 Sep, 2025 1 commit
  18. 31 Aug, 2025 1 commit
  19. 28 Aug, 2025 1 commit
  20. 27 Aug, 2025 2 commits
  21. 26 Aug, 2025 3 commits
    • Fix typos and inconsistencies (#12204) · 4b7fe044
      Tianqi Tang authored
      Fix typos and test assertions
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      4b7fe044
    • Propose to update & upgrade SkyReels-V2 (#12167) · 5fcd5f56
      Tolga Cangöz authored
      * fix: update SkyReels-V2 documentation and moving into attn dispatcher
      
      * Refactors SkyReelsV2's attention implementation
      
      * style
      
      * up
      
      * Fixes formatting in SkyReels-V2 documentation
      
      Wraps the visual demonstration section in a Markdown code block.
      
      This change corrects the rendering of ASCII diagrams and examples, improving the overall readability of the document.
      
      * Docs: Condense example arrays in skyreels_v2 guide
      
      Improves the readability of the `step_matrix` examples by replacing long sequences of repeated numbers with a more compact `value×count` notation.
      
      This change makes the underlying data patterns in the examples easier to understand at a glance.
      
      * Add _repeated_blocks attribute to SkyReelsV2Transformer3DModel
      
      * Refactor rotary embedding calculations in SkyReelsV2 to separate cosine and sine frequencies
      
      * Enhance SkyReels-V2 documentation: update model loading for GPU support and remove outdated notes
      
      * up
      
      * up
      
      * Update model_id in SkyReels-V2 documentation
      
      * up
      
      * refactor: remove device_map parameter for model loading and add pipeline.to("cuda") for GPU allocation
      
      * fix: update copyright year to 2025 in skyreels_v2.md
      
      * docs: enhance parameter examples and formatting in skyreels_v2.md
      
      * docs: update example formatting and add notes on LoRA support in skyreels_v2.md
      
      * refactor: remove copied comments from transformer_wan in SkyReelsV2 classes
      
      * Clean up comments in skyreels_v2.md
      
      Removed comments about acceleration helpers and Flash Attention installation.
      
      * Add deprecation warning for `SkyReelsV2AttnProcessor2_0` class
      5fcd5f56
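One of the commits above condenses the `step_matrix` examples by replacing long runs of repeated numbers with value×count notation. That transformation can be expressed in a few lines (a sketch of the notation's meaning, not code from the PR):

```python
# The value×count condensation applied to the step_matrix examples in
# the skyreels_v2 guide: collapse each run of identical values into a
# single "value×count" token so the array's pattern is visible at a
# glance.
from itertools import groupby

def condense(values):
    return ", ".join(
        f"{v}\u00d7{len(list(run))}" for v, run in groupby(values)
    )

print(condense([0, 0, 0, 0, 5, 5, 10]))  # 0×4, 5×2, 10×1
```

Reading the notation back is the inverse: each `v×n` token expands to `n` copies of `v`, so no information about the original array is lost.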
  22. 25 Aug, 2025 5 commits
  23. 22 Aug, 2025 2 commits