1. 11 Jul, 2025 1 commit
  2. 10 Jul, 2025 6 commits
  3. 09 Jul, 2025 4 commits
  4. 08 Jul, 2025 2 commits
    • First Block Cache (#11180) · 0454fbb3
      Aryan authored
      
      
      * update
      
      * modify Flux single blocks to make them compatible with cache techniques (without too much model-specific intrusive code)
      
      * remove debug logs
      
      * update
      
      * cache context for different batches of data
      
      * fix hidden-states residual bug for single return outputs; support LTX
      
      * fix controlnet flux
      
      * support Flux, LTX I2V, LTX condition
      
      * update
      
      * update
      
      * Update docs/source/en/api/cache.md
      
      * Update src/diffusers/hooks/hooks.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * address review comments pt. 1
      
      * address review comments pt. 2
      
      * cache context refactor; address review pt. 3
      
      * address review comments
      
      * metadata registration with decorators instead of a centralized registry
      
      * support cogvideox
      
      * support mochi
      
      * fix
      
      * remove unused function
      
      * remove central registry based on review
      
      * update
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
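A minimal usage sketch of the first-block cache this PR introduces, assuming the `FirstBlockCacheConfig` config and the `enable_cache` helper documented in the updated docs/source/en/api/cache.md; the import path, model id, and threshold below are illustrative assumptions, not taken verbatim from the PR.

```python
import torch
from diffusers import FluxPipeline
# Assumed import path; the config may also be re-exported at the package top level.
from diffusers.hooks import FirstBlockCacheConfig

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Skip recomputing the later transformer blocks whenever the first block's residual
# changes by less than the threshold between denoising steps.
pipe.transformer.enable_cache(FirstBlockCacheConfig(threshold=0.2))

image = pipe("a photo of a cat", num_inference_steps=28).images[0]
image.save("cat.png")
```

Per the commit messages above, the same pattern should extend to the other supported transformers (LTX, CogVideoX, Mochi), since each model registers its cache metadata via decorators rather than a central registry.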
    • [CI] Fix big GPU test marker (#11786) · cbc8ced2
      Dhruv Nair authored
      * update
      
      * update
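For context only, a hypothetical sketch of how a GPU-gating pytest marker is typically declared and selected; the marker name is illustrative and not necessarily the one this fix touches.

```python
import pytest

# Illustrative marker name; a real marker must also be registered (e.g. in
# pyproject.toml or setup.cfg) so pytest does not warn about an unknown mark.
@pytest.mark.big_accelerator
def test_pipeline_on_large_gpu():
    ...
```

CI would then opt these tests in with something like `pytest -m big_accelerator`.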
  5. 07 Jul, 2025 2 commits
  6. 04 Jul, 2025 2 commits
    • Fix Wan AccVideo/CausVid fuse_lora (#11856) · 425a715e
      Aryan authored
      * fix
      
      * actually, better fix
      
      * empty commit; trigger tests again
      
      * mark wanvace test as flaky
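A hedged sketch of the code path this fix exercises: loading an AccVideo/CausVid-style LoRA into a Wan pipeline and fusing it. The model id and LoRA path are placeholders.

```python
import torch
from diffusers import WanPipeline

# Placeholder model id and LoRA file, for illustration only.
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("path/to/causvid_lora.safetensors", adapter_name="causvid")

# fuse_lora() merges the adapter weights into the base transformer so inference runs
# without LoRA hooks; this is the call fixed above for AccVideo/CausVid checkpoints.
pipe.fuse_lora(lora_scale=1.0)
pipe.to("cuda")
```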
    • FIX set_lora_device when target layers differ (#11844) · 25279175
      Benjamin Bossan authored
      
      
      * FIX set_lora_device when target layers differ
      
      Resolves #11833
      
      Fixes a bug that occurs after calling set_lora_device when multiple LoRA
      adapters are loaded that target different layers.
      
      Note: Technically, the accompanying test does not require a GPU, because
      the bug is triggered even if the parameters are already on the target
      device, i.e. loading on CPU and then setting the device to CPU is enough
      to reproduce it. However, such a no-op device move may be optimized away
      in the future, so I decided to test with a GPU.
      
      * Update docstring to warn about device mismatch
      
      * Extend docstring with an example
      
      * Fix docstring
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
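A minimal sketch of the scenario fixed here: two LoRA adapters that target different layers, with `set_lora_device` moving only one of them. The checkpoint and LoRA names are placeholders.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# Placeholder LoRA files; what matters is that the two adapters target different layers.
pipe.load_lora_weights("style_lora.safetensors", adapter_name="style")
pipe.load_lora_weights("text_encoder_only_lora.safetensors", adapter_name="text")

# Move only the "style" adapter to the GPU. Before this fix, iterating over layers that
# the other adapter does not target could fail; see #11833.
pipe.set_lora_device(adapter_names=["style"], device="cuda")
```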
  7. 02 Jul, 2025 3 commits
  8. 01 Jul, 2025 2 commits
  9. 30 Jun, 2025 3 commits
  10. 28 Jun, 2025 1 commit
  11. 27 Jun, 2025 2 commits
  12. 26 Jun, 2025 6 commits
  13. 25 Jun, 2025 1 commit
  14. 24 Jun, 2025 5 commits