"src/vscode:/vscode.git/clone" did not exist on "a584d42ce5853d160c3c1bfb5ff0f0ee65c301e6"
  1. 16 Jul, 2025 2 commits
    • Fixed bug: Uncontrolled recursive calls that caused an infinite loop when... · c5d6e0b5
      Guoqing Zhu authored
      
      Fixed bug: Uncontrolled recursive calls that caused an infinite loop when loading certain pipelines containing Transformer2DModel (#11923)
      
      * fix a bug causing a recursive call loop
      
      * fix a bug causing a recursive call loop
      
      * ruff format
      
      ---------
      Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
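      The commit title describes the failure but not the mechanism. As an illustrative sketch only (hypothetical class and method names, not the actual diffusers code), the snippet below shows how a dispatching factory method can recurse into itself indefinitely, and how checking the current class before re-dispatching breaks the cycle:

      # Illustrative sketch of a recursive-dispatch bug and its guard -- hypothetical names.
      class BaseTransformer2D:
          @classmethod
          def from_config(cls, config: dict):
              # Decide which specialized subclass should handle this config.
              target = ContinuousTransformer2D if config.get("is_continuous") else DiscreteTransformer2D

              # BUG: unconditionally calling BaseTransformer2D.from_config(config) here
              # would re-enter this method forever.
              # FIX: only re-dispatch while still on the base class; once the call
              # reaches the specialized subclass, build the model directly.
              if cls is BaseTransformer2D and target is not cls:
                  return target.from_config(config)
              return cls(config)

          def __init__(self, config: dict):
              self.config = config

      class ContinuousTransformer2D(BaseTransformer2D):
          pass

      class DiscreteTransformer2D(BaseTransformer2D):
          pass

      model = BaseTransformer2D.from_config({"is_continuous": True})
      assert isinstance(model, ContinuousTransformer2D)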
    • Remove forced float64 from onnx stable diffusion pipelines (#11054) · 39831599
      lostdisc authored
      
      
      * Update pipeline_onnx_stable_diffusion.py to remove float64
      
      init_noise_sigma was being cast to float64 before being multiplied with the latents, which promoted the latents to float64 as well and caused errors in onnxruntime, which expected float16.
      
      * Update pipeline_onnx_stable_diffusion_inpaint.py to remove float64
      
      init_noise_sigma was being cast to float64 before being multiplied with the latents, which promoted the latents to float64 as well and caused errors in onnxruntime, which expected float16.
      
      * Update pipeline_onnx_stable_diffusion_upscale.py to remove float64
      
      init_noise_sigma was being cast to float64 before being multiplied with the latents, which promoted the latents to float64 as well and caused errors in onnxruntime, which expected float16.
      
      * Update pipeline_onnx_stable_diffusion.py with comment for previous commit
      
      Added a comment on the purpose of init_noise_sigma. The same comment exists in related scripts that use this line of code, but it was missing here.
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
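      The three messages above describe the same dtype-promotion problem. A minimal NumPy sketch of the issue and of the kind of cast that avoids it (illustrative only, not the actual pipeline code):

      import numpy as np

      # Latents in the precision ONNX Runtime expects.
      latents = np.random.randn(1, 4, 64, 64).astype(np.float16)

      # Stand-in for scheduler.init_noise_sigma held as float64.
      init_noise_sigma = np.array([14.6146], dtype=np.float64)

      # float16 * float64 promotes the whole latents array to float64,
      # which onnxruntime rejects when the model expects float16 inputs.
      promoted = latents * init_noise_sigma
      print(promoted.dtype)  # float64

      # Keeping the sigma in the latents' dtype avoids the promotion.
      scaled = latents * init_noise_sigma.astype(latents.dtype)
      print(scaled.dtype)  # float16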
  2. 15 Jul, 2025 1 commit
  3. 11 Jul, 2025 2 commits
  4. 10 Jul, 2025 6 commits
  5. 09 Jul, 2025 4 commits
  6. 08 Jul, 2025 2 commits
    • First Block Cache (#11180) · 0454fbb3
      Aryan authored
      
      
      * update
      
      * modify flux single blocks to make them compatible with cache techniques (without too much model-specific intrusive code)
      
      * remove debug logs
      
      * update
      
      * cache context for different batches of data
      
      * fix hidden-states residual bug for single return outputs; support ltx
      
      * fix controlnet flux
      
      * support flux, ltx i2v, ltx condition
      
      * update
      
      * update
      
      * Update docs/source/en/api/cache.md
      
      * Update src/diffusers/hooks/hooks.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * address review comments pt. 1
      
      * address review comments pt. 2
      
      * cache context refactor; address review pt. 3
      
      * address review comments
      
      * metadata registration with decorators instead of a centralized registry
      
      * support cogvideox
      
      * support mochi
      
      * fix
      
      * remove unused function
      
      * remove central registry based on review
      
      * update
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
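      The commit log above does not spell out what first-block caching does. As a conceptual sketch (all names are hypothetical; this is not the diffusers hooks API), the idea is to always run the first transformer block, compare its output with the previous denoising step, and skip the remaining blocks when the change is small, reusing their cached residual:

      import torch

      class FirstBlockCacheRunner:
          """Conceptual sketch of first-block caching (hypothetical, simplified)."""

          def __init__(self, blocks, threshold: float = 0.05):
              self.blocks = blocks              # callables mapping hidden states -> hidden states
              self.threshold = threshold
              self._prev_first_out = None       # first-block output from the previous step
              self._cached_residual = None      # (final output - first-block output)

          def __call__(self, hidden_states: torch.Tensor) -> torch.Tensor:
              first_out = self.blocks[0](hidden_states)

              if self._prev_first_out is not None:
                  # Relative change of the first block's output since the last step.
                  diff = (first_out - self._prev_first_out).abs().mean()
                  rel = diff / (self._prev_first_out.abs().mean() + 1e-8)
                  if rel < self.threshold:
                      # The first block barely moved: skip the remaining blocks
                      # and reuse their cached residual.
                      self._prev_first_out = first_out
                      return first_out + self._cached_residual

              # Otherwise run the remaining blocks and refresh the cache.
              out = first_out
              for block in self.blocks[1:]:
                  out = block(out)
              self._prev_first_out = first_out
              self._cached_residual = out - first_out
              return out

      # Toy usage: linear layers stand in for transformer blocks.
      blocks = [torch.nn.Linear(8, 8) for _ in range(4)]
      runner = FirstBlockCacheRunner(blocks, threshold=0.05)
      x = torch.randn(2, 8)
      for step in range(3):                  # stand-in for consecutive denoising steps
          out = runner(x + 0.001 * step)     # inputs change only slightly, so the cache likely hits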
    • [CI] Fix big GPU test marker (#11786) · cbc8ced2
      Dhruv Nair authored
      * update
      
      * update
  7. 07 Jul, 2025 2 commits
  8. 04 Jul, 2025 2 commits
    • Fix Wan AccVideo/CausVid fuse_lora (#11856) · 425a715e
      Aryan authored
      * fix
      
      * actually, better fix
      
      * empty commit; trigger tests again
      
      * mark wanvace test as flaky
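      For context, fuse_lora merges loaded LoRA weights into the base model weights. A minimal usage sketch of the code path this PR fixes, assuming a Wan pipeline; the LoRA repository id below is a placeholder, not the actual AccVideo/CausVid checkpoint:

      import torch
      from diffusers import WanPipeline

      pipe = WanPipeline.from_pretrained(
          "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16
      )

      # Placeholder repository id standing in for an AccVideo/CausVid LoRA.
      pipe.load_lora_weights("user/wan-causvid-lora", adapter_name="causvid")

      # Merge the LoRA into the transformer weights -- the call this PR fixes.
      pipe.fuse_lora(lora_scale=0.75)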
    • FIX set_lora_device when target layers differ (#11844) · 25279175
      Benjamin Bossan authored
      
      
      * FIX set_lora_device when target layers differ
      
      Resolves #11833
      
      Fixes a bug that occurs after calling set_lora_device when multiple LoRA
      adapters are loaded that target different layers.
      
      Note: Technically, the accompanying test does not require a GPU because
      the bug is triggered even if the parameters are already on the
      corresponding device, i.e. loading on CPU and then changing the device
      to CPU is sufficient to cause the bug. However, this may be optimized
      away in the future, so I decided to test with GPU.
      
      * Update docstring to warn about device mismatch
      
      * Extend docstring with an example
      
      * Fix docstring
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
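      A minimal sketch of the scenario described above, with placeholder LoRA repository ids: two adapters that target different sets of layers are loaded and then moved with set_lora_device, the call that previously failed:

      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
      )

      # Placeholder repositories: one LoRA that targets only the UNet, one that
      # also targets the text encoder, so the two adapters cover different layers.
      pipe.load_lora_weights("user/lora-unet-only", adapter_name="unet_only")
      pipe.load_lora_weights("user/lora-unet-and-text-encoder", adapter_name="unet_te")

      # Move both adapters to the GPU; before this fix the mismatch in targeted
      # layers could make this call fail.
      pipe.set_lora_device(adapter_names=["unet_only", "unet_te"], device="cuda")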
  9. 02 Jul, 2025 3 commits
  10. 01 Jul, 2025 2 commits
  11. 30 Jun, 2025 3 commits
  12. 28 Jun, 2025 1 commit
  13. 27 Jun, 2025 2 commits
  14. 26 Jun, 2025 6 commits
  15. 25 Jun, 2025 1 commit
  16. 24 Jun, 2025 1 commit