1. 01 Oct, 2025 1 commit
    • [tests] cache non lora pipeline outputs. (#12298) · 814d710e
      Sayak Paul authored
      * cache non lora pipeline outputs.
      
      * up
      
      * up
      
      * up
      
      * up
      
      * Revert "up"
      
      This reverts commit 772c32e43397f25919c29bbbe8ef9dc7d581cfb8.
      
      * up
      
      * Revert "up"
      
      This reverts commit cca03df7fce55550ed28b59cadec12d1db188283.
      
      * up
      
      * up
      
      * add .
      
      * up
      
      * up
      
      * up
      
      * up
      
      * up
      
      * up
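      The commit above caches outputs of the base (non-LoRA) pipeline so they are computed once and reused across LoRA test cases. A minimal sketch of that idea follows; the helper name and cache layout are assumptions for illustration, not the PR's actual test utilities.

      ```python
      # Hypothetical sketch: run the base (non-LoRA) pipeline once per configuration
      # and reuse its output across LoRA test cases.
      import torch

      _BASE_OUTPUT_CACHE: dict = {}

      def get_cached_base_output(pipeline, inputs, cache_key):
          """Return the non-LoRA pipeline output, computing it only on the first call."""
          if cache_key not in _BASE_OUTPUT_CACHE:
              with torch.no_grad():
                  _BASE_OUTPUT_CACHE[cache_key] = pipeline(**inputs)[0]
          return _BASE_OUTPUT_CACHE[cache_key]
      ```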
  2. 30 Sep, 2025 5 commits
  3. 29 Sep, 2025 6 commits
  4. 26 Sep, 2025 3 commits
  5. 25 Sep, 2025 1 commit
  6. 24 Sep, 2025 8 commits
  7. 23 Sep, 2025 3 commits
  8. 22 Sep, 2025 7 commits
  9. 21 Sep, 2025 1 commit
  10. 20 Sep, 2025 1 commit
  11. 18 Sep, 2025 2 commits
    • Convert alphas for embedders for sd-scripts to ai toolkit conversion (#12332) · 7e7e62c6
      Dave Lage authored
      
      
      * Convert alphas for embedders for sd-scripts to ai toolkit conversion
      
      * Add kohya embedders conversion test
      
      * Apply style fixes
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
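      The commit above carries the kohya/sd-scripts ".alpha" scaling tensors for the text-embedder LoRA layers over into the ai-toolkit-style state dict that diffusers consumes. A rough, illustrative sketch is below; the key patterns and helper are made up for the example, and the real mapping lives in diffusers' LoRA conversion utilities.

      ```python
      # Illustrative only: copy kohya-style ".alpha" tensors for the text encoder
      # (embedder) LoRA modules into the converted state dict.
      def convert_embedder_alphas(kohya_state_dict, key_map):
          """key_map: {kohya_module_name: converted_module_name} built for the embedders."""
          converted = {}
          for key, tensor in kohya_state_dict.items():
              if not key.endswith(".alpha"):
                  continue
              module_name = key[: -len(".alpha")]
              if module_name in key_map:
                  converted[f"{key_map[module_name]}.alpha"] = tensor
          return converted
      ```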
    • Add RequestScopedPipeline for safe concurrent inference, tokenizer lock and... · eda9ff83
      Fredy authored
      
      Add RequestScopedPipeline for safe concurrent inference, tokenizer lock and non-mutating retrieve_timesteps (#12328)
      
      * Basic implementation of request scheduling
      
      * Basic editing in SD and Flux Pipelines
      
      * Small Fix
      
      * Fix
      
      * Update for more pipelines
      
      * Add examples/server-async
      
      * Add examples/server-async
      
      * Updated RequestScopedPipeline to handle a single tokenizer lock to avoid race conditions
      
      * Fix
      
      * Fix _TokenizerLockWrapper
      
      * Fix _TokenizerLockWrapper
      
      * Delete _TokenizerLockWrapper
      
      * Fix tokenizer
      
      * Update examples/server-async
      
      * Fix server-async
      
      * Optimizations in examples/server-async
      
      * We keep the implementation simple in examples/server-async
      
      * Update examples/server-async/README.md
      
      * Update examples/server-async/README.md for changes to tokenizer locks and backward-compatible retrieve_timesteps
      
      * The changes to the diffusers core have been undone and all logic is being moved to examples/server-async
      
      * Update examples/server-async/utils/*
      
      * Fix BaseAsyncScheduler
      
      * Rollback in the core of the diffusers
      
      * Update examples/server-async/README.md
      
      * Complete rollback of diffusers core files
      
      * Simple implementation of an asynchronous server compatible with SD3-3.5 and Flux Pipelines
      
      * Update examples/server-async/README.md
      
      * Fixed import errors in 'examples/server-async/serverasync.py'
      
      * Flux Pipeline Discard
      
      * Update examples/server-async/README.md
      
      * Apply style fixes
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
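      The example added above scopes mutable pipeline state to each request while sharing the loaded model weights, and serializes tokenizer calls behind a lock because fast tokenizers are not guaranteed to be thread-safe. A condensed sketch of that pattern follows; it is not the exact code under examples/server-async, and the method names are illustrative.

      ```python
      # Condensed sketch: share heavy model weights across requests, give each
      # request its own copy of per-run state (the scheduler), and lock tokenizer use.
      import copy
      import threading

      class RequestScopedPipeline:
          def __init__(self, pipeline):
              self._pipeline = pipeline                  # loaded once, shared by all requests
              self._tokenizer_lock = threading.Lock()

          def _clone_for_request(self):
              # Shallow copy shares the models; the scheduler is deep-copied so one
              # request's step state never leaks into another request's run.
              local_pipe = copy.copy(self._pipeline)
              local_pipe.scheduler = copy.deepcopy(self._pipeline.scheduler)
              return local_pipe

          def tokenize(self, prompt):
              with self._tokenizer_lock:
                  return self._pipeline.tokenizer(prompt, return_tensors="pt")

          def __call__(self, *args, **kwargs):
              return self._clone_for_request()(*args, **kwargs)
      ```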
  12. 17 Sep, 2025 1 commit
    • Fix many type hint errors (#12289) · efb7a299
      DefTruth authored
      * fix hidream type hint
      
      * fix hunyuan-video type hint
      
      * fix many type hint
      
      * fix many type hint errors
      
      * fix many type hint errors
      
      * fix many type hint errors
      
      * make style & make quality
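      The most common fix in a pass like the one above is annotating parameters that default to None as Optional rather than a bare type. The signature below is an illustration only, not taken from the diff.

      ```python
      from typing import Optional
      import torch

      # Before: device: torch.device = None (incorrect for a None default)
      def encode_prompt(prompt: str, device: Optional[torch.device] = None) -> torch.Tensor:
          ...
      ```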
  13. 16 Sep, 2025 1 commit
    • Fix autoencoder_kl_wan.py bugs for Wan2.2 VAE (#12335) · d06750a5
      Zijian Zhou authored
      * Update autoencoder_kl_wan.py
      
      With the Wan2.2 VAE, the spatial compression ratio calculated here is incorrect: it should be 16, not 8. Passing it in directly via the config ensures the correct value is used.
      
      * Update autoencoder_kl_wan.py
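      The intent of the fix above is to trust an explicitly configured spatial compression ratio (16 for the Wan2.2 VAE) instead of deriving it from the block structure, which yields 8. A hedged sketch with assumed attribute names:

      ```python
      # Sketch only; attribute names are assumptions, not the exact config fields.
      def get_spatial_compression_ratio(config, num_spatial_downsample_stages=3):
          ratio = getattr(config, "spatial_compression_ratio", None)
          if ratio is not None:
              return ratio  # e.g. 16 for Wan2.2
          # Fallback derivation (2**3 == 8), which the commit reports as wrong for Wan2.2.
          return 2 ** num_spatial_downsample_stages
      ```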