"test/srt/models/test_encoder_embedding_models.py" did not exist on "7514b9f8d3660417c085538076cf5162f32ce2fb"
  1. 08 Aug, 2025 2 commits
  2. 05 Aug, 2025 1 commit
  3. 29 Jul, 2025 1 commit
  4. 10 Jul, 2025 1 commit
  5. 09 Jul, 2025 1 commit
  6. 08 Jul, 2025 1 commit
  7. 04 Jul, 2025 2 commits
    • Fix Wan AccVideo/CausVid fuse_lora (#11856) · 425a715e
      Aryan authored
      * fix
      
      * actually, better fix
      
      * empty commit; trigger tests again
      
      * mark wanvace test as flaky
    • FIX set_lora_device when target layers differ (#11844) · 25279175
      Benjamin Bossan authored
      
      
      * FIX set_lora_device when target layers differ
      
      Resolves #11833
      
      Fixes a bug that occurs after calling set_lora_device when multiple LoRA
      adapters are loaded that target different layers.
      
      Note: Technically, the accompanying test does not require a GPU because
      the bug is triggered even if the parameters are already on the
      corresponding device, i.e. loading on CPU and then changing the device
      to CPU is sufficient to cause the bug. However, this may be optimized
      away in the future, so I decided to test with GPU.
      
      * Update docstring to warn about device mismatch
      
      * Extend docstring with an example
      
      * Fix docstring
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
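      The failure mode described in the commit note above can be modeled in a few lines of plain Python (a toy sketch with made-up adapter and layer names, not the actual diffusers code): if moving adapters to a device iterates every LoRA-bearing module and assumes every loaded adapter is present on every module, an adapter that targets only some layers triggers a KeyError.

      ```python
      # Toy model of the set_lora_device failure mode (hypothetical names,
      # not the diffusers implementation). Each module stores the device of
      # its per-adapter weights; "adapter_b" does not target "to_out".
      modules = {
          "to_q": {"adapter_a": "cpu", "adapter_b": "cpu"},
          "to_out": {"adapter_a": "cpu"},
      }

      def set_lora_device_buggy(adapter_names, device):
          # Assumes every adapter exists on every module.
          for weights in modules.values():
              for name in adapter_names:
                  _ = weights[name]  # mimics accessing the weights -> KeyError
                  weights[name] = device

      def set_lora_device_fixed(adapter_names, device):
          # Guard the lookup: only move adapters the module actually has.
          for weights in modules.values():
              for name in adapter_names:
                  if name in weights:
                      weights[name] = device

      try:
          set_lora_device_buggy(["adapter_a", "adapter_b"], "cuda:0")
      except KeyError as err:
          print(f"buggy version fails on missing adapter: {err}")

      set_lora_device_fixed(["adapter_a", "adapter_b"], "cuda:0")
      print(modules["to_out"])  # adapter_a moved; adapter_b never targeted it
      ```

      Note the toy also shows why loading on CPU and "moving" to CPU is enough to hit the bug: the failing step is the lookup, not the device transfer.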
  8. 02 Jul, 2025 1 commit
  9. 30 Jun, 2025 1 commit
  10. 28 Jun, 2025 1 commit
  11. 27 Jun, 2025 1 commit
  12. 25 Jun, 2025 1 commit
  13. 19 Jun, 2025 1 commit
  14. 18 Jun, 2025 1 commit
  15. 13 Jun, 2025 1 commit
  16. 30 May, 2025 1 commit
  17. 27 May, 2025 1 commit
  18. 22 May, 2025 1 commit
  19. 06 May, 2025 1 commit
  20. 15 Apr, 2025 2 commits
    • post release 0.33.0 (#11255) · 4b868f14
      Sayak Paul authored
      
      
      * post release
      
      * update
      
      * fix deprecations
      
      * remaining
      
      * update
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
    • [LoRA] Add LoRA support to AuraFlow (#10216) · 9352a5ca
      Hameer Abbasi authored
      
      
      * Add AuraFlowLoraLoaderMixin
      
      * Add comments, remove qkv fusion
      
      * Add Tests
      
      * Add AuraFlowLoraLoaderMixin to documentation
      
      * Add Suggested changes
      
      * Change attention_kwargs->joint_attention_kwargs
      
      * Rebasing derp.
      
      * fix
      
      * fix
      
      * Quality fixes.
      
      * make style
      
      * `make fix-copies`
      
      * `ruff check --fix`
      
      * Attempt 1 to fix tests.
      
      * Attempt 2 to fix tests.
      
      * Attempt 3 to fix tests.
      
      * Address review comments.
      
      * Rebasing derp.
      
      * Get more tests passing by copying from Flux. Address review comments.
      
      * `joint_attention_kwargs`->`attention_kwargs`
      
      * Add `lora_scale` property for TE LoRAs.
      
      * Make test better.
      
      * Remove useless property.
      
      * Skip TE-only tests for AuraFlow.
      
      * Support LoRA for non-CLIP TEs.
      
      * Restore LoRA tests.
      
      * Undo adding LoRA support for non-CLIP TEs.
      
      * Undo support for TE in AuraFlow LoRA.
      
      * `make fix-copies`
      
      * Sync with upstream changes.
      
      * Remove unneeded stuff.
      
      * Mirror `Lumina2`.
      
      * Skip for MPS.
      
      * Address review comments.
      
      * Remove duplicated code.
      
      * Remove unnecessary code.
      
      * Remove repeated docs.
      
      * Propagate attention.
      
      * Fix TE target modules.
      
      * MPS fix for LoRA tests.
      
      * Unrelated TE LoRA tests fix.
      
      * Fix AuraFlow LoRA tests by applying to the right denoiser layers.
      Co-authored-by: AstraliteHeart <81396681+AstraliteHeart@users.noreply.github.com>
      
      * Apply style fixes
      
      * empty commit
      
      * Fix the repo consistency issues.
      
      * Remove unrelated changes.
      
      * Style.
      
      * Fix `test_lora_fuse_nan`.
      
      * fix quality issues.
      
      * `pytest.xfail` -> `ValueError`.
      
      * Add back `skip_mps`.
      
      * Apply style fixes
      
      * `make fix-copies`
      
      ---------
      Co-authored-by: Warlord-K <warlordk28@gmail.com>
      Co-authored-by: hlky <hlky@hlky.ac>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: AstraliteHeart <81396681+AstraliteHeart@users.noreply.github.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
  21. 10 Apr, 2025 2 commits
  22. 12 Mar, 2025 1 commit
  23. 10 Mar, 2025 2 commits
  24. 04 Mar, 2025 2 commits
    • [LoRA] Support Wan (#10943) · 3ee899fa
      Aryan authored
      * update
      
      * refactor image-to-video pipeline
      
      * update
      
      * fix copied from
      
      * use FP32LayerNorm
    • [tests] make tests device-agnostic (part 4) (#10508) · 7855ac59
      Fanli Lin authored
      
      
      * initial commit
      
      * fix empty cache
      
      * fix one more
      
      * fix style
      
      * update device functions
      
      * update
      
      * update
      
      * Update src/diffusers/utils/testing_utils.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update src/diffusers/utils/testing_utils.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update src/diffusers/utils/testing_utils.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update tests/pipelines/controlnet/test_controlnet.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update src/diffusers/utils/testing_utils.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update src/diffusers/utils/testing_utils.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update tests/pipelines/controlnet/test_controlnet.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * with gc.collect
      
      * update
      
      * make style
      
      * check_torch_dependencies
      
      * add mps empty cache
      
      * add changes
      
      * bug fix
      
      * enable on xpu
      
      * update more cases
      
      * revert
      
      * revert back
      
      * Update test_stable_diffusion_xl.py
      
      * Update tests/pipelines/stable_diffusion/test_stable_diffusion.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update tests/pipelines/stable_diffusion/test_stable_diffusion.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Apply suggestions from code review
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * add test marker
      
      ---------
      Co-authored-by: hlky <hlky@hlky.ac>
  25. 26 Feb, 2025 1 commit
  26. 20 Feb, 2025 1 commit
  27. 19 Feb, 2025 1 commit
    • [LoRA] make `set_adapters()` robust on silent failures. (#9618) · 6fe05b9b
      Sayak Paul authored
      * make set_adapters() robust on silent failures.
      
      * fixes to tests
      
      * flaky decorator.
      
      * fix
      
      * flaky to sd3.
      
      * remove warning.
      
      * sort
      
      * quality
      
      * skip test_simple_inference_with_text_denoiser_multi_adapter_block_lora
      
      * skip testing unsupported features.
      
      * raise warning instead of error.
  28. 13 Feb, 2025 1 commit
    • Disable PEFT input autocast when using fp8 layerwise casting (#10685) · a0c22997
      Aryan authored
      * disable peft input autocast
      
      * use new peft method name; only disable peft input autocast if submodule layerwise casting active
      
      * add test; reference PeftInputAutocastDisableHook in peft docs
      
      * add load_lora_weights test
      
      * casted -> cast
      
      * Update tests/lora/utils.py
  29. 22 Jan, 2025 1 commit
    • [core] Layerwise Upcasting (#10347) · beacaa55
      Aryan authored
      
      
      * update
      
      * update
      
      * make style
      
      * remove dynamo disable
      
      * add coauthor
      Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>
      
      * update
      
      * update
      
      * update
      
      * update mixin
      
      * add some basic tests
      
      * update
      
      * update
      
      * non_blocking
      
      * improvements
      
      * update
      
      * norm.* -> norm
      
      * apply suggestions from review
      
      * add example
      
      * update hook implementation to the latest changes from pyramid attention broadcast
      
      * deinitialize should raise an error
      
      * update doc page
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * update docs
      
      * update
      
      * refactor
      
      * fix _always_upcast_modules for asym ae and vq_model
      
      * fix lumina embedding forward to not depend on weight dtype
      
      * refactor tests
      
      * add simple lora inference tests
      
      * _always_upcast_modules -> _precision_sensitive_module_patterns
      
      * remove todo comments about review; revert changes to self.dtype in unets because .dtype on ModelMixin should be able to handle fp8 weight case
      
      * check layer dtypes in lora test
      
      * fix UNet1DModelTests::test_layerwise_upcasting_inference
      
      * _precision_sensitive_module_patterns -> _skip_layerwise_casting_patterns based on feedback
      
      * skip test in NCSNppModelTests
      
      * skip tests for AutoencoderTinyTests
      
      * skip tests for AutoencoderOobleckTests
      
      * skip tests for UNet1DModelTests - unsupported pytorch operations
      
      * layerwise_upcasting -> layerwise_casting
      
      * skip tests for UNetRLModelTests; needs next pytorch release for currently unimplemented operation support
      
      * add layerwise fp8 pipeline test
      
      * use xfail
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * add assertion with fp32 comparison; add tolerance to fp8-fp32 vs fp32-fp32 comparison (required for a few models' test to pass)
      
      * add note about memory consumption on tesla CI runner for failing test
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
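      Several of the commit messages above revolve around `_skip_layerwise_casting_patterns`: precision-sensitive submodules (norms, embeddings) are matched by name pattern and kept in the compute dtype while everything else is stored in fp8. The sketch below models only that pattern-matching step in plain Python; the module names, pattern list, and matching rule are illustrative assumptions, not the exact diffusers logic.

      ```python
      import re

      # Hypothetical skip patterns: modules whose names match stay in the
      # high-precision compute dtype instead of the fp8 storage dtype.
      SKIP_PATTERNS = ["norm", "patch_embed"]

      def plan_casting(module_names, storage_dtype="float8", compute_dtype="float32"):
          """Return a per-module dtype plan for layerwise casting."""
          plan = {}
          for name in module_names:
              if any(re.search(p, name) for p in SKIP_PATTERNS):
                  plan[name] = compute_dtype  # precision-sensitive: skip casting
              else:
                  plan[name] = storage_dtype
          return plan

      names = ["blocks.0.attn.to_q", "blocks.0.norm1", "patch_embed.proj"]
      print(plan_casting(names))
      ```

      The rename history in the commits ("norm.* -> norm", `_always_upcast_modules` -> `_skip_layerwise_casting_patterns`) suggests the patterns are treated as substring-style regexes against dotted module paths, which is what the toy matching rule assumes.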
  30. 10 Jan, 2025 2 commits
  31. 07 Jan, 2025 1 commit
  32. 06 Jan, 2025 2 commits