1. 08 Jul, 2025 1 commit
    • First Block Cache (#11180) · 0454fbb3
      Aryan authored
      
      
      * update
      
      * modify flux single blocks to make compatible with cache techniques (without too much model-specific intrusion code)
      
      * remove debug logs
      
      * update
      
      * cache context for different batches of data
      
      * fix hs residual bug for single return outputs; support ltx
      
      * fix controlnet flux
      
      * support flux, ltx i2v, ltx condition
      
      * update
      
      * update
      
      * Update docs/source/en/api/cache.md
      
      * Update src/diffusers/hooks/hooks.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * address review comments pt. 1
      
      * address review comments pt. 2
      
      * cache context refactor; address review pt. 3
      
      * address review comments
      
      * metadata registration with decorators instead of centralized
      
      * support cogvideox
      
      * support mochi
      
      * fix
      
      * remove unused function
      
      * remove central registry based on review
      
      * update
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
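The commits above describe caching keyed on the first transformer block: if that block's output barely changes between denoising steps, the remaining blocks are skipped and a cached residual is reused. A toy numpy sketch of the idea follows — names, the threshold, and the block callables are illustrative only, not the diffusers API:

```python
import numpy as np

class FirstBlockCache:
    """Skip the heavy tail of a block stack when the first block's output
    barely changes between steps, reusing the cached tail residual."""

    def __init__(self, threshold=0.05):
        self.threshold = threshold
        self.prev_first = None      # first-block output from the previous step
        self.tail_residual = None   # cached (final_output - first_block_output)

    def __call__(self, x, first_block, remaining_blocks):
        h = first_block(x)
        if self.prev_first is not None:
            rel_change = np.abs(h - self.prev_first).mean() / (
                np.abs(self.prev_first).mean() + 1e-8
            )
            if rel_change < self.threshold:
                self.prev_first = h
                return h + self.tail_residual   # cache hit: skip remaining blocks
        out = h
        for block in remaining_blocks:          # cache miss: run the full stack
            out = block(out)
        self.prev_first = h
        self.tail_residual = out - h
        return out
```

The "cache context for different batches of data" commit addresses a detail this sketch glosses over: conditional and unconditional batches need separate cached state, or one overwrites the other.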
  2. 19 Jun, 2025 1 commit
  3. 14 Feb, 2025 1 commit
    • Module Group Offloading (#10503) · 9a147b82
      Aryan authored
      
      
      * update
      
      * fix
      
      * non_blocking; handle parameters and buffers
      
      * update
      
      * Group offloading with cuda stream prefetching (#10516)
      
      * cuda stream prefetch
      
      * remove breakpoints
      
      * update
      
      * copy model hook implementation from pab
      
      * update; very workaround-based implementation, but it seems to work as expected; needs cleanup and rewrite
      
      * more workarounds to make it actually work
      
      * cleanup
      
      * rewrite
      
      * update
      
      * make sure to sync current stream before overwriting with pinned params
      
      not doing so will lead to erroneous computations on the GPU and cause bad results
      
      * better check
      
      * update
      
      * remove hook implementation to not deal with merge conflict
      
      * re-add hook changes
      
      * why use more memory when less memory do trick
      
      * why still use slightly more memory when less memory do trick
      
      * optimise
      
      * add model tests
      
      * add pipeline tests
      
      * update docs
      
      * add layernorm and groupnorm
      
      * address review comments
      
      * improve tests; add docs
      
      * improve docs
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * apply suggestions from code review
      
      * update tests
      
      * apply suggestions from review
      
      * enable_group_offloading -> enable_group_offload for naming consistency
      
      * raise errors if multiple offloading strategies used; add relevant tests
      
      * handle .to() when group offload applied
      
      * refactor some repeated code
      
      * remove unintentional change from merge conflict
      
      * handle .cuda()
      
      ---------
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
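The group-offloading commits above move groups of layers onto the accelerator just before they run and offload them right after, so the resident footprint is one group rather than the whole model; the CUDA-stream commits hide the transfer cost by prefetching the next group during compute. A device-tracking simulation of the bookkeeping (pure Python, no CUDA; `Layer` and the device strings are made up for illustration):

```python
class Layer:
    def __init__(self, name):
        self.name = name
        self.device = "cpu"            # weights start offloaded

    def forward(self, x):
        assert self.device == "gpu", f"{self.name} is offloaded"
        return x + 1


def run_with_group_offload(x, layers, group_size=2):
    """Onload one group of layers at a time and offload it after use. The
    real implementation overlaps the next group's host-to-device copy on a
    separate CUDA stream and syncs before reusing pinned buffers (see the
    'sync current stream before overwriting with pinned params' commit)."""
    groups = [layers[i:i + group_size] for i in range(0, len(layers), group_size)]
    peak = 0
    for group in groups:
        for layer in group:
            layer.device = "gpu"       # onload current group
        peak = max(peak, sum(l.device == "gpu" for l in layers))
        for layer in group:
            x = layer.forward(x)
        for layer in group:
            layer.device = "cpu"       # offload once the group is done
    return x, peak
```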
  4. 22 Jan, 2025 1 commit
    • [core] Layerwise Upcasting (#10347) · beacaa55
      Aryan authored
      
      
      * update
      
      * update
      
      * make style
      
      * remove dynamo disable
      
      * add coauthor
      Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>
      
      * update
      
      * update
      
      * update
      
      * update mixin
      
      * add some basic tests
      
      * update
      
      * update
      
      * non_blocking
      
      * improvements
      
      * update
      
      * norm.* -> norm
      
      * apply suggestions from review
      
      * add example
      
      * update hook implementation to the latest changes from pyramid attention broadcast
      
      * deinitialize should raise an error
      
      * update doc page
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * update docs
      
      * update
      
      * refactor
      
      * fix _always_upcast_modules for asym ae and vq_model
      
      * fix lumina embedding forward to not depend on weight dtype
      
      * refactor tests
      
      * add simple lora inference tests
      
      * _always_upcast_modules -> _precision_sensitive_module_patterns
      
      * remove todo comments about review; revert changes to self.dtype in unets because .dtype on ModelMixin should be able to handle fp8 weight case
      
      * check layer dtypes in lora test
      
      * fix UNet1DModelTests::test_layerwise_upcasting_inference
      
      * _precision_sensitive_module_patterns -> _skip_layerwise_casting_patterns based on feedback
      
      * skip test in NCSNppModelTests
      
      * skip tests for AutoencoderTinyTests
      
      * skip tests for AutoencoderOobleckTests
      
      * skip tests for UNet1DModelTests - unsupported pytorch operations
      
      * layerwise_upcasting -> layerwise_casting
      
      * skip tests for UNetRLModelTests; needs next pytorch release for currently unimplemented operation support
      
      * add layerwise fp8 pipeline test
      
      * use xfail
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * add assertion with fp32 comparison; add tolerance to fp8-fp32 vs fp32-fp32 comparison (required for a few models' test to pass)
      
      * add note about memory consumption on tesla CI runner for failing test
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
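Layerwise casting, as the commits above describe it, keeps weights stored in a low-precision dtype and upcasts a transient copy to the compute dtype only for the duration of each layer's forward pass (with pattern-based skips for precision-sensitive modules like norms). A minimal numpy sketch of that storage/compute split — the real feature uses torch fp8 storage dtypes, which numpy lacks, so fp16 stands in here:

```python
import numpy as np

class CastedLinear:
    """Store weights in a low-precision dtype; a pre-forward hook upcasts a
    transient copy to the compute dtype, so only the activations plus one
    layer's upcast weights are ever held at full precision."""

    def __init__(self, weight, storage_dtype=np.float16, compute_dtype=np.float32):
        self.compute_dtype = compute_dtype
        self.weight = weight.astype(storage_dtype)   # low precision at rest

    def forward(self, x):
        w = self.weight.astype(self.compute_dtype)   # upcast just for the matmul
        return x.astype(self.compute_dtype) @ w      # stored weight stays fp16
```

The `_skip_layerwise_casting_patterns` renames in the commit list reflect the pattern mechanism that exempts layers whose outputs degrade badly under low-precision storage.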
  5. 23 Dec, 2024 1 commit
    • [core] LTX Video 0.9.1 (#10330) · 4b557132
      Aryan authored
      * update
      
      * make style
      
      * update
      
      * update
      
      * update
      
      * make style
      
      * single file related changes
      
      * update
      
      * fix
      
      * update single file urls and docs
      
      * update
      
      * fix
  6. 12 Dec, 2024 1 commit
    • [core] LTX Video (#10021) · 96c376a5
      Aryan authored
      
      
      * transformer
      
      * make style & make fix-copies
      
      * transformer
      
      * add transformer tests
      
      * 80% vae
      
      * make style
      
      * make fix-copies
      
      * fix
      
      * undo cogvideox changes
      
      * update
      
      * update
      
      * match vae
      
      * add docs
      
      * t2v pipeline working; scheduler needs to be checked
      
      * docs
      
      * add pipeline test
      
      * update
      
      * update
      
      * make fix-copies
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * update
      
      * copy t2v to i2v pipeline
      
      * update
      
      * apply review suggestions
      
      * update
      
      * make style
      
      * remove framewise encoding/decoding
      
      * pack/unpack latents
      
      * image2video
      
      * update
      
      * make fix-copies
      
      * update
      
      * update
      
      * rope scale fix
      
      * debug layerwise code
      
      * remove debug
      
      * Apply suggestions from code review
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * propagate precision changes to i2v pipeline
      
      * remove downcast
      
      * address review comments
      
      * fix comment
      
      * address review comments
      
      * [Single File] LTX support for loading original weights (#10135)
      
      * from original file mixin for ltx
      
      * undo config mapping fn changes
      
      * update
      
      * add single file to pipelines
      
      * update docs
      
      * Update src/diffusers/models/autoencoders/autoencoder_kl_ltx.py
      
      * Update src/diffusers/models/autoencoders/autoencoder_kl_ltx.py
      
      * rename classes based on ltx review
      
      * point to original repository for inference
      
      * make style
      
      * resolve conflicts correctly
      
      ---------
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
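The "pack/unpack latents" commit in the LTX Video list refers to patchifying video latents: folding small spatial patches into the channel dimension so the `(B, C, F, H, W)` latent grid becomes a `(B, tokens, dim)` sequence the transformer can attend over, then inverting the rearrangement before decoding. A round-trippable numpy sketch (shapes and the patch size are illustrative; the actual LTX layout may differ):

```python
import numpy as np

def pack_latents(latents, patch=2):
    """(B, C, F, H, W) -> (B, F * H/p * W/p, C * p * p): fold p x p spatial
    tiles into the channel dim to form a token sequence."""
    b, c, f, h, w = latents.shape
    x = latents.reshape(b, c, f, h // patch, patch, w // patch, patch)
    x = x.transpose(0, 2, 3, 5, 1, 4, 6)            # B, F, H/p, W/p, C, p, p
    return x.reshape(b, f * (h // patch) * (w // patch), c * patch * patch)

def unpack_latents(tokens, f, h, w, patch=2):
    """Invert pack_latents given the original frame/height/width."""
    b, _, dim = tokens.shape
    c = dim // (patch * patch)
    x = tokens.reshape(b, f, h // patch, w // patch, c, patch, patch)
    x = x.transpose(0, 4, 1, 2, 5, 3, 6)            # B, C, F, H/p, p, W/p, p
    return x.reshape(b, c, f, h, w)
```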
  7. 05 Nov, 2024 1 commit
    • [core] Mochi T2V (#9769) · 3f329a42
      Aryan authored
      
      
      * update
      
      * update
      
      * update transformer
      
      * make style
      
      * fix
      
      * add conversion script
      
      * update
      
      * fix
      
      * update
      
      * fix
      
      * update
      
      * fixes
      
      * make style
      
      * update
      
      * update
      
      * update
      
      * init
      
      * update
      
      * update
      
      * add
      
      * up
      
      * up
      
      * up
      
      * update
      
      * mochi transformer
      
      * remove original implementation
      
      * make style
      
      * update inits
      
      * update conversion script
      
      * docs
      
      * Update src/diffusers/pipelines/mochi/pipeline_mochi.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Update src/diffusers/pipelines/mochi/pipeline_mochi.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * fix docs
      
      * pipeline fixes
      
      * make style
      
      * invert sigmas in scheduler; fix pipeline
      
      * fix pipeline num_frames
      
      * flip proj and gate in swiglu
      
      * make style
      
      * fix
      
      * make style
      
      * fix tests
      
      * latent mean and std fix
      
      * update
      
      * cherry-pick 1069d210e1b9e84a366cdc7a13965626ea258178
      
      * remove additional sigma already handled by flow match scheduler
      
      * fix
      
      * remove hardcoded value
      
      * replace conv1x1 with linear
      
      * Update src/diffusers/pipelines/mochi/pipeline_mochi.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * framewise decoding and conv_cache
      
      * make style
      
      * Apply suggestions from code review
      
      * mochi vae encoder changes
      
      * rebase correctly
      
      * Update scripts/convert_mochi_to_diffusers.py
      
      * fix tests
      
      * fixes
      
      * make style
      
      * update
      
      * make style
      
      * update
      
      * add framewise and tiled encoding
      
      * make style
      
      * make original vae implementation behaviour the default; note: framewise encoding does not work
      
      * remove framewise encoding implementation due to presence of attn layers
      
      * fight test 1
      
      * fight test 2
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
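The "framewise decoding and conv_cache" commit in the Mochi list refers to decoding a long video in frame chunks while carrying causal-convolution state across chunk boundaries, so chunked decoding is bit-identical to decoding everything at once. A 1-D numpy sketch of the cache mechanism (the real VAE uses causal 3D convolutions; `causal_conv_frames` is a made-up name):

```python
import numpy as np

def causal_conv_frames(frames, kernel, cache=None):
    """Causal convolution over the frame axis. `cache` carries the trailing
    (len(kernel) - 1) frames of the previous chunk, so chunked decoding
    matches full decoding exactly."""
    k = len(kernel)
    if cache is None:
        cache = np.zeros(k - 1)                 # zero-pad before the first frame
    padded = np.concatenate([cache, frames])
    out = np.array([padded[i:i + k] @ kernel for i in range(len(frames))])
    return out, padded[-(k - 1):]               # new cache: trailing frames
```

Chunking bounds peak activation memory by the chunk size, which is why the commits distinguish framewise decoding (possible, with this cache) from framewise *encoding* (later removed because the encoder's attention layers need the full frame range at once).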
  8. 29 Oct, 2024 1 commit
  9. 17 Sep, 2024 1 commit
  10. 06 Sep, 2024 1 commit
  11. 23 Aug, 2024 1 commit
  12. 22 Aug, 2024 1 commit
  13. 13 Aug, 2024 1 commit
    • [refactor] CogVideoX followups + tiled decoding support (#9150) · a85b34e7
      Aryan authored
      * refactor context parallel cache; update torch compile time benchmark
      
      * add tiling support
      
      * make style
      
      * remove num_frames % 8 == 0 requirement
      
      * update default num_frames to original value
      
      * add explanations + refactor
      
      * update torch compile example
      
      * update docs
      
      * update
      
      * clean up if-statements
      
      * address review comments
      
      * add test for vae tiling
      
      * update docs
      
      * update docs
      
      * update docstrings
      
      * add modeling test for cogvideox transformer
      
      * make style
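Tiled decoding, added in the CogVideoX commit above, decodes overlapping spatial tiles one at a time and linearly blends the seams, trading a little redundant compute for a much lower peak memory. A 1-D numpy sketch of the tiling and seam blending (function names are illustrative; it assumes the width fits the tile/overlap grid exactly):

```python
import numpy as np

def blend_h(a, b, overlap):
    """Linearly ramp from tile a's right edge into tile b's left edge."""
    w = np.linspace(0.0, 1.0, overlap)
    b[..., :overlap] = a[..., -overlap:] * (1 - w) + b[..., :overlap] * w
    return b

def tiled_apply(x, fn, tile=8, overlap=2):
    """Apply fn to overlapping horizontal tiles and blend the seams, so only
    one tile's activations are live at a time."""
    stride = tile - overlap
    tiles = [fn(x[..., i:i + tile]) for i in range(0, x.shape[-1] - overlap, stride)]
    parts = []
    for i, t in enumerate(tiles):
        t = t.copy()
        if i > 0:
            t = blend_h(tiles[i - 1], t, overlap)      # smooth the seam
        parts.append(t if i == len(tiles) - 1 else t[..., :stride])
    return np.concatenate(parts, axis=-1)
```

With a translation-equivariant `fn`, the blended result matches applying `fn` to the whole input; for a real VAE decoder the seams only approximately agree, which is what the linear ramp papers over.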
  14. 07 Aug, 2024 1 commit
  15. 11 Jul, 2024 1 commit
    • Latte: Latent Diffusion Transformer for Video Generation (#8404) · b8cf84a3
      Xin Ma authored
      
      
      * add Latte to diffusers
      
      * remove print
      
      * remove print
      
      * remove print
      
      * remove unused code
      
      * remove layer_norm_latte and add a flag
      
      * remove layer_norm_latte and add a flag
      
      * update latte_pipeline
      
      * update latte_pipeline
      
      * remove unused squeeze
      
      * add norm_hidden_states.ndim == 2: # for Latte
      
      * fixed test latte pipeline bugs
      
      * fixed test latte pipeline bugs
      
      * delete sh
      
      * add doc for latte
      
      * add licensing
      
      * Move Transformer3DModelOutput to modeling_outputs
      
      * give a default value to sample_size
      
      * remove the einops dependency
      
      * change norm2 for latte
      
      * modify pipeline of latte
      
      * update test for Latte
      
      * modify some codes for latte
      
      * modify for Latte pipeline
      
      * video_length -> num_frames; update prepare_latents copied from
      
      * make fix-copies
      
      * make style
      
      * typo: videe -> video
      
      * update
      
      * modify for Latte pipeline
      
      * modify latte pipeline
      
      * modify latte pipeline
      
      * modify latte pipeline
      
      * modify latte pipeline
      
      * modify for Latte pipeline
      
      * Delete .vscode directory
      
      * make style
      
      * make fix-copies
      
      * add latte transformer 3d to docs _toctree.yml
      
      * update example
      
      * reduce frames for test
      
      * fixed bug of _text_preprocessing
      
      * set num frame to 1 for testing
      
      * remove unused print
      
      * add text = self._clean_caption(text) again
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Aryan <contact.aryanvs@gmail.com>
      Co-authored-by: Aryan <aryan@huggingface.co>
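Latte's defining trick, which the reshaping-heavy commits above dance around, is alternating spatial and temporal transformer blocks over the same video tokens: spatial blocks attend within a frame on `(B*F, H*W, C)` batches, temporal blocks attend across frames on `(B*HW, F, C)` batches. A numpy sketch of the rearrangement between the two views (function names are made up; the real model does this between attention blocks):

```python
import numpy as np

def spatial_to_temporal(tokens, batch, frames):
    """(B*F, HW, C) spatial-view tokens -> (B*HW, F, C) temporal view."""
    bf, hw, c = tokens.shape
    assert bf == batch * frames
    x = tokens.reshape(batch, frames, hw, c)
    x = x.transpose(0, 2, 1, 3)                 # B, HW, F, C
    return x.reshape(batch * hw, frames, c)

def temporal_to_spatial(tokens, batch, hw):
    """Inverse: (B*HW, F, C) temporal-view tokens -> (B*F, HW, C)."""
    bhw, f, c = tokens.shape
    assert bhw == batch * hw
    x = tokens.reshape(batch, hw, f, c).transpose(0, 2, 1, 3)  # B, F, HW, C
    return x.reshape(batch * f, hw, c)
```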