1. 25 Nov, 2025 1 commit
    • let's go Flux2 🚀 (#12711) · 5ffb73d4
      Sayak Paul authored
      
      
      * add vae
      
      * Initial commit for Flux 2 Transformer implementation
      
      * add pipeline part
      
      * small edits to the pipeline and conversion
      
      * update conversion script
      
      * fix
      
      * up up
      
      * finish pipeline
      
      * Remove Flux IP Adapter logic for now
      
      * Remove deprecated 3D id logic
      
      * Remove ControlNet logic for now
      
      * Add link to ViT-22B paper as reference for parallel transformer blocks such as the Flux 2 single stream block
      
      * update pipeline
      
      * Don't use biases for input projs and output AdaNorm
      
      * up
      
      * Remove bias for double stream block text QKV projections
      
      * Add script to convert Flux 2 transformer to diffusers
      
      * make style and make quality
      
      * fix a few things.
      
      * allow sft files to go.
      
      * fix image processor
      
      * fix batch
      
      * style a bit
      
      * Fix some bugs in Flux 2 transformer implementation
      
      * Fix dummy input preparation and fix some test bugs
      
      * fix dtype casting in timestep guidance module.
      
      * resolve conflicts.
      
      * remove ip adapter stuff.
      
      * Fix Flux 2 transformer consistency test
      
      * Fix bug in Flux2TransformerBlock (double stream block)
      
      * Get remaining Flux 2 transformer tests passing
      
      * make style; make quality; make fix-copies
      
      * remove stuff.
      
      * fix type annotation.
      
      * remove unneeded stuff from tests
      
      * tests
      
      * up
      
      * up
      
      * add sf support
      
      * Remove unused IP Adapter and ControlNet logic from transformer (#9)
      
      * copied from
      
      * Apply suggestions from code review
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
      
      * up
      
      * up
      
      * up
      
      * up
      
      * up
      
      * Refactor Flux2Attention into separate classes for double stream and single stream attention
      
      * Add _supports_qkv_fusion to AttentionModuleMixin to allow subclasses to disable QKV fusion (see the sketch after this entry)
      
      * Have Flux2ParallelSelfAttention inherit from AttentionModuleMixin with _supports_qkv_fusion=False
      
      * Log a debug message when calling fuse_projections on an AttentionModuleMixin subclass that does not support QKV fusion
      
      * Address review comments
      
      * Update src/diffusers/pipelines/flux2/pipeline_flux2.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * up
      
      * Remove maybe_allow_in_graph decorators for Flux 2 transformer blocks (#12)
      
      * up
      
      * support ostris loras. (#13)
      
      * up
      
      * update schedule
      
      * up
      
      * up (#17)
      
      * add training scripts (#16)
      
      * add training scripts
      Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
      
      * model cpu offload in validation.
      
      * add flux.2 readme
      
      * add img2img and tests
      
      * cpu offload in log validation
      
      * Apply suggestions from code review
      
      * fix
      
      * up
      
      * fixes
      
      * remove i2i training tests for now.
      
      ---------
      Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
      Co-authored-by: linoytsaban <linoy@huggingface.co>
      
      * up
      
      ---------
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
      Co-authored-by: Daniel Gu <dgu8957@gmail.com>
      Co-authored-by: yiyi@huggingface.co <yiyi@ip-10-53-87-203.ec2.internal>
      Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
      Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
      Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
      Co-authored-by: linoytsaban <linoy@huggingface.co>
      5ffb73d4
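The `_supports_qkv_fusion` change above is the most API-facing piece of this entry. Below is a minimal, standalone sketch of that opt-out mechanism, assuming only what the commit messages state; the `*Sketch` class names are illustrative stand-ins, not the actual diffusers classes.

```python
import logging

logger = logging.getLogger(__name__)


class AttentionModuleMixinSketch:
    """Standalone stand-in for the mixin behaviour described in the entry above."""

    # Class-level opt-out flag added by this PR: subclasses whose projections
    # cannot be fused set this to False.
    _supports_qkv_fusion = True

    def fuse_projections(self):
        if not self._supports_qkv_fusion:
            # Per the entry above, unsupported subclasses log a debug message
            # instead of fusing.
            logger.debug("%s does not support QKV fusion; skipping.", type(self).__name__)
            return
        # ... fuse the separate to_q / to_k / to_v weights into one projection here ...


class Flux2ParallelSelfAttentionSketch(AttentionModuleMixinSketch):
    # Mirrors the commit: the parallel self-attention block opts out of QKV fusion.
    _supports_qkv_fusion = False


Flux2ParallelSelfAttentionSketch().fuse_projections()  # logs a debug message and returns
```

The class-level flag keeps `fuse_projections()` safe to call generically over all attention modules: variants that cannot fuse simply log and return.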
  2. 10 Sep, 2025 1 commit
  3. 13 Aug, 2025 1 commit
  4. 12 Aug, 2025 1 commit
  5. 08 Aug, 2025 1 commit
  6. 15 Jul, 2025 1 commit
  7. 11 Jul, 2025 1 commit
  8. 02 Jul, 2025 1 commit
  9. 01 Jul, 2025 1 commit
  10. 19 Jun, 2025 1 commit
  11. 14 Jun, 2025 1 commit
    • Chroma Pipeline (#11698) · 8adc6003
      Edna authored
      
      
      * working state from hameerabbasi and iddl
      
      * working state from hameerabbasi and iddl (transformer)
      
      * working state (normalization)
      
      * working state (embeddings)
      
      * add chroma loader
      
      * add chroma to mappings
      
      * add chroma to transformer init
      
      * take out variant stuff
      
      * get decently far in changing variant stuff
      
      * add chroma init
      
      * make chroma output class
      
      * add chroma transformer to dummy pt
      
      * add chroma to init
      
      * add chroma to init
      
      * fix single file
      
      * update
      
      * update
      
      * add chroma to auto pipeline
      
      * add chroma to pipeline init
      
      * change to chroma transformer
      
      * take out variant from blocks
      
      * swap embedder location
      
      * remove prompt_2
      
      * work on swapping text encoders
      
      * remove mask function
      
      * don't modify mask (for now)
      
      * wrap attn mask
      
      * no attn mask (can't get it to work)
      
      * remove pooled prompt embeds
      
      * change to my own unpooled embedder
      
      * fix load
      
      * take pooled projections out of transformer
      
      * ensure correct dtype for chroma embeddings
      
      * update
      
      * use dn6 attn mask + fix true_cfg_scale
      
      * use chroma pipeline output
      
      * use DN6 embeddings
      
      * remove guidance
      
      * remove guidance embed (pipeline)
      
      * remove guidance from embeddings
      
      * don't return length
      
      * don't change dtype
      
      * remove unused stuff, fix up docs
      
      * add chroma autodoc
      
      * add .md (oops)
      
      * initial chroma docs
      
      * undo don't change dtype
      
      * undo arxiv change
      
      unsure why that happened
      
      * fix hf papers regression in more places
      
      * Update docs/source/en/api/pipelines/chroma.md
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * do_cfg -> self.do_classifier_free_guidance
      
      * Update docs/source/en/api/models/chroma_transformer.md
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Update chroma.md
      
      * Move chroma layers into transformer
      
      * Remove pruned AdaLayerNorms
      
      * Add chroma fast tests
      
      * (untested) batch cond and uncond
      
      * Add # Copied from for shift
      
      * Update # Copied from statements
      
      * update norm imports
      
      * Revert cond + uncond batching
      
      * Add transformer tests
      
      * move chroma test (oops)
      
      * chroma init
      
      * fix chroma pipeline fast tests
      
      * Update src/diffusers/models/transformers/transformer_chroma.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Move Approximator and Embeddings
      
      * Fix auto pipeline + make style, quality
      
      * make style
      
      * Apply style fixes
      
      * switch to new input ids
      
      * fix # Copied from error
      
      * remove # Copied from on protected members
      
      * try to fix import
      
      * fix import
      
      * make fix-copies
      
      * revert style fix
      
      * update chroma transformer params
      
      * update chroma transformer approximator init params
      
      * update to pad tokens
      
      * fix batch inference
      
      * Make more pipeline tests work
      
      * Make most transformer tests work
      
      * fix docs
      
      * make style, make quality
      
      * skip batch tests
      
      * fix test skipping
      
      * fix test skipping again
      
      * fix for tests
      
      * Fix all pipeline tests
      
      * update
      
      * push local changes, fix docs
      
      * add encoder test, remove pooled dim
      
      * default proj dim
      
      * fix tests
      
      * fix equal size list input
      
      * update
      
      * push local changes, fix docs
      
      * add encoder test, remove pooled dim
      
      * default proj dim
      
      * fix tests
      
      * fix equal size list input
      
      * Revert "fix equal size list input"
      
      This reverts commit 3fe4ad67d58d83715bc238f8654f5e90bfc5653c.
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
      8adc6003
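For orientation, a hedged usage sketch of the pipeline this entry adds. The repository id, prompt, and call arguments are assumptions for illustration and are not taken from the commit; the negative prompt reflects the true-CFG path the entry wires up (no pooled embeddings, `true_cfg_scale`).

```python
# A hedged usage sketch; repo id and call arguments are assumed, not from the PR.
import torch
from diffusers import ChromaPipeline

pipe = ChromaPipeline.from_pretrained("lodestones/Chroma", torch_dtype=torch.bfloat16)  # assumed repo id
pipe.to("cuda")

image = pipe(
    prompt="a watercolor painting of a lighthouse at dusk",
    negative_prompt="blurry, low quality",  # true-CFG path described in the entry above
    num_inference_steps=30,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
image.save("chroma_sample.png")
```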
  12. 15 May, 2025 1 commit
  13. 13 May, 2025 1 commit
  14. 16 Apr, 2025 1 commit
  15. 26 Mar, 2025 1 commit
  16. 21 Mar, 2025 1 commit
  17. 10 Mar, 2025 1 commit
  18. 07 Mar, 2025 1 commit
  19. 24 Feb, 2025 1 commit
  20. 19 Feb, 2025 2 commits
  21. 12 Feb, 2025 1 commit
  22. 16 Jan, 2025 1 commit
  23. 10 Jan, 2025 1 commit
    • Add a `disable_mmap` option to the `from_single_file` loader to improve load performance on network mounts (#10305) · 52c05bd4
      Daniel Hipke authored
      
      
      * Add no_mmap arg.
      
      * Fix arg parsing.
      
      * Update another method to force no mmap.
      
      * logging
      
      * logging2
      
      * propagate no_mmap
      
      * logging3
      
      * propagate no_mmap
      
      * logging4
      
      * fix open call
      
      * clean up logging
      
      * cleanup
      
      * fix missing arg
      
      * update logging and comments
      
      * Rename to disable_mmap and update other references (a usage sketch follows this entry).
      
      * [Docs] Update ltx_video.md to remove generator from `from_pretrained()` (#10316)
      
      Update ltx_video.md to remove generator from `from_pretrained()`
      
      * docs: fix a mistake in docstring (#10319)
      
      Update pipeline_hunyuan_video.py
      
      docs: fix a mistake
      
      * [BUG FIX] [Stable Audio Pipeline] Resolve torch.Tensor.new_zeros() TypeError in function prepare_latents caused by audio_vae_length (#10306)
      
      [BUG FIX] [Stable Audio Pipeline] TypeError: new_zeros(): argument 'size' failed to unpack the object at pos 3 with error "type must be tuple of ints,but got float"
      
      torch.Tensor.new_zeros() takes a single argument size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor.
      
      in function prepare_latents:
      audio_vae_length = self.transformer.config.sample_size * self.vae.hop_length
      audio_shape = (batch_size // num_waveforms_per_prompt, audio_channels, audio_vae_length)
      ...
      audio = initial_audio_waveforms.new_zeros(audio_shape)
      
      audio_vae_length evaluates to float because self.transformer.config.sample_size returns a float
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * [docs] Fix quantization links (#10323)
      
      Update overview.md
      
      * [Sana]add 2K related model for Sana (#10322)
      
      add 2K related model for Sana
      
      * Update src/diffusers/loaders/single_file_model.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Update src/diffusers/loaders/single_file.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * make style
      
      ---------
      Co-authored-by: hlky <hlky@hlky.ac>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Leojc <liao_junchao@outlook.com>
      Co-authored-by: Aditya Raj <syntaxticsugr@gmail.com>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      Co-authored-by: Junsong Chen <cjs1020440147@icloud.com>
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      52c05bd4
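A hedged usage sketch of the option this entry adds, assuming a single-file checkpoint on a network mount; the pipeline class and file path below are illustrative, and only the `disable_mmap=True` keyword comes from this entry.

```python
# Hedged sketch: only disable_mmap is the new option from this entry;
# the path and pipeline choice are assumptions.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "/mnt/nfs/checkpoints/sdxl.safetensors",  # hypothetical network-mounted file
    torch_dtype=torch.float16,
    disable_mmap=True,  # read the file up front instead of memory-mapping it over the mount
)
```

The intent, per the title, is better load performance on network mounts, where memory-mapped reads tend to be slow.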
  24. 08 Jan, 2025 1 commit
  25. 23 Dec, 2024 1 commit
  26. 19 Dec, 2024 1 commit
  27. 17 Dec, 2024 1 commit
  28. 12 Dec, 2024 1 commit
    • [core] LTX Video (#10021) · 96c376a5
      Aryan authored
      
      
      * transformer
      
      * make style & make fix-copies
      
      * transformer
      
      * add transformer tests
      
      * 80% vae
      
      * make style
      
      * make fix-copies
      
      * fix
      
      * undo cogvideox changes
      
      * update
      
      * update
      
      * match vae
      
      * add docs
      
      * t2v pipeline working; scheduler needs to be checked
      
      * docs
      
      * add pipeline test
      
      * update
      
      * update
      
      * make fix-copies
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * update
      
      * copy t2v to i2v pipeline
      
      * update
      
      * apply review suggestions
      
      * update
      
      * make style
      
      * remove framewise encoding/decoding
      
      * pack/unpack latents
      
      * image2video
      
      * update
      
      * make fix-copies
      
      * update
      
      * update
      
      * rope scale fix
      
      * debug layerwise code
      
      * remove debug
      
      * Apply suggestions from code review
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * propagate precision changes to i2v pipeline
      
      * remove downcast
      
      * address review comments
      
      * fix comment
      
      * address review comments
      
      * [Single File] LTX support for loading original weights (#10135)
      
      * from original file mixin for ltx
      
      * undo config mapping fn changes
      
      * update
      
      * add single file to pipelines
      
      * update docs
      
      * Update src/diffusers/models/autoencoders/autoencoder_kl_ltx.py
      
      * Update src/diffusers/models/autoencoders/autoencoder_kl_ltx.py
      
      * rename classes based on ltx review
      
      * point to original repository for inference
      
      * make style
      
      * resolve conflicts correctly
      
      ---------
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      96c376a5
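A hedged usage sketch of the text-to-video pipeline this entry introduces; the repository id and generation arguments are illustrative assumptions, not taken from the commit.

```python
# Hedged sketch: repo id and generation arguments are assumed for illustration.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)  # assumed repo id
pipe.to("cuda")

frames = pipe(
    prompt="a red fox trotting through fresh snow, cinematic lighting",
    num_frames=49,
    num_inference_steps=30,
).frames[0]
export_to_video(frames, "ltx_sample.mp4", fps=24)
```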
  29. 11 Dec, 2024 1 commit
  30. 10 Dec, 2024 1 commit
  31. 02 Dec, 2024 1 commit
  32. 07 Aug, 2024 2 commits
  33. 18 Jul, 2024 1 commit
  34. 12 Jul, 2024 1 commit
  35. 28 Jun, 2024 1 commit
  36. 18 Jun, 2024 1 commit
  37. 12 Jun, 2024 1 commit
  38. 24 May, 2024 1 commit