- 28 Oct, 2025 3 commits
-
-
Wang, Yi authored
* fix crash when tiling mode is enabled Signed-off-by:
Wang, Yi A <yi.a.wang@intel.com> * fmt Signed-off-by:
Wang, Yi A <yi.a.wang@intel.com> --------- Signed-off-by:
Wang, Yi A <yi.a.wang@intel.com> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com>
-
G.O.D authored
improve pos embed for Ascend NPU Co-authored-by: felix01.yu <felix01.yu@vipshop.com>
-
Lev Novitskiy authored
* add transformer pipeline first version * updates * fix 5sec generation * rewrite Kandinsky5T2VPipeline to diffusers style * add multiprompt support * remove prints in pipeline * add nabla attention * Wrap Transformer in Diffusers style * fix license * fix prompt type * add gradient checkpointing and peft support * add usage example * apply review suggestions to pipeline_kandinsky.py and transformer_kandinsky.py Co-authored-by:
Álvaro Somoza <asomoza@users.noreply.github.com> * remove unused imports * add 10 second models support * remove no_grad and simplified prompt paddings * moved template to __init__ * moved sdpa inside processor * remove one-line function * remove reset_dtype methods * Transformer: move all methods to forward * separated prompt encoding * refactoring according to https://github.com/huggingface/diffusers/commit/acabbc0033d4b4933fc651766a4aa026db2e6dc1 * apply further review suggestions to transformer_kandinsky.py and pipeline_kandinsky.py Co-authored-by:
YiYi Xu <yixu310@gmail.com> Co-authored-by:
Charles <charles@huggingface.co> * fixed * style + copies * more * Apply suggestions from code review * add lora loader doc * add compiled Nabla Attention * all needed changes for 10 sec models are added! * add docs * Apply style fixes * update docs * add kandinsky5 to toctree * add tests * fix tests * Apply style fixes * update tests --------- Co-authored-by:
Álvaro Somoza <asomoza@users.noreply.github.com> Co-authored-by:
YiYi Xu <yixu310@gmail.com> Co-authored-by:
Charles <charles@huggingface.co> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
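A minimal usage sketch for the text-to-video pipeline added above; the import path, checkpoint id, generation arguments, and the `.frames` output attribute are assumptions based on other diffusers video pipelines, not values taken from this commit.

```python
import torch
from diffusers import Kandinsky5T2VPipeline
from diffusers.utils import export_to_video

# Checkpoint id below is a placeholder; generation settings are illustrative only.
pipe = Kandinsky5T2VPipeline.from_pretrained(
    "<kandinsky-5-t2v-checkpoint>", torch_dtype=torch.bfloat16
).to("cuda")

video = pipe(
    prompt="A red fox running through fresh snow, cinematic lighting",
    num_inference_steps=50,
).frames[0]

export_to_video(video, "kandinsky5_t2v.mp4", fps=24)
```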
-
- 27 Oct, 2025 2 commits
-
-
Mikko Lauri authored
* add aiter attention backend * Apply style fixes --------- Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
josephrocca authored
* [Fix] Move attention mask padding after T5 embedding * Clean up whitespace in pipeline_chroma.py Removed unnecessary blank lines for cleaner code. * Fix * Fix * Update model references and links to the final Chroma1-HD checkpoint * Add comment about padding/masking * Fix checkpoint/repo references * Apply style fixes --------- Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com> Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com>
-
- 24 Oct, 2025 1 commit
-
-
YiYi Xu authored
* add hunyuanimage2.1 --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 23 Oct, 2025 1 commit
-
-
Aishwarya Badlani authored
* Fix MPS compatibility in get_1d_sincos_pos_embed_from_grid #12432 * Fix trailing whitespace in docstring * Apply style fixes --------- Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
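A minimal sketch of the kind of change this fix implies, assuming the MPS incompatibility comes from 64-bit float tensors (which the MPS backend does not support); this is not the diffusers implementation, only an illustration of computing the 1D sin-cos table in float32.

```python
import torch

def sincos_1d_pos_embed(embed_dim: int, pos: torch.Tensor) -> torch.Tensor:
    # Keep everything in float32 so the same code runs on CUDA, CPU, and MPS
    # (MPS cannot allocate float64 tensors).
    omega = torch.arange(embed_dim // 2, dtype=torch.float32, device=pos.device)
    omega = 1.0 / (10000.0 ** (omega / (embed_dim / 2)))
    angles = pos.reshape(-1).to(torch.float32)[:, None] * omega[None, :]  # (M, D/2)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=1)       # (M, D)
```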
-
- 22 Oct, 2025 3 commits
-
-
YiYi Xu authored
add
-
Sayak Paul authored
* up * correct wording. * up * up * up
-
David Bertoin authored
* rename photon to prx * rename photon into prx * Revert .gitignore to state before commit b7fb0fe9d63bf766bbe3c42ac154a043796dd370 * make fix-copies
-
- 21 Oct, 2025 1 commit
-
-
David Bertoin authored
* Add Photon model and pipeline support This commit adds support for the Photon image generation model: - PhotonTransformer2DModel: Core transformer architecture - PhotonPipeline: Text-to-image generation pipeline - Attention processor updates for Photon-specific attention mechanism - Conversion script for loading Photon checkpoints - Documentation and tests * just store the T5Gemma encoder * enhance_vae_properties only if vae is provided * remove autocast for text encoder forward * BF16 example * conditioned CFG * remove enhance vae and use vae.config directly when possible * move PhotonAttnProcessor2_0 into transformer_photon * remove einops dependency and now inherits from AttentionMixin * unify the structure of the forward block * update doc * fix T5Gemma loading from hub * fix timestep shift * remove lora support from doc * Rename EmbedND to PhotoEmbedND * remove modulation dataclass * put _attn_forward and _ffn_forward logic in PhotonBlock's forward * rename LastLayer to FinalLayer * remove lora related code * rename vae_spatial_compression_ratio to vae_scale_factor * support prompt_embeds in call * move cross-attention conditioning computation out of the denoising loop * add negative prompts * Use _import_structure for lazy loading * make quality + style * add pipeline test + corresponding fixes * utility function that determines the default resolution given the VAE * Refactor PhotonAttention to match Flux pattern * built-in RMSNorm * Revert accidental .gitignore change * parameter names match the standard diffusers conventions * renaming and removal of unnecessary attribute setting * apply review suggestions to docs/source/en/api/pipelines/photon.md Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * quantization example * added doc to toctree * use dispatch_attention_fn for multiple attention backend support * naming changes * make fix copy * Add PhotonTransformer2DModel to TYPE_CHECKING imports * make fix-copies * Use Tuple instead of tuple * restrict the version of transformers * apply review suggestions to tests/pipelines/photon/test_pipeline_photon.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * change | to Optional * fix nits. * use typing Dict --------- Co-authored-by:
davidb <davidb@worker-10.soperator-worker-svc.soperator.svc.cluster.local> Co-authored-by:
David Briand <david@photoroom.com> Co-authored-by:
davidb <davidb@worker-8.soperator-worker-svc.soperator.svc.cluster.local> Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> Co-authored-by:
sayakpaul <spsayakpaul@gmail.com>
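A usage sketch for the Photon text-to-image pipeline introduced above. The commit mentions a BF16 example; the import path, checkpoint id, and generation arguments here are placeholders/assumptions, not values from the commit.

```python
import torch
from diffusers import PhotonPipeline

# Checkpoint id is a placeholder; prompt and step count are illustrative only.
pipe = PhotonPipeline.from_pretrained(
    "<photon-checkpoint>", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(prompt="A lighthouse on a cliff at dusk", num_inference_steps=28).images[0]
image.save("photon.png")
```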
-
- 20 Oct, 2025 1 commit
-
-
dg845 authored
Refactor QwenEmbedRope to only use the LRU cache for RoPE caching
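A minimal sketch of RoPE frequency caching with an LRU cache, as the refactor above describes; the function name, signature, and default theta are assumptions for illustration, not the QwenEmbedRope code.

```python
from functools import lru_cache

import torch

@lru_cache(maxsize=32)
def rope_freqs(seq_len: int, dim: int, theta: float = 10000.0) -> torch.Tensor:
    # Repeated calls with the same (seq_len, dim, theta) reuse the cached table,
    # so no second, hand-rolled cache needs to be maintained alongside it.
    inv_freq = 1.0 / (theta ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    positions = torch.arange(seq_len, dtype=torch.float32)
    return torch.outer(positions, inv_freq)  # (seq_len, dim // 2)
```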
-
- 18 Oct, 2025 1 commit
-
-
Lev Novitskiy authored
* add kandinsky5 transformer pipeline first version --------- Co-authored-by:
Álvaro Somoza <asomoza@users.noreply.github.com> Co-authored-by:
YiYi Xu <yixu310@gmail.com> Co-authored-by:
Charles <charles@huggingface.co>
-
- 17 Oct, 2025 1 commit
-
-
Ali Imran authored
* cleanup of runway model * quality fixes
-
- 15 Oct, 2025 1 commit
-
-
Sayak Paul authored
-
- 05 Oct, 2025 1 commit
-
-
Vladimir Mandic authored
* wan fix scale_shift_factor being on cpu * apply device cast to ltx transformer * Apply style fixes --------- Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
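A minimal sketch of the device cast described above, under the assumption that the issue comes from a CPU-resident scale/shift table being combined with GPU activations (for example with offloaded modules); the names and shapes are illustrative, not the Wan/LTX transformer code.

```python
import torch

def apply_scale_shift(
    hidden_states: torch.Tensor, scale_shift_table: torch.Tensor, temb: torch.Tensor
) -> torch.Tensor:
    # Move the table onto the activations' device/dtype before using it, so a
    # buffer left on the CPU cannot break a GPU forward pass.
    table = scale_shift_table.to(device=hidden_states.device, dtype=hidden_states.dtype)
    shift, scale = (table + temb).chunk(2, dim=-1)
    return hidden_states * (1 + scale) + shift
```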
-
- 02 Oct, 2025 1 commit
-
-
Sayak Paul authored
conditionally import torch distributed stuff.
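A minimal sketch of the guarded import this commit describes, assuming the goal is that the module stays importable on torch builds without distributed support; the specific symbols guarded here are illustrative.

```python
import torch

if torch.distributed.is_available():
    import torch.distributed as dist
    from torch.distributed.device_mesh import init_device_mesh
else:
    # Keep the module importable on builds without distributed support;
    # callers must check for None before using these.
    dist = None
    init_device_mesh = None
```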
-
- 30 Sep, 2025 1 commit
-
-
Steven Liu authored
* change syntax * make style
-
- 25 Sep, 2025 1 commit
-
-
Lucain authored
* Support huggingface_hub 0.x and 1.x * httpx
-
- 24 Sep, 2025 2 commits
-
-
Aryan authored
* update * update * add coauthor Co-Authored-By:
Dhruv Nair <dhruv.nair@gmail.com> * improve test * handle ip adapter params correctly * fix chroma qkv fusion test * fix fastercache implementation * fix more tests * fight more tests * add back set_attention_backend * update * update * make style * make fix-copies * make ip adapter processor compatible with attention dispatcher * refactor chroma as well * remove rmsnorm assert * minify and deprecate npu/xla processors * update * refactor * refactor; support flash attention 2 with cp * fix * support sage attention with cp * make torch compile compatible * update * refactor * update * refactor * refactor * add ulysses backward * try to make dreambooth script work; accelerator backward not playing well * Revert "try to make dreambooth script work; accelerator backward not playing well" This reverts commit 768d0ea6fa6a305d12df1feda2afae3ec80aa449. * workaround compilation problems with triton when doing all-to-all * support wan * handle backward correctly * support qwen * support ltx * make fix-copies * Update src/diffusers/models/modeling_utils.py Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> * apply review suggestions * update docs * add explanation * make fix-copies * add docstrings * support passing parallel_config to from_pretrained * apply review suggestions * make style * update * Update docs/source/en/api/parallel.md Co-authored-by:
Aryan <aryan@huggingface.co> * up --------- Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> Co-authored-by:
sayakpaul <spsayakpaul@gmail.com>
-
Dhruv Nair authored
* update * update * update
-
- 23 Sep, 2025 1 commit
-
-
Dhruv Nair authored
* update * update
-
- 22 Sep, 2025 2 commits
-
-
SahilCarterr authored
* Fixes enable_xformers_memory_efficient_attention() * Update attention.py
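A usage sketch for the method touched by the fix above; `enable_xformers_memory_efficient_attention()` is the public diffusers API, while the checkpoint id is only an example.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()  # requires the xformers package
image = pipe("An astronaut riding a horse").images[0]
```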
-
Chen Mingyi authored
-
- 17 Sep, 2025 1 commit
-
-
DefTruth authored
* fix hidream type hint * fix hunyuan-video type hint * fix many type hint errors * make style & make quality
-
- 16 Sep, 2025 2 commits
-
-
Zijian Zhou authored
* Update autoencoder_kl_wan.py When using the Wan2.2 VAE, the spatial compression ratio calculated here is incorrect. It should be 16 instead of 8. Pass it in directly via the config to ensure it’s correct here. * Update autoencoder_kl_wan.py
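A minimal sketch of the idea in the fix above: prefer an explicitly configured spatial compression ratio (16 for the Wan2.2 VAE) over one derived from the block layout (which yields 8 here). The config attribute names and the fallback heuristic are assumptions for illustration.

```python
def get_spatial_compression_ratio(config) -> int:
    # Trust an explicit value when the config provides one ...
    explicit = getattr(config, "spatial_compression_ratio", None)
    if explicit is not None:
        return explicit
    # ... and only fall back to deriving it from the (assumed) block layout.
    return 2 ** (len(config.block_out_channels) - 1)
```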
-
Samarth Agrawal authored
* fixed bug in defining embed dim * matched 1d temb process to 2d * Update src/diffusers/models/unets/unet_1d.py Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> --------- Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com>
-
- 03 Sep, 2025 2 commits
-
-
Ju Hoon Park authored
* Add AttentionMixin to WanVACETransformer3DModel to enable methods like `set_attn_processor()`. * Import AttentionMixin in transformer_wan_vace.py Special thanks to @tolgacangoz 🙇‍♂️
-
Sayak Paul authored
* feat: try loading fa3 using kernels when available. * up * change to Hub. * up * up * up * switch env var. * up * up * up * up * up * up
-
- 30 Aug, 2025 1 commit
-
-
Leo Jiang authored
Co-authored-by:
J石页 <jiangshuo9@h-partners.com> Co-authored-by:
Aryan <aryan@huggingface.co>
-
- 26 Aug, 2025 3 commits
-
-
Sayak Paul authored
* start removing flax stuff. * add deprecation warning. * add warning messages. * more warnings. * remove dockerfiles. * remove more. * Update src/diffusers/models/attention_flax.py Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> * up --------- Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com>
-
Tolga Cangöz authored
* fix: update SkyReels-V2 documentation and move it into the attn dispatcher * Refactors SkyReelsV2's attention implementation * style * up * Fixes formatting in SkyReels-V2 documentation Wraps the visual demonstration section in a Markdown code block. This change corrects the rendering of ASCII diagrams and examples, improving the overall readability of the document. * Docs: Condense example arrays in skyreels_v2 guide Improves the readability of the `step_matrix` examples by replacing long sequences of repeated numbers with a more compact `value×count` notation. This change makes the underlying data patterns in the examples easier to understand at a glance. * Add _repeated_blocks attribute to SkyReelsV2Transformer3DModel * Refactor rotary embedding calculations in SkyReelsV2 to separate cosine and sine frequencies * Enhance SkyReels-V2 documentation: update model loading for GPU support and remove outdated notes * up * up * Update model_id in SkyReels-V2 documentation * up * refactor: remove device_map parameter for model loading and add pipeline.to("cuda") for GPU allocation * fix: update copyright year to 2025 in skyreels_v2.md * docs: enhance parameter examples and formatting in skyreels_v2.md * docs: update example formatting and add notes on LoRA support in skyreels_v2.md * refactor: remove copied comments from transformer_wan in SkyReelsV2 classes * Clean up comments in skyreels_v2.md Removed comments about acceleration helpers and Flash Attention installation. * Add deprecation warning for `SkyReelsV2AttnProcessor2_0` class
-
Leo Jiang authored
* NPU attention refactor for FLUX transformer * Apply style fixes --------- Co-authored-by:
J石页 <jiangshuo9@h-partners.com> Co-authored-by:
Aryan <aryan@huggingface.co> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 23 Aug, 2025 1 commit
-
-
Aishwarya Badlani authored
* Fix PyTorch 2.3.1 compatibility: add version guard for torch.library.custom_op - Add hasattr() check for torch.library.custom_op and register_fake - These functions were added in PyTorch 2.4, causing import failures in 2.3.1 - Both decorators and functions are now properly guarded with version checks - Maintains backward compatibility while preserving functionality Fixes #12195 * Use dummy decorators approach for PyTorch version compatibility - Replace hasattr check with version string comparison - Add no-op decorator functions for PyTorch < 2.4.0 - Follows pattern from #11941 as suggested by reviewer - Maintains cleaner code structure without indentation changes * Update src/diffusers/models/attention_dispatch.py Update all the decorator usages Co-authored-by:
Aryan <contact.aryanvs@gmail.com> * Update src/diffusers/models/attention_dispatch.py Co-authored-by:
Aryan <contact.aryanvs@gmail.com> * Update src/diffusers/models/attention_dispatch.py Co-authored-by:
Aryan <contact.aryanvs@gmail.com> * Update src/diffusers/models/attention_dispatch.py Co-authored-by:
Aryan <contact.aryanvs@gmail.com> * Move version check to top of file and use private naming as requested * Apply style fixes --------- Co-authored-by:
Aryan <contact.aryanvs@gmail.com> Co-authored-by:
Aryan <aryan@huggingface.co> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
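A minimal sketch of the pattern this PR describes, a version check at the top of the file plus no-op decorators for older PyTorch; `packaging` is assumed for the comparison, and the fallback signatures are simplified rather than the exact ones used in attention_dispatch.py.

```python
import torch
from packaging import version

_IS_TORCH_2_4_PLUS = version.parse(torch.__version__).release >= (2, 4)

if _IS_TORCH_2_4_PLUS:
    _custom_op = torch.library.custom_op
    _register_fake = torch.library.register_fake
else:
    # torch.library.custom_op / register_fake only exist from PyTorch 2.4, so
    # provide no-op decorators that simply return the wrapped function.
    def _custom_op(name, fn=None, /, **kwargs):
        def wrap(func):
            return func
        return wrap if fn is None else fn

    def _register_fake(op_name, fn=None, /, **kwargs):
        def wrap(func):
            return func
        return wrap if fn is None else fn
```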
-
- 22 Aug, 2025 2 commits
-
-
Sayak Paul authored
-
Frank (Haofan) Wang authored
* support qwen-image-cn-union --------- Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com> Co-authored-by:
YiYi Xu <yixu310@gmail.com>
-
- 20 Aug, 2025 2 commits
-
-
Sayak Paul authored
remove extra validation check in determine_device_map
-
galbria authored
* Add Bria model and pipeline to diffusers - Introduced `BriaTransformer2DModel` and `BriaPipeline` for enhanced image generation capabilities. - Updated import structures across various modules to include the new Bria components. - Added utility functions and output classes specific to the Bria pipeline. - Implemented tests for the Bria pipeline to ensure functionality and output integrity. * with working tests * style and quality pass * adding docs * add to overview * fixes from "make fix-copies" * Refactor transformer_bria.py and pipeline_bria.py: Introduce new EmbedND class for rotary position embedding, and enhance Timestep and TimestepProjEmbeddings classes. Add utility functions for handling negative prompts and generating original sigmas in pipeline_bria.py. * remove redundant and duplicate tests and fix bf16 slow test * style fixes * small doc update * Enhance Bria 3.2 documentation and implementation - Updated the GitHub repository link for Bria 3.2. - Added usage instructions for the gated model access. - Introduced the BriaTransformerBlock and BriaAttention classes to the model architecture. - Refactored existing classes to integrate Bria-specific components, including BriaEmbedND and BriaPipeline. - Updated the pipeline output class to reflect Bria-specific functionality. - Adjusted test cases to align with the new Bria model structure. * Refactor Bria model components and update documentation - Removed outdated inference example from Bria 3.2 documentation. - Introduced the BriaTransformerBlock class to enhance model architecture. - Updated attention handling to use `attention_kwargs` instead of `joint_attention_kwargs`. - Improved import structure in the Bria pipeline to handle optional dependencies. - Adjusted test cases to reflect changes in model dtype assertions. * Update Bria model reference in documentation to reflect new file naming convention * Update docs/source/en/_toctree.yml * Refactor BriaPipeline to inherit from DiffusionPipeline instead of FluxPipeline, updating imports accordingly. * move the __call__ func to the end of file * Update BriaPipeline example to use bfloat16 due to precision sensitivity, for better results * make style && make quality && make fix-copies --------- Co-authored-by:
Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com> Co-authored-by:
Aryan <contact.aryanvs@gmail.com>
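A usage sketch for the Bria pipeline added above. The commit notes the example uses bfloat16 because the model is precision-sensitive; the checkpoint id is a placeholder (the real repo is gated), and the prompt is illustrative.

```python
import torch
from diffusers import BriaPipeline

pipe = BriaPipeline.from_pretrained(
    "<bria-3.2-checkpoint>", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(prompt="A watercolor poster of a hot air balloon festival").images[0]
image.save("bria.png")
```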
-
- 18 Aug, 2025 2 commits
-
-
Sayak Paul authored
* fix: caching allocator behaviour for quantization. * up * Update src/diffusers/models/model_loading_utils.py Co-authored-by:
Aryan <aryan@huggingface.co> --------- Co-authored-by:
Aryan <aryan@huggingface.co>
-
Junyu Chen authored
* minor modification to support dc-ae-turbo * minor
-