- 24 Sep, 2025 (2 commits)
-
Aryan authored
* update
* update
* add coauthor
  Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* improve test
* handle ip adapter params correctly
* fix chroma qkv fusion test
* fix fastercache implementation
* fix more tests
* fight more tests
* add back set_attention_backend
* update
* update
* make style
* make fix-copies
* make ip adapter processor compatible with attention dispatcher
* refactor chroma as well
* remove rmsnorm assert
* minify and deprecate npu/xla processors
* update
* refactor
* refactor; support flash attention 2 with cp
* fix
* support sage attention with cp
* make torch compile compatible
* update
* refactor
* update
* refactor
* refactor
* add ulysses backward
* try to make dreambooth script work; accelerator backward not playing well
* Revert "try to make dreambooth script work; accelerator backward not playing well"
  This reverts commit 768d0ea6fa6a305d12df1feda2afae3ec80aa449.
* workaround compilation problems with triton when doing all-to-all
* support wan
* handle backward correctly
* support qwen
* support ltx
* make fix-copies
* Update src/diffusers/models/modeling_utils.py
  Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* apply review suggestions
* update docs
* add explanation
* make fix-copies
* add docstrings
* support passing parallel_config to from_pretrained
* apply review suggestions
* make style
* update
* Update docs/source/en/api/parallel.md
  Co-authored-by: Aryan <aryan@huggingface.co>
* up
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
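For reference, a minimal sketch of what this change enables: routing a model's attention through the dispatcher and requesting context parallelism at load time. `set_attention_backend` and the `parallel_config` kwarg are named in the commit message; the `ContextParallelConfig` import path, its field names, and the checkpoint id are assumptions.

```python
import torch
from diffusers import ContextParallelConfig, WanTransformer3DModel  # assumed import path

transformer = WanTransformer3DModel.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",  # illustrative checkpoint
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    parallel_config=ContextParallelConfig(ring_degree=2),  # assumed field name
)
transformer.set_attention_backend("flash")  # dispatch attention via flash-attn 2
```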
-
Dhruv Nair authored
* update
* update
* update
-
- 23 Sep, 2025 (1 commit)
-
Dhruv Nair authored
* update
* update
-
- 22 Sep, 2025 (2 commits)
-
SahilCarterr authored
* Fixes enable_xformers_memory_efficient_attention()
* Update attention.py
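For context, the method this commit repairs is enabled on a pipeline like so (checkpoint id illustrative):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# The method fixed in this commit; requires the xformers package.
pipe.enable_xformers_memory_efficient_attention()
```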
-
Chen Mingyi authored
-
- 17 Sep, 2025 (1 commit)
-
DefTruth authored
* fix hidream type hint
* fix hunyuan-video type hint
* fix many type hint
* fix many type hint errors
* fix many type hint errors
* fix many type hint errors
* make style & make quality
-
- 16 Sep, 2025 (2 commits)
-
Zijian Zhou authored
* Update autoencoder_kl_wan.py
  When using the Wan2.2 VAE, the spatial compression ratio calculated here is incorrect. It should be 16 instead of 8. Pass it in directly via the config to ensure it's correct here.
* Update autoencoder_kl_wan.py
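A minimal sketch of the fix's idea: prefer an explicit config value over a ratio derived from the block count, which under-counts for Wan2.2. The attribute name here is an assumption.

```python
def get_spatial_compression_ratio(config, num_downsample_blocks: int) -> int:
    # Wan2.2's VAE compresses 16x spatially, but 2 ** num_downsample_blocks
    # evaluates to 8 for it, so trust an explicit config value when present.
    explicit = getattr(config, "spatial_compression_ratio", None)  # assumed field
    return explicit if explicit is not None else 2 ** num_downsample_blocks
```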
-
Samarth Agrawal authored
* fixed bug in defining embed dim
* matched 1d temb process to 2d
* Update src/diffusers/models/unets/unet_1d.py
  Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
- 03 Sep, 2025 (2 commits)
-
Ju Hoon Park authored
* Add AttentionMixin to WanVACETransformer3DModel to enable methods like `set_attn_processor()`.
* Import AttentionMixin in transformer_wan_vace.py
Special thanks to @tolgacangoz 🙇‍♂️
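With the mixin in place, the standard processor hooks become available on the model; a sketch (checkpoint id and processor choice are illustrative):

```python
from diffusers import WanVACETransformer3DModel
from diffusers.models.attention_processor import AttnProcessor2_0

model = WanVACETransformer3DModel.from_pretrained(
    "Wan-AI/Wan2.1-VACE-1.3B-diffusers", subfolder="transformer"  # illustrative checkpoint
)
model.set_attn_processor(AttnProcessor2_0())  # now available via AttentionMixin
```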
-
Sayak Paul authored
* feat: try loading fa3 using kernels when available.
* up
* change to Hub.
* up
* up
* up
* switch env var.
* up
* up
* up
* up
* up
* up
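A hedged sketch of the approach: fetch a prebuilt FlashAttention-3 kernel from the Hub via the `kernels` library behind an opt-in env var. The env var name and kernel repo id are assumptions.

```python
import os

if os.getenv("DIFFUSERS_ENABLE_HUB_KERNELS"):  # assumed opt-in env var
    from kernels import get_kernel

    # Pulls a prebuilt kernel from the Hub instead of requiring a local
    # flash-attn 3 build; repo id is an assumption.
    flash_attn_3 = get_kernel("kernels-community/flash-attn3")
```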
-
- 30 Aug, 2025 (1 commit)
-
Leo Jiang authored
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
Co-authored-by: Aryan <aryan@huggingface.co>
-
- 26 Aug, 2025 (3 commits)
-
Sayak Paul authored
* start removing flax stuff.
* add deprecation warning.
* add warning messages.
* more warnings.
* remove dockerfiles.
* remove more.
* Update src/diffusers/models/attention_flax.py
  Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* up
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
Tolga Cangöz authored
* fix: update SkyReels-V2 documentation and moving into attn dispatcher
* Refactors SkyReelsV2's attention implementation
* style
* up
* Fixes formatting in SkyReels-V2 documentation
  Wraps the visual demonstration section in a Markdown code block. This change corrects the rendering of ASCII diagrams and examples, improving the overall readability of the document.
* Docs: Condense example arrays in skyreels_v2 guide
  Improves the readability of the `step_matrix` examples by replacing long sequences of repeated numbers with a more compact `value×count` notation. This change makes the underlying data patterns in the examples easier to understand at a glance.
* Add _repeated_blocks attribute to SkyReelsV2Transformer3DModel
* Refactor rotary embedding calculations in SkyReelsV2 to separate cosine and sine frequencies
* Enhance SkyReels-V2 documentation: update model loading for GPU support and remove outdated notes
* up
* up
* Update model_id in SkyReels-V2 documentation
* up
* refactor: remove device_map parameter for model loading and add pipeline.to("cuda") for GPU allocation
* fix: update copyright year to 2025 in skyreels_v2.md
* docs: enhance parameter examples and formatting in skyreels_v2.md
* docs: update example formatting and add notes on LoRA support in skyreels_v2.md
* refactor: remove copied comments from transformer_wan in SkyReelsV2 classes
* Clean up comments in skyreels_v2.md
  Removed comments about acceleration helpers and Flash Attention installation.
* Add deprecation warning for `SkyReelsV2AttnProcessor2_0` class
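The `value×count` condensation is simple to reproduce; a small illustrative helper (not code from the PR):

```python
from itertools import groupby

def condense(seq):
    # Render [0, 0, 0, 5, 5] as "0×3, 5×2", the notation the SkyReels-V2
    # guide now uses for its step_matrix examples.
    return ", ".join(f"{value}×{len(list(group))}" for value, group in groupby(seq))
```
-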
Leo Jiang authored
* NPU attention refactor for FLUX transformer
* Apply style fixes
---------
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 23 Aug, 2025 (1 commit)
-
Aishwarya Badlani authored
* Fix PyTorch 2.3.1 compatibility: add version guard for torch.library.custom_op
  - Add hasattr() check for torch.library.custom_op and register_fake
  - These functions were added in PyTorch 2.4, causing import failures in 2.3.1
  - Both decorators and functions are now properly guarded with version checks
  - Maintains backward compatibility while preserving functionality
  Fixes #12195
* Use dummy decorators approach for PyTorch version compatibility
  - Replace hasattr check with version string comparison
  - Add no-op decorator functions for PyTorch < 2.4.0
  - Follows pattern from #11941 as suggested by reviewer
  - Maintains cleaner code structure without indentation changes
* Update src/diffusers/models/attention_dispatch.py
  Update all the decorator usages
  Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Update src/diffusers/models/attention_dispatch.py
  Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Update src/diffusers/models/attention_dispatch.py
  Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Update src/diffusers/models/attention_dispatch.py
  Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Move version check to top of file and use private naming as requested
* Apply style fixes
---------
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
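The resulting guard pattern looks roughly like this, a sketch of the dummy-decorator approach described above (stub signatures mirror PyTorch 2.4's, with private naming per review):

```python
import torch
from packaging import version

if version.parse(torch.__version__) >= version.parse("2.4.0"):
    _custom_op = torch.library.custom_op
    _register_fake = torch.library.register_fake
else:
    # No-op stand-ins for PyTorch < 2.4, where torch.library.custom_op and
    # torch.library.register_fake do not exist; decorated functions are
    # returned unchanged so module import keeps working.
    def _custom_op(name, fn=None, /, *, mutates_args=None, device_types=None, schema=None):
        def wrap(func):
            return func
        return wrap if fn is None else fn

    def _register_fake(op, fn=None, /, *, lib=None):
        def wrap(func):
            return func
        return wrap if fn is None else fn
```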
-
- 22 Aug, 2025 (2 commits)
-
Sayak Paul authored
-
Frank (Haofan) Wang authored
* support qwen-image-cn-union
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 20 Aug, 2025 (2 commits)
-
Sayak Paul authored
remove extra validation check in determine_device_map
-
galbria authored
* Add Bria model and pipeline to diffusers
  - Introduced `BriaTransformer2DModel` and `BriaPipeline` for enhanced image generation capabilities.
  - Updated import structures across various modules to include the new Bria components.
  - Added utility functions and output classes specific to the Bria pipeline.
  - Implemented tests for the Bria pipeline to ensure functionality and output integrity.
* with working tests
* style and quality pass
* adding docs
* add to overview
* fixes from "make fix-copies"
* Refactor transformer_bria.py and pipeline_bria.py: introduce new EmbedND class for rotary position embedding, and enhance Timestep and TimestepProjEmbeddings classes. Add utility functions for handling negative prompts and generating original sigmas in pipeline_bria.py.
* remove redundant and duplicate tests and fix bf16 slow test
* style fixes
* small doc update
* Enhance Bria 3.2 documentation and implementation
  - Updated the GitHub repository link for Bria 3.2.
  - Added usage instructions for the gated model access.
  - Introduced the BriaTransformerBlock and BriaAttention classes to the model architecture.
  - Refactored existing classes to integrate Bria-specific components, including BriaEmbedND and BriaPipeline.
  - Updated the pipeline output class to reflect Bria-specific functionality.
  - Adjusted test cases to align with the new Bria model structure.
* Refactor Bria model components and update documentation
  - Removed outdated inference example from Bria 3.2 documentation.
  - Introduced the BriaTransformerBlock class to enhance model architecture.
  - Updated attention handling to use `attention_kwargs` instead of `joint_attention_kwargs`.
  - Improved import structure in the Bria pipeline to handle optional dependencies.
  - Adjusted test cases to reflect changes in model dtype assertions.
* Update Bria model reference in documentation to reflect new file naming convention
* Update docs/source/en/_toctree.yml
* Refactor BriaPipeline to inherit from DiffusionPipeline instead of FluxPipeline, updating imports accordingly.
* move the __call__ func to the end of file
* Update BriaPipeline example to use bfloat16 for precision sensitivity for better results
* make style && make quality && make fix-copies
---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
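A hedged usage sketch based on the notes above (the PR says the checkpoint is gated and recommends bfloat16; the repo id is an assumption):

```python
import torch
from diffusers import BriaPipeline

# Gated model: accept the license on the Hub and log in with a token first.
pipe = BriaPipeline.from_pretrained(
    "briaai/BRIA-3.2", torch_dtype=torch.bfloat16  # bfloat16 per the precision note
).to("cuda")
image = pipe("a photo of a lighthouse at dawn").images[0]
```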
-
- 18 Aug, 2025 (4 commits)
-
Sayak Paul authored
* fix: caching allocator behaviour for quantization.
* up
* Update src/diffusers/models/model_loading_utils.py
  Co-authored-by: Aryan <aryan@huggingface.co>
---------
Co-authored-by: Aryan <aryan@huggingface.co>
-
Junyu Chen authored
* minor modification to support dc-ae-turbo
* minor
-
Sayak Paul authored
* add docs.
* more docs.
* xfail full compilation for Qwen for now.
* tests
* up
* up
* up
* reviewer feedback.
-
Lambert authored
* CogView4: remove SiLU in final AdaLN (match Megatron); add switch to AdaLayerNormContinuous; split temb_raw/temb_blocks
* CogView4: remove SiLU in final AdaLN (match Megatron); add switch to AdaLayerNormContinuous; split temb_raw/temb_blocks
* CogView4: remove SiLU in final AdaLN (match Megatron); add switch to AdaLayerNormContinuous; split temb_raw/temb_blocks
* CogView4: use local final AdaLN (no SiLU) per review; keep generic AdaLN unchanged
* re-add configs as normal files (no LFS)
* Apply suggestions from code review
* Apply style fixes
---------
Co-authored-by: 武嘉涵 <lambert@wujiahandeMacBook-Pro.local>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
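A minimal sketch of the switch being added: an AdaLayerNormContinuous-style block whose pre-activation SiLU can be disabled so the final layer matches Megatron. Class name, defaults, and shapes here are illustrative, not the exact diffusers code.

```python
import torch.nn as nn

class AdaLayerNormContinuousSketch(nn.Module):
    def __init__(self, embedding_dim: int, conditioning_dim: int, use_silu: bool = True):
        super().__init__()
        # CogView4's final AdaLN passes use_silu=False to match Megatron.
        self.act = nn.SiLU() if use_silu else nn.Identity()
        self.linear = nn.Linear(conditioning_dim, 2 * embedding_dim)
        self.norm = nn.LayerNorm(embedding_dim, elementwise_affine=False, eps=1e-6)

    def forward(self, x, conditioning):
        # x: (batch, seq, dim); conditioning: (batch, conditioning_dim)
        shift, scale = self.linear(self.act(conditioning)).chunk(2, dim=-1)
        return self.norm(x) * (1 + scale)[:, None] + shift[:, None]
```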
-
- 17 Aug, 2025 (1 commit)
-
naykun authored
* feat(qwen-image): add qwen-image-edit support
* fix(qwen image):
  - compatible with torch.compile in new rope setting
  - fix init import
  - add prompt truncation in img2img and inpaint pipe
  - remove unused logic and comment
  - add copy statement
  - guard logic for rope video shape tuple
* fix(qwen image):
  - make fix-copies
  - update doc
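The new edit pipeline in use, sketched from the description above (the pipeline class is added by this PR; the model id and call details are assumptions):

```python
import torch
from diffusers import QwenImageEditPipeline
from diffusers.utils import load_image

pipe = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16  # assumed model id
).to("cuda")
source = load_image("input.png")  # illustrative input image
edited = pipe(image=source, prompt="make the sky stormy").images[0]
```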
-
- 14 Aug, 2025 (2 commits)
-
Sayak Paul authored
* support hf_quantizer in cache warmup.
* reviewer feedback
* up
* up
-
Sayak Paul authored
-
- 13 Aug, 2025 (1 commit)
-
Sayak Paul authored
* checking.
* checking
* checking
* up
* up
* up
* Apply suggestions from code review
  Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* up
* up
* fix
* review feedback.
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
- 12 Aug, 2025 (1 commit)
-
Leo Jiang authored
[Bugfix] typo error in npu FA
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
Co-authored-by: Aryan <aryan@huggingface.co>
-
- 11 Aug, 2025 (1 commit)
-
Sayak Paul authored
* update
* update
* update
* enable compilation in qwen image.
* add tests
---------
Co-authored-by: Aryan <aryan@huggingface.co>
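Compilation as enabled here, sketched (model id and compile scope are illustrative):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")
# Compile just the transformer, the hot loop of denoising.
pipe.transformer = torch.compile(pipe.transformer)
image = pipe("a cat reading a newspaper").images[0]
```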
-
- 07 Aug, 2025 (1 commit)
-
DefTruth authored
fix-flux-type-hint
-
- 04 Aug, 2025 (4 commits)
-
naykun authored
* fix(qwen-image): update vae license
* Apply style fixes
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
-
Samuel Tesfai authored
* Cross attention module to Wan Attention
* Apply style fixes
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
-
Aryan authored
* update
* update
* update
* add docs
-
YiYi Xu authored
* up
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 03 Aug, 2025 (1 commit)
-
naykun authored
* (feat): qwen-image integration
* fix(qwen-image): remove unused logic related to controlnet/ip-adapter
* fix(qwen-image):
  - compatible with attention dispatcher
  - cond cache support
* fix(qwen-image):
  - cond cache registry
  - attention backend argument
  - fix copies
* fix(qwen-image): remove local test
* Update src/diffusers/models/transformers/transformer_qwenimage.py
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
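Basic usage of the integrated pipeline, sketched (model id as released by the Qwen team; prompt and parameters illustrative):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")
image = pipe(
    "a capybara holding a sign that reads hello", num_inference_steps=50
).images[0]
```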
-
- 02 Aug, 2025 (2 commits)
-
Tanuj Rai authored
* Update autoencoder_kl_cosmos.py
* Apply style fixes
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
-
Bernd Doser authored
-
- 01 Aug, 2025 (1 commit)
-
YiYi Xu authored
up
-
- 30 Jul, 2025 (1 commit)
-
Sayak Paul authored
* support attention backends for LTX
* Apply suggestions from code review
  Co-authored-by: Aryan <aryan@huggingface.co>
* reviewer feedback.
---------
Co-authored-by: Aryan <aryan@huggingface.co>
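What the backend support looks like in use, sketched (backend name and checkpoint are illustrative; availability depends on installed libraries):

```python
import torch
from diffusers import LTXPipeline

pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")
pipe.transformer.set_attention_backend("flash")  # assumes flash-attn is installed
```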
-
- 29 Jul, 2025 (1 commit)
-
Álvaro Somoza authored
* login
* more logins
* uploads
* missed login
* another missed login
* downloads
* examples and more logins
* fix
* setup
* Apply style fixes
* fix
* Apply style fixes
-