- 26 Aug, 2025 4 commits
-
-
Tianqi Tang authored
Fix typos and test assertions
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* start removing flax stuff.
* add deprecation warning.
* add warning messages.
* more warnings.
* remove dockerfiles.
* remove more.
* Update src/diffusers/models/attention_flax.py
  Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* up
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
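The deprecation path only warns when the Flax modules are used; a minimal sketch of that kind of notice (the exact wording and warning class are assumptions, the commit only says warnings were added):

```python
import warnings

warnings.warn(
    "Flax/JAX support in Diffusers is deprecated and the Flax classes will be "
    "removed in a future release. Please switch to the PyTorch implementations.",
    FutureWarning,
    stacklevel=2,
)
```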
-
Tolga Cangöz authored
* fix: update SkyReels-V2 documentation and moving into attn dispatcher
* Refactors SkyReelsV2's attention implementation
* style
* up
* Fixes formatting in SkyReels-V2 documentation
  Wraps the visual demonstration section in a Markdown code block. This change corrects the rendering of ASCII diagrams and examples, improving the overall readability of the document.
* Docs: Condense example arrays in skyreels_v2 guide
  Improves the readability of the `step_matrix` examples by replacing long sequences of repeated numbers with a more compact `value×count` notation. This change makes the underlying data patterns in the examples easier to understand at a glance.
* Add _repeated_blocks attribute to SkyReelsV2Transformer3DModel
* Refactor rotary embedding calculations in SkyReelsV2 to separate cosine and sine frequencies
* Enhance SkyReels-V2 documentation: update model loading for GPU support and remove outdated notes
* up
* up
* Update model_id in SkyReels-V2 documentation
* up
* refactor: remove device_map parameter for model loading and add pipeline.to("cuda") for GPU allocation
* fix: update copyright year to 2025 in skyreels_v2.md
* docs: enhance parameter examples and formatting in skyreels_v2.md
* docs: update example formatting and add notes on LoRA support in skyreels_v2.md
* refactor: remove copied comments from transformer_wan in SkyReelsV2 classes
* Clean up comments in skyreels_v2.md
  Removed comments about acceleration helpers and Flash Attention installation.
* Add deprecation warning for `SkyReelsV2AttnProcessor2_0` class
-
Leo Jiang authored
* NPU attention refactor for FLUX transformer
* Apply style fixes
---------
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 25 Aug, 2025 1 commit
-
-
sqt authored
-
- 23 Aug, 2025 2 commits
-
-
Aryan authored
* update * update * apply review suggestions * remove guider inputs * fix tests
-
Aishwarya Badlani authored
* Fix PyTorch 2.3.1 compatibility: add version guard for torch.library.custom_op
  - Add hasattr() check for torch.library.custom_op and register_fake
  - These functions were added in PyTorch 2.4, causing import failures in 2.3.1
  - Both decorators and functions are now properly guarded with version checks
  - Maintains backward compatibility while preserving functionality
  Fixes #12195
* Use dummy decorators approach for PyTorch version compatibility
  - Replace hasattr check with version string comparison
  - Add no-op decorator functions for PyTorch < 2.4.0
  - Follows pattern from #11941 as suggested by reviewer
  - Maintains cleaner code structure without indentation changes
* Update src/diffusers/models/attention_dispatch.py
  Update all the decorator usages
  Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Update src/diffusers/models/attention_dispatch.py
  Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Update src/diffusers/models/attention_dispatch.py
  Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Update src/diffusers/models/attention_dispatch.py
  Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Move version check to top of file and use private naming as requested
* Apply style fixes
---------
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
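A rough sketch of the no-op decorator fallback described above; the private names are illustrative, not necessarily the ones used in `attention_dispatch.py`:

```python
import torch
from packaging import version

# torch.library.custom_op and torch.library.register_fake were added in PyTorch 2.4.
_SUPPORTS_CUSTOM_OP = version.parse(torch.__version__).release >= (2, 4)

if _SUPPORTS_CUSTOM_OP:
    _custom_op = torch.library.custom_op
    _register_fake = torch.library.register_fake
else:
    # No-op stand-ins so decorated functions still import cleanly on PyTorch < 2.4.
    def _custom_op(*args, **kwargs):
        def decorator(fn):
            return fn

        return decorator

    def _register_fake(*args, **kwargs):
        def decorator(fn):
            return fn

        return decorator
```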
-
- 22 Aug, 2025 2 commits
-
-
Sayak Paul authored
-
Frank (Haofan) Wang authored
* support qwen-image-cn-union
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 20 Aug, 2025 2 commits
-
-
Sayak Paul authored
remove extra validation check in determine_device_map
-
galbria authored
* Add Bria model and pipeline to diffusers
  - Introduced `BriaTransformer2DModel` and `BriaPipeline` for enhanced image generation capabilities.
  - Updated import structures across various modules to include the new Bria components.
  - Added utility functions and output classes specific to the Bria pipeline.
  - Implemented tests for the Bria pipeline to ensure functionality and output integrity.
* with working tests
* style and quality pass
* adding docs
* add to overview
* fixes from "make fix-copies"
* Refactor transformer_bria.py and pipeline_bria.py: Introduce new EmbedND class for rotary position embedding, and enhance Timestep and TimestepProjEmbeddings classes. Add utility functions for handling negative prompts and generating original sigmas in pipeline_bria.py.
* remove redundant and duplicate tests and fix bf16 slow test
* style fixes
* small doc update
* Enhance Bria 3.2 documentation and implementation
  - Updated the GitHub repository link for Bria 3.2.
  - Added usage instructions for the gated model access.
  - Introduced the BriaTransformerBlock and BriaAttention classes to the model architecture.
  - Refactored existing classes to integrate Bria-specific components, including BriaEmbedND and BriaPipeline.
  - Updated the pipeline output class to reflect Bria-specific functionality.
  - Adjusted test cases to align with the new Bria model structure.
* Refactor Bria model components and update documentation
  - Removed outdated inference example from Bria 3.2 documentation.
  - Introduced the BriaTransformerBlock class to enhance model architecture.
  - Updated attention handling to use `attention_kwargs` instead of `joint_attention_kwargs`.
  - Improved import structure in the Bria pipeline to handle optional dependencies.
  - Adjusted test cases to reflect changes in model dtype assertions.
* Update Bria model reference in documentation to reflect new file naming convention
* Update docs/source/en/_toctree.yml
* Refactor BriaPipeline to inherit from DiffusionPipeline instead of FluxPipeline, updating imports accordingly.
* move the __call__ func to the end of file
* Update BriaPipeline example to use bfloat16 for precision sensitivity, for better results
* make style && make quality && make fix-copies
---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
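For context, a minimal usage sketch of the new pipeline; the checkpoint id and prompt are assumptions (access to the gated Bria weights must be requested first, as the docs note), not taken from this commit:

```python
import torch
from diffusers import BriaPipeline

# Hypothetical checkpoint id for the gated Bria 3.2 weights.
pipe = BriaPipeline.from_pretrained("briaai/BRIA-3.2", torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = pipe("a photo of a red bicycle leaning against a brick wall").images[0]
image.save("bria.png")
```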
-
- 19 Aug, 2025 4 commits
-
-
Sayak Paul authored
* post release v0.35.0 * quality
-
naykun authored
* fix(qwen-image-edit): update condition reshaping logic to improve editing performance
* fix(qwen-image-edit): remove _auto_resize
-
naykun authored
fix(qwen-image): shape calculation fix
-
Linoy Tsaban authored
* add alpha
* load into 2nd transformer
* Update src/diffusers/loaders/lora_conversion_utils.py
  Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update src/diffusers/loaders/lora_conversion_utils.py
  Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* pr comments
* pr comments
* pr comments
* fix
* fix
* Apply style fixes
* fix copies
* fix
* fix copies
* Update src/diffusers/loaders/lora_pipeline.py
  Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* revert change
* revert change
* fix copies
* up
* fix
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: linoy <linoy@hf.co>
-
- 18 Aug, 2025 7 commits
-
-
Sayak Paul authored
* feat: support more Qwen LoRAs from the community.
* revert unrelated changes.
* Revert "revert unrelated changes."
  This reverts commit 82dea555dc9afce1fbb4dc2323be45212ded9092.
-
Sayak Paul authored
* add clarification regarding guidance_scale in QwenImage * propagate.
-
MQY authored
- Modify offload_models function to handle DiffusionPipeline correctly
- Ensure compatibility with both single and multiple module inputs
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* fix: caching allocator behaviour for quantization.
* up
* Update src/diffusers/models/model_loading_utils.py
  Co-authored-by: Aryan <aryan@huggingface.co>
---------
Co-authored-by: Aryan <aryan@huggingface.co>
-
Junyu Chen authored
* minor modification to support dc-ae-turbo * minor
-
Sayak Paul authored
* add docs. * more docs. * xfail full compilation for Qwen for now. * tests * up * up * up * reviewer feedback.
-
Lambert authored
* CogView4: remove SiLU in final AdaLN (match Megatron); add switch to AdaLayerNormContinuous; split temb_raw/temb_blocks
* CogView4: remove SiLU in final AdaLN (match Megatron); add switch to AdaLayerNormContinuous; split temb_raw/temb_blocks
* CogView4: remove SiLU in final AdaLN (match Megatron); add switch to AdaLayerNormContinuous; split temb_raw/temb_blocks
* CogView4: use local final AdaLN (no SiLU) per review; keep generic AdaLN unchanged
* re-add configs as normal files (no LFS)
* Apply suggestions from code review
* Apply style fixes
---------
Co-authored-by: 武嘉涵 <lambert@wujiahandeMacBook-Pro.local>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 17 Aug, 2025 1 commit
-
-
naykun authored
* feat(qwen-image): add qwen-image-edit support
* fix(qwen image):
  - compatible with torch.compile in new rope setting
  - fix init import
  - add prompt truncation in img2img and inpaint pipe
  - remove unused logic and comment
  - add copy statement
  - guard logic for rope video shape tuple
* fix(qwen image):
  - make fix-copies
  - update doc
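A minimal usage sketch for the new edit pipeline, assuming the `QwenImageEditPipeline` class added here; the model id, input image URL, and step count are placeholders:

```python
import torch
from diffusers import QwenImageEditPipeline
from diffusers.utils import load_image

pipe = QwenImageEditPipeline.from_pretrained("Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Placeholder input image.
image = load_image("https://example.com/input.png")
edited = pipe(image=image, prompt="turn the sky into a warm sunset", num_inference_steps=50).images[0]
edited.save("edited.png")
```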
-
- 14 Aug, 2025 4 commits
-
-
Sayak Paul authored
* support hf_quantizer in cache warmup. * reviewer feedback * up * up
-
Sayak Paul authored
* tighten compilation tests for quantization
* feat: model_info but local.
* up
* Revert "tighten compilation tests for quantization"
  This reverts commit 8d431dc967a4118168af74aae9c41f2a68764851.
* up
* reviewer feedback.
* reviewer feedback.
* up
* up
* empty
* update
---------
Co-authored-by: DN6 <dhruv.nair@gmail.com>
-
Sayak Paul authored
* feat: cuda device_map for pipelines. * up * up * empty * up
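A sketch of what the new pipeline-level placement looks like from the user side; the model id below is only an example, not taken from this commit:

```python
import torch
from diffusers import DiffusionPipeline

# device_map="cuda" places the pipeline components on the available CUDA device(s)
# at load time, instead of a separate pipe.to("cuda") call afterwards.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    device_map="cuda",
)
```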
-
Sayak Paul authored
-
- 13 Aug, 2025 3 commits
-
-
Alrott SlimRG authored
* Fix bf16/fp16 for pipeline_wan_vace.py
* Update pipeline_wan_vace.py
* try removing xfail decorator
---------
Co-authored-by: Aryan <aryan@huggingface.co>
-
Sayak Paul authored
* checking.
* checking
* checking
* up
* up
* up
* Apply suggestions from code review
  Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* up
* up
* fix
* review feedback.
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
Nguyễn Trọng Tuấn authored
* feat/qwenimage-img2img-inpaint
* Update qwenimage.md to reflect new pipelines and add # Copied from convention
* tiny fix for passing ruff check
* reformat code
* fix copied from statement
* fix copied from statement
* copy and style fix
* fix dummies
---------
Co-authored-by: TuanNT-ZenAI <tuannt.zenai@gmail.com>
Co-authored-by: DN6 <dhruv.nair@gmail.com>
-
- 12 Aug, 2025 4 commits
-
-
Leo Jiang authored
[Bugfix] typo error in npu FA
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
Co-authored-by: Aryan <aryan@huggingface.co>
-
Steven Liu authored
* start
* draft
* state, pipelineblock, apis
* sequential
* fix links
* new
* loop, auto
* fix
* pipeline
* guiders
* components manager
* reviews
* update
* update
* update
---------
Co-authored-by: DN6 <dhruv.nair@gmail.com>
-
IrisRainbowNeko authored
* align meta device of from_single_file with from_pretrained
* update docstr
* Apply style fixes
---------
Co-authored-by: IrisRainbowNeko <rainbow-neko@outlook.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
Dhruv Nair authored
update
-
- 11 Aug, 2025 4 commits
-
-
Sayak Paul authored
* update
* update
* update
* enable compilation in qwen image.
* add tests
---------
Co-authored-by: Aryan <aryan@huggingface.co>
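The added tests roughly exercise a path like the following sketch; the model id and the `fullgraph` choice are assumptions:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16).to("cuda")
# Compile only the transformer, which dominates the denoising loop.
pipe.transformer = torch.compile(pipe.transformer, fullgraph=True)
image = pipe("a cup of coffee on a wooden table", num_inference_steps=30).images[0]
```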
-
Dhruv Nair authored
* update * update * update * update * update * update * update * update * update * update * update * update * update * update
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
Sayak Paul authored
* feat: support qwen lightning lora. * add docs. * fix
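Loading such a distilled LoRA is a one-liner on top of the base pipeline; a sketch, with the base model id and the LoRA repo/filename as placeholders rather than values from this commit:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16).to("cuda")
# Hypothetical LoRA repo and filename; Lightning-style LoRAs target very few inference steps.
pipe.load_lora_weights("your-org/qwen-image-lightning-lora", weight_name="lora.safetensors")
image = pipe("a watercolor fox", num_inference_steps=8).images[0]
```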
-
Sayak Paul authored
* start * encoder. * up * up * up * up * up * up
-
- 08 Aug, 2025 2 commits
-
-
Sayak Paul authored
* feat: support loading diffusers format gguf checkpoints.
* update
* update
* qwen
---------
Co-authored-by: DN6 <dhruv.nair@gmail.com>
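A sketch of how a GGUF checkpoint loads through `from_single_file` with `GGUFQuantizationConfig`; the file path and the choice of the Qwen-Image transformer class are placeholders:

```python
import torch
from diffusers import GGUFQuantizationConfig, QwenImageTransformer2DModel

# Placeholder path to a GGUF file saved in the diffusers layout.
transformer = QwenImageTransformer2DModel.from_single_file(
    "path/to/qwen-image-Q4_K_M.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
```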
-
YiYi Xu authored
* rearrange the params into groups: default params / image params / batch params / callback params
* make style
* add names property to pipeline blocks
* style
* remove more unused func
* prepare_latents_inpaint always returns noise and image_latents
* up
* up
* update
* update
* update
* update
* update
* update
* update
* update
---------
Co-authored-by: DN6 <dhruv.nair@gmail.com>
-