- 03 Dec, 2025 8 commits
-
-
Sayak Paul authored
* start zimage model tests. * up * up * up * up * up * up * up * up * up * up * up * up * Revert "up" This reverts commit bca3e27c96b942db49ccab8ddf824e7a54d43ed1. * expand upon compilation failure reason. * Update tests/models/transformers/test_models_transformer_z_image.py Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com> * reinitialize the padding tokens to ones to prevent NaN problems. * updates * up * skipping ZImage DiT tests * up * up
---------
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
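A note on that padding-token fix, since it recurs in later Z-Image commits: parameters allocated with torch.empty contain uninitialized memory, which can surface as NaN and break deterministic tests. A minimal sketch of the idea (TinyModel and pad_token are illustrative names, not the actual ZImage attributes):

```python
import torch
from torch import nn

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        # torch.empty leaves memory uninitialized; values can be NaN/Inf.
        self.pad_token = nn.Parameter(torch.empty(1, 16))

model = TinyModel()
with torch.no_grad():
    # Filling with ones gives the tests a reproducible starting point.
    model.pad_token.fill_(1.0)
```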
-
Sayak Paul authored
fix hunyuanvideo 1.5 offloading tests.
-
Aditya Borate authored
* Fix(peft): Re-apply group offloading after deleting adapters
* Test: Add regression test for group offloading + delete_adapters
* Test: Add assertions to verify output changes after deletion
* Test: Add try/finally to clean up group offloading hooks
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
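A hedged usage sketch of the scenario this fixes, assuming a LoRA-capable pipeline (the model and adapter ids are placeholders): deleting an adapter rewrites the patched modules, which previously dropped the group offloading hooks.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("some/model-id", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("some/lora-id", adapter_name="style")

# Group-offload the transformer: weights live on CPU and stream to GPU on use.
pipe.transformer.enable_group_offload(
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="leaf_level",
)

# With the fix, the offloading hooks are re-applied after the adapter is
# deleted, so later forward passes still onload/offload correctly.
pipe.delete_adapters("style")
```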
-
Lev Novitskiy authored
* add transformer pipeline first version
---------
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Charles <charles@huggingface.co>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: dmitrienkoae <dmitrienko.ae@phystech.edu>
Co-authored-by: nvvaulin <nvvaulin@gmail.com>
-
Dhruv Nair authored
* update * update * Revert "update" This reverts commit 73906381ab76da96eb8f9b841177cd4f49861eb1. * Revert "update" This reverts commit 21a03f93ef0fbfa5f7a7d97708f75149b1d1b3b0. * update * update * update * update * update
-
Sayak Paul authored
* remove attn_processors property * more * up * up more. * up * add AttentionMixin to AuraFlow. * up * up * up * up
-
Sayak Paul authored
* start varlen variants for attn backend kernels. * maybe unflatten heads. * updates * remove unused function. * doc * up
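For context on what "varlen" means here: instead of a padded (batch, seq, heads, dim) layout, variable-length kernels take tokens packed into one (total_tokens, heads, dim) tensor plus cumulative sequence lengths. A sketch of the calling convention, using flash-attn's documented varlen entry point as the representative backend (requires a CUDA build of flash-attn):

```python
import torch
import torch.nn.functional as F
from flash_attn import flash_attn_varlen_func

seqlens = torch.tensor([3, 5], device="cuda")  # two sequences, no padding tokens
# cu_seqlens marks sequence boundaries in the packed tensor: [0, 3, 8].
cu_seqlens = F.pad(seqlens.cumsum(0, dtype=torch.int32), (1, 0))

total, heads, dim = int(seqlens.sum()), 8, 64
q = torch.randn(total, heads, dim, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

out = flash_attn_varlen_func(
    q, k, v,
    cu_seqlens_q=cu_seqlens, cu_seqlens_k=cu_seqlens,
    max_seqlen_q=int(seqlens.max()), max_seqlen_k=int(seqlens.max()),
)  # (total, heads, dim); no attention leaks across sequence boundaries
```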
-
Kimbing Ng authored
* Fixes #12673: wrong default_stream was used, leading to wrong execution order when record_stream is enabled.
* update
* Update test
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
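The underlying pattern, for readers unfamiliar with it: a tensor produced on a side transfer stream must be recorded against the stream that actually consumes it, or the caching allocator may recycle its memory too early. A self-contained sketch (needs a CUDA device):

```python
import torch

transfer_stream = torch.cuda.Stream()
x_cpu = torch.randn(1024, pin_memory=True)

# Enqueue the host-to-device copy on the side stream.
with torch.cuda.stream(transfer_stream):
    x_gpu = x_cpu.to("cuda", non_blocking=True)

# The consumer runs on the current (default) stream: it must wait for the
# copy, and the tensor must be recorded on *that* stream. Recording against
# the wrong stream is exactly the ordering bug fixed here.
torch.cuda.current_stream().wait_stream(transfer_stream)
x_gpu.record_stream(torch.cuda.current_stream())
y = x_gpu * 2.0
```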
-
- 02 Dec, 2025 3 commits
-
-
Jerry Wu authored
* Refactor image padding logic to prevent zero tensor in transformer_z_image.py
* Apply style fixes
* Add more support to fix repeat bug on tpu devices.
* Fix for dynamo compile error for multi if-branches.
---------
Co-authored-by: Mingjia Li <mingjiali@tju.edu.cn>
Co-authored-by: Mingjia Li <mail@mingjia.li>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
Guo-Hua Wang authored
* add ovis_image
* fix code quality
* optimize pipeline_ovis_image.py according to the feedback
* optimize imports
* add docs
* make style
* make style
* add ovis to toctree
* oops
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
CalamitousFelicitousness authored
* Add ZImage LoRA support and integrate into ZImagePipeline
* Add LoRA test for Z-Image
* Move the LoRA test
* Fix ZImage LoRA scale support and test configuration
* Add ZImage LoRA test overrides for architecture differences
  - Override test_lora_fuse_nan to use ZImage's 'layers' attribute instead of 'transformer_blocks'
  - Skip block-level LoRA scaling test (not supported in ZImage)
  - Add required imports: numpy, torch_device, check_if_lora_correctly_set
* Add ZImageLoraLoaderMixin to LoRA documentation
* Use conditional import for peft.LoraConfig in ZImage tests
* Override test_correct_lora_configs_with_different_ranks for ZImage: ZImage uses the 'attention.to_k' naming convention instead of 'attn.to_k', so the base test's module name search loop never finds a match. This override uses the correct naming pattern for the ZImage architecture.
* Add is_flaky decorator to ZImage LoRA tests; initialise padding tokens
* Skip ZImage LoRA test class entirely: skip the entire ZImageLoRATests class due to non-deterministic behavior from complex64 RoPE operations and torch.empty padding tokens. LoRA functionality works correctly with real models. Cleanup removed:
  - Individual @unittest.skip decorators
  - @is_flaky decorator overrides for inherited methods
  - Custom test method overrides
  - Global torch deterministic settings
  - Unused imports (numpy, is_flaky, check_if_lora_correctly_set)
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
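A hedged sketch of the resulting user-facing path (checkpoint ids are placeholders; load_lora_weights and set_adapters are the standard mixin entry points that ZImageLoraLoaderMixin plugs into):

```python
import torch
from diffusers import ZImagePipeline

pipe = ZImagePipeline.from_pretrained("org/z-image-checkpoint", torch_dtype=torch.bfloat16)
pipe.to("cuda")

pipe.load_lora_weights("org/z-image-lora", adapter_name="style")
pipe.set_adapters("style", adapter_weights=0.8)  # PEFT-backed LoRA scaling

image = pipe(prompt="a watercolor fox").images[0]
```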
-
- 01 Dec, 2025 8 commits
-
-
Sayak Paul authored
* feat: implement caption upsampling for flux.2. * doc * up * fix * up * fix system prompts 🤷 * up * up * up
-
Sayak Paul authored
* Update bria_fibo.md with minor fixes
* Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
-
Gal Davidi authored
-
DefTruth authored
-
David El Malih authored
refactor: add type hints to methods and update docstrings for parameters.
-
David El Malih authored
refactor: improve type hints for `beta_schedule`, `prediction_type`, and `timestep_spacing` parameters, and add return type hints to several methods.
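To make the change concrete, a sketch of the kind of hints these scheduler refactors add (the Literal values shown mirror common diffusers scheduler options; the exact sets vary per class):

```python
from typing import Literal

class ExampleScheduler:
    def __init__(
        self,
        beta_schedule: Literal["linear", "scaled_linear", "squaredcos_cap_v2"] = "linear",
        prediction_type: Literal["epsilon", "sample", "v_prediction"] = "epsilon",
        timestep_spacing: Literal["linspace", "leading", "trailing"] = "linspace",
    ) -> None:
        self.beta_schedule = beta_schedule
        self.prediction_type = prediction_type
        self.timestep_spacing = timestep_spacing
```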
-
David El Malih authored
docs: Update Imagen Video paper link in scheduler docstrings.
-
YiYi Xu authored
* add
---------
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-161-123.ec2.internal>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 29 Nov, 2025 1 commit
-
-
DefTruth authored
* allow type-check for ZImageTransformer2DModel * make fix-copies
-
- 28 Nov, 2025 2 commits
-
-
Dhruv Nair authored
* update * update * update * update * Apply style fixes * update * update * update * update * update
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
Ayush Sur authored
* Fix examples not loading LoRA adapter weights from checkpoint
* Updated lora saving logic with accelerate save_model_hook and load_model_hook
* Formatted the changes using ruff
* import and upcasting changed
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
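A simplified sketch of the accelerate hook pattern the fix adopts, so that resuming from a checkpoint actually restores the adapter; real training scripts also convert state dicts and handle multiple wrapped models:

```python
from accelerate import Accelerator

accelerator = Accelerator()

def save_model_hook(models, weights, output_dir):
    # Save the LoRA adapter here (e.g. a save_lora_weights-style call), then
    # pop the queued weights so accelerate skips its default full-model save.
    if accelerator.is_main_process:
        for _ in models:
            if weights:
                weights.pop()

def load_model_hook(models, input_dir):
    # Reload the LoRA adapter into each unwrapped model from input_dir.
    while models:
        model = models.pop()
        _ = model  # adapter-loading logic goes here

accelerator.register_save_state_pre_hook(save_model_hook)
accelerator.register_load_state_pre_hook(load_model_hook)
```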
-
- 27 Nov, 2025 2 commits
-
-
Sayak Paul authored
up
-
Sayak Paul authored
remove torch.save from remnant code.
-
- 26 Nov, 2025 6 commits
-
-
Jerry Wu authored
* Add Support for Z-Image.
* Reformatting with make style, black & isort.
* Remove init, Modify import utils, Merge forward in transformers block, Remove once func in pipeline.
* modified main model forward, freqs_cis left
* refactored to add B dim
* fixed stack issue
* fixed modulation bug
* fixed modulation bug
* fix bug
* remove value_from_time_aware_config
* styling
* Fix neg embed and divide (/) bug; Reuse pad zero tensor; Turn cat -> repeat; Add hint for attn processor.
* Replace padding with pad_sequence; Add gradient checkpointing.
* Fix flash_attn3 in dispatch attn backend by _flash_attn_forward, replace its origin implement; Add DocString in pipeline for that.
* Fix Docstring and Make Style.
* Revert "Fix flash_attn3 in dispatch attn backend by _flash_attn_forward, replace its origin implement; Add DocString in pipeline for that." This reverts commit fbf26b7ed11d55146103c97740bad4a5f91744e0.
* update z-image docstring
* Revert attention dispatcher
* update z-image docstring
* styling
* Recover attention_dispatch.py with its original implementation; a later commit will address fa3 compatibility.
* Fix previous bug, and support prompt_embeds passed in args as a list of torch Tensors after prompt pre-encoding.
* Remove einops dependency.
* remove redundant imports & make fix-copies
* fix import
* Support num_images_per_prompt > 1; Remove redundant unused variables.
* Fix bugs for num_images_per_prompt with actual batch.
* Add unit tests for Z-Image.
* Refine unit tests and skip cases that need a separate test env; Fix compatibility with unit tests in the model, mostly precision formatting.
* Add clean env for separate test_save_load_float16 test; Add Note; Styling.
* Update dtype mentioned by yiyi.
---------
Co-authored-by: liudongyang <liudongyang0114@gmail.com>
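The pad_sequence change above is a one-liner worth showing: variable-length per-prompt embeddings are batched with torch.nn.utils.rnn.pad_sequence rather than hand-rolled zero-tensor concatenation (shapes illustrative):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

embeds = [torch.randn(5, 64), torch.randn(9, 64)]  # per-prompt token embeddings
batch = pad_sequence(embeds, batch_first=True)     # (2, 9, 64), zero-padded
```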
-
David El Malih authored
* Improve docstrings and type hints in multiple diffusion schedulers
* docs: update Imagen Video paper link to Hugging Face Papers.
-
Sayak Paul authored
put autopipeline after overview and hunyuanimage in images
-
Sayak Paul authored
* fix links * up
-
Sayak Paul authored
* up
* Update tests/lora/test_lora_layers_flux2.py Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
---------
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
-
Andrei Filatov authored
-
- 25 Nov, 2025 3 commits
-
-
Sayak Paul authored
* add vae
* Initial commit for Flux 2 Transformer implementation
* add pipeline part
* small edits to the pipeline and conversion
* update conversion script
* fix
* up up
* finish pipeline
* Remove Flux IP Adapter logic for now
* Remove deprecated 3D id logic
* Remove ControlNet logic for now
* Add link to ViT-22B paper as reference for parallel transformer blocks such as the Flux 2 single stream block
* update pipeline
* Don't use biases for input projs and output AdaNorm
* up
* Remove bias for double stream block text QKV projections
* Add script to convert Flux 2 transformer to diffusers
* make style and make quality
* fix a few things.
* allow sft files to go.
* fix image processor
* fix batch
* style a bit
* Fix some bugs in Flux 2 transformer implementation
* Fix dummy input preparation and fix some test bugs
* fix dtype casting in timestep guidance module.
* resolve conflicts.
* remove ip adapter stuff.
* Fix Flux 2 transformer consistency test
* Fix bug in Flux2TransformerBlock (double stream block)
* Get remaining Flux 2 transformer tests passing
* make style; make quality; make fix-copies
* remove stuff.
* fix type annotation.
* remove unneeded stuff from tests
* tests
* up
* up
* add sf support
* Remove unused IP Adapter and ControlNet logic from transformer (#9)
* copied from
* Apply suggestions from code review Co-authored-by: YiYi Xu <yixu310@gmail.com> Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
* up
* up
* up
* up
* up
* Refactor Flux2Attention into separate classes for double stream and single stream attention
* Add _supports_qkv_fusion to AttentionModuleMixin to allow subclasses to disable QKV fusion
* Have Flux2ParallelSelfAttention inherit from AttentionModuleMixin with _supports_qkv_fusion=False
* Log debug message when calling fuse_projections on an AttentionModuleMixin subclass that does not support QKV fusion
* Address review comments
* Update src/diffusers/pipelines/flux2/pipeline_flux2.py Co-authored-by: YiYi Xu <yixu310@gmail.com>
* up
* Remove maybe_allow_in_graph decorators for Flux 2 transformer blocks (#12)
* up
* support ostris loras. (#13)
* up
* update schedule
* up
* up (#17)
* add training scripts (#16)
* add training scripts Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
* model cpu offload in validation.
* add flux.2 readme
* add img2img and tests
* cpu offload in log validation
* Apply suggestions from code review
* fix
* up
* fixes
* remove i2i training tests for now.
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Daniel Gu <dgu8957@gmail.com>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-10-53-87-203.ec2.internal>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
Co-authored-by: linoytsaban <linoy@huggingface.co>
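The ViT-22B reference above describes the "parallel" block shape: attention and the MLP read the same normalized input and their outputs are summed into one residual, instead of running sequentially. A minimal sketch with illustrative dimensions (not Flux 2's actual module):

```python
import torch
from torch import nn

class ParallelBlock(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)  # one shared norm feeds both branches
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        return x + attn_out + self.mlp(h)  # branches are summed, not chained

x = torch.randn(2, 16, 256)
print(ParallelBlock()(x).shape)  # torch.Size([2, 16, 256])
```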
-
Jerry Wu authored
* Add Support for Z-Image.
* Reformatting with make style, black & isort.
* Remove init, Modify import utils, Merge forward in transformers block, Remove once func in pipeline.
* modified main model forward, freqs_cis left
* refactored to add B dim
* fixed stack issue
* fixed modulation bug
* fixed modulation bug
* fix bug
* remove value_from_time_aware_config
* styling
* Fix neg embed and divide (/) bug; Reuse pad zero tensor; Turn cat -> repeat; Add hint for attn processor.
* Replace padding with pad_sequence; Add gradient checkpointing.
* Fix flash_attn3 in dispatch attn backend by _flash_attn_forward, replace its origin implement; Add DocString in pipeline for that.
* Fix Docstring and Make Style.
* Revert "Fix flash_attn3 in dispatch attn backend by _flash_attn_forward, replace its origin implement; Add DocString in pipeline for that." This reverts commit fbf26b7ed11d55146103c97740bad4a5f91744e0.
* update z-image docstring
* Revert attention dispatcher
* update z-image docstring
* styling
* Recover attention_dispatch.py with its original implementation; a later commit will address fa3 compatibility.
* Fix previous bug, and support prompt_embeds passed in args as a list of torch Tensors after prompt pre-encoding.
* Remove einops dependency.
* remove redundant imports & make fix-copies
* fix import
---------
Co-authored-by: liudongyang <liudongyang0114@gmail.com>
-
Junsong Chen authored
* fix typo in docs
* Update docs/source/en/api/pipelines/sana_video.md Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
---------
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
-
- 24 Nov, 2025 5 commits
-
-
sq authored
- Fixed variable naming typos (maskkk -> mask_fill, mask_imagee -> mask_image_fill, masked_imagee -> masked_image_fill, masked_image_latentsss -> masked_latents_fill). These changes improve code readability without affecting functionality.
-
cdutr authored
* Updates Portuguese documentation for Diffusers library. Enhances the Portuguese documentation with:
  - Restructured table of contents for improved navigation
  - Added placeholder page for in-translation content
  - Refined language and improved readability in existing pages
  - Introduced a new page on basic Stable Diffusion performance guidance
  Improves overall documentation structure and user experience for Portuguese-speaking users.
* Removes untranslated sections from Portuguese documentation. Cleans up the Portuguese documentation table of contents by removing placeholder sections marked as "Em tradução" (In translation). Removes the in_translation.md file and associated table-of-contents entries for sections that are not yet translated, improving documentation clarity.
-
Sayak Paul authored
* up * support automatic dispatch. * disable compile support for now. * up * flash too. * document. * up * up * up * up
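A hedged sketch of how a backend is pinned from user code (the model id is a placeholder; with the automatic dispatch added here, an available kernel is picked when none is pinned, and the accepted backend names depend on what is installed):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("some/model-id", torch_dtype=torch.bfloat16)
# Explicitly select an attention backend on the model; omit this to let the
# dispatcher fall back to whatever kernel is available.
pipe.transformer.set_attention_backend("flash")
```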
-
DefTruth authored
* bugfix: fix chrono-edit context parallel
* bugfix: fix chrono-edit context parallel
* Update src/diffusers/models/transformers/transformer_chronoedit.py Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Update src/diffusers/models/transformers/transformer_chronoedit.py Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Clean up comments in transformer_chronoedit.py: removed unnecessary comments regarding parallelization in cross-attention.
* fix style
* fix qc
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
SwayStar123 authored
Update pipeline_bria_fibo.py
-
- 21 Nov, 2025 1 commit
-
-
David El Malih authored
* Enhance type hints and docstrings in LMSDiscreteScheduler class: updated type hints for function parameters and return types to improve code clarity and maintainability; enhanced docstrings for several methods, providing clearer descriptions of their functionality and expected arguments. Notable changes include specifying Literal types for certain parameters and ensuring consistent return type annotations across the class.
* docs: Add specific paper reference to `_convert_to_karras` docstring.
* Refactor `_convert_to_karras` docstring in DPMSolverSDEScheduler to include detailed descriptions and a specific paper reference, enhancing clarity and documentation consistency.
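For reference, `_convert_to_karras` implements the sigma schedule from Karras et al. (2022), "Elucidating the Design Space of Diffusion-Based Generative Models" (Eq. 5); a standalone sketch with the paper's default rho=7.0:

```python
import torch

def karras_sigmas(sigma_min: float, sigma_max: float, num_steps: int, rho: float = 7.0) -> torch.Tensor:
    # Interpolate in sigma**(1/rho) space, then map back; this front-loads
    # the large-sigma (high-noise) region of the schedule.
    ramp = torch.linspace(0, 1, num_steps)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    return (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho

print(karras_sigmas(0.1, 10.0, 5))  # decreasing from 10.0 to 0.1
```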
-
- 19 Nov, 2025 1 commit
-
-
Pratim Dasude authored
Community Pipeline: FluxFillControlNetInpaintPipeline for FLUX Fill-Based Inpainting with ControlNet (#12649)
* new flux fill controlnet inpaint pipeline
* Delete src/diffusers/pipelines/flux/pipline_flux_fill_controlnet_Inpaint.py (deleting from the main flux pipeline directory)
* Flux_fill_controlnet community pipeline
* Update README.md
* Apply style fixes
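Community pipelines load through the standard custom_pipeline argument; a hedged usage sketch (the base checkpoint is FLUX.1 Fill, and the custom_pipeline identifier below is assumed from the PR title rather than confirmed):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev",
    custom_pipeline="pipeline_flux_fill_controlnet_inpaint",  # assumed module name
    torch_dtype=torch.bfloat16,
)
```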
-