- 28 Oct, 2025 1 commit
-
-
Lev Novitskiy authored
* add transformer pipeline first version
* updates
* fix 5 sec generation
* rewrite Kandinsky5T2VPipeline to diffusers style
* add multi-prompt support
* remove prints in pipeline
* add Nabla attention
* wrap transformer in diffusers style
* fix license
* fix prompt type
* add gradient checkpointing and PEFT support
* add usage example
* apply review suggestions to src/diffusers/pipelines/kandinsky5/pipeline_kandinsky.py and src/diffusers/models/transformers/transformer_kandinsky.py (co-authored with Álvaro Somoza, YiYi Xu, and Charles)
* remove unused imports
* add 10 second models support
* remove no_grad and simplify prompt paddings
* move template to __init__
* move SDPA inside the processor
* remove one-line function
* remove reset_dtype methods
* Transformer: move all methods to forward
* separate prompt encoding
* refactoring according to https://github.com/huggingface/diffusers/commit/acabbc0033d4b4933fc651766a4aa026db2e6dc1
* fixes, style + copies
* more
* apply suggestions from code review
* add LoRA loader doc
* add compiled Nabla attention
* all needed changes for the 10 second models are added
* add docs, update docs
* add kandinsky5 to toctree
* add tests, fix tests, update tests
* apply style fixes

Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Charles <charles@huggingface.co>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
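A minimal usage sketch for the new text-to-video pipeline follows. The checkpoint id is a placeholder and the call arguments follow the usual diffusers video-pipeline convention rather than being copied from this commit.

```python
import torch
from diffusers import Kandinsky5T2VPipeline
from diffusers.utils import export_to_video

# Placeholder repo id -- substitute the published Kandinsky 5.0 T2V checkpoint.
pipe = Kandinsky5T2VPipeline.from_pretrained(
    "your-org/kandinsky-5-t2v-lite", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Argument names follow the common diffusers text-to-video convention and may
# differ slightly from the merged pipeline's signature.
output = pipe(
    prompt="A cat surfing a small wave at sunset, cinematic lighting",
    negative_prompt="blurry, low quality",
    num_inference_steps=50,
    guidance_scale=5.0,
)
export_to_video(output.frames[0], "kandinsky5_t2v.mp4", fps=24)
```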
-
- 24 Oct, 2025 2 commits
-
-
kaixuanliu authored
* Loosen the tolerance criteria appropriately for Intel XPU devices
* Change back the atol value
* Use expectations
* Update tests/pipelines/kandinsky2_2/test_kandinsky_controlnet.py

Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
Co-authored-by: Ilyas Moutawwakil <57442720+IlyasMoutawwakil@users.noreply.github.com>
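A hypothetical sketch of the idea behind per-device tolerances (not the repo's actual test utility): the assertion threshold is chosen by accelerator type instead of being hard-coded.

```python
import torch

# Hypothetical helper: XPU kernels can differ numerically from CUDA, so pick the
# absolute tolerance per backend before asserting.
def assert_close_for_backend(actual: torch.Tensor, expected: torch.Tensor) -> None:
    atol = {"xpu": 1e-2, "cuda": 1e-3}.get(actual.device.type, 1e-3)
    torch.testing.assert_close(actual, expected, atol=atol, rtol=0.0)

assert_close_for_backend(torch.tensor([1.0001]), torch.tensor([1.0]))
```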
-
YiYi Xu authored
* add hunyuanimage2.1 --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 22 Oct, 2025 2 commits
-
-
Sayak Paul authored
xfail the test_wuerstchen_prior test
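Several commits in this log mark known-bad tests as expected failures. A generic, self-contained illustration of the pytest xfail pattern (the test name and reason below are made up):

```python
import pytest

# The test still runs, but a known failure is reported as "xfailed" instead of
# turning CI red; strict=False also tolerates it unexpectedly passing.
@pytest.mark.xfail(reason="known numerical flakiness on this runner", strict=False)
def test_known_failure_does_not_break_ci():
    assert 0.1 + 0.2 == 0.3  # fails due to float rounding, demonstrating xfail
```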
-
David Bertoin authored
* rename photon to prx * Revert .gitignore to state before commit b7fb0fe9d63bf766bbe3c42ac154a043796dd370 * make fix-copies
-
- 21 Oct, 2025 1 commit
-
-
David Bertoin authored
* Add Photon model and pipeline support. This commit adds support for the Photon image generation model:
  - PhotonTransformer2DModel: core transformer architecture
  - PhotonPipeline: text-to-image generation pipeline
  - attention processor updates for the Photon-specific attention mechanism
  - conversion script for loading Photon checkpoints
  - documentation and tests
* just store the T5Gemma encoder
* call enhance_vae_properties only if a VAE is provided
* remove autocast for the text encoder forward
* BF16 example
* conditioned CFG
* remove enhance_vae and use vae.config directly when possible
* move PhotonAttnProcessor2_0 into transformer_photon
* remove the einops dependency; now inherits from AttentionMixin
* unify the structure of the forward block
* update docs
* fix T5Gemma loading from the Hub
* fix timestep shift
* remove LoRA support from the docs
* rename EmbedND to PhotonEmbedND
* remove the modulation dataclass
* put the _attn_forward and _ffn_forward logic into PhotonBlock's forward
* rename LastLayer to FinalLayer
* remove LoRA-related code
* rename vae_spatial_compression_ratio to vae_scale_factor
* support prompt_embeds in __call__
* move the cross-attention conditioning computation out of the denoising loop
* add negative prompts
* use _import_structure for lazy loading
* make quality + style
* add pipeline test + corresponding fixes
* add a utility function that determines the default resolution given the VAE
* refactor PhotonAttention to match the Flux pattern
* use built-in RMSNorm
* revert accidental .gitignore change
* make parameter names match standard diffusers conventions
* renaming and removal of unnecessary attribute setting
* apply review suggestions to docs/source/en/api/pipelines/photon.md (co-authored with Steven Liu and dg845)
* quantization example
* add doc to toctree
* use dispatch_attention_fn for multiple attention backend support
* naming changes
* make fix-copies
* add PhotonTransformer2DModel to TYPE_CHECKING imports
* use Tuple instead of tuple
* restrict the transformers version
* apply review suggestions to tests/pipelines/photon/test_pipeline_photon.py
* change | to Optional
* fix nits
* use typing Dict

Co-authored-by: davidb <davidb@worker-10.soperator-worker-svc.soperator.svc.cluster.local>
Co-authored-by: David Briand <david@photoroom.com>
Co-authored-by: davidb <davidb@worker-8.soperator-worker-svc.soperator.svc.cluster.local>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
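A minimal text-to-image sketch for the new pipeline; only the class name comes from this commit, the repo id is a placeholder, and BF16 mirrors the "BF16 example" item above.

```python
import torch
from diffusers import PhotonPipeline

# Placeholder repo id; substitute the published Photon checkpoint.
pipe = PhotonPipeline.from_pretrained("your-org/photon", torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = pipe(
    prompt="A minimalist poster of a lighthouse at dawn",
    num_inference_steps=28,
    guidance_scale=4.5,
).images[0]
image.save("photon.png")
```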
-
- 17 Oct, 2025 2 commits
-
-
Ali Imran authored
* cleanup of runway model * quality fixes
-
Sayak Paul authored
* xfail more incorrect transformer imports. * xfail more. * up * up * up
-
- 15 Oct, 2025 1 commit
-
-
Sayak Paul authored
fix clapconfig for text backbone in audioldm2
-
- 02 Oct, 2025 1 commit
-
-
Sayak Paul authored
xfail failing tests in CI.
-
- 30 Sep, 2025 1 commit
-
-
Yao Matrix authored
fix xpu ut failures w/ latest pytorch Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
-
- 26 Sep, 2025 1 commit
-
-
Sayak Paul authored
* disable installing transformers from main in ci for now. * up * up
-
- 25 Sep, 2025 1 commit
-
-
Lucain authored
* Support huggingface_hub 0.x and 1.x * httpx
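An illustrative compatibility shim (not the actual diffusers code) for the idea behind this commit: huggingface_hub 1.x moved from requests to httpx, so the network-error type has to be chosen by major version.

```python
from packaging import version
import huggingface_hub

# Branch on the installed huggingface_hub major version; 1.x is httpx-based,
# 0.x is requests-based.
if version.parse(huggingface_hub.__version__) >= version.parse("1.0.0"):
    import httpx

    HubNetworkError = httpx.HTTPError
else:
    import requests

    HubNetworkError = requests.exceptions.RequestException
```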
-
- 24 Sep, 2025 2 commits
-
-
Yao Matrix authored
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
-
Sayak Paul authored
disable xformers tests for pipelines where it isn't popular.
-
- 22 Sep, 2025 2 commits
-
-
Sayak Paul authored
* up * xfail some tests * up * up
-
Sayak Paul authored
xfail some kandinsky tests.
-
- 15 Sep, 2025 1 commit
-
-
Linoy Tsaban authored
* support Wan2.2-VACE-Fun-A14B * Apply style fixes * test --------- Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 10 Sep, 2025 1 commit
-
-
Sayak Paul authored
* feat: support group offloading at the pipeline level. * add tests * up * [docs] Pipeline group offloading (#12286) init Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> --------- Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
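A sketch of what pipeline-level group offloading could look like, assuming the new method mirrors the arguments of the existing per-model `enable_group_offload`; the exact pipeline-level signature should be checked against the merged docs.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# Assumed to mirror the per-model group-offloading arguments: blocks are kept on
# CPU and streamed onto the GPU group by group during the forward pass.
pipe.enable_group_offload(
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="block_level",
    num_blocks_per_group=1,
)

image = pipe("an astronaut riding a horse, watercolor").images[0]
```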
-
- 09 Sep, 2025 1 commit
-
-
kaixuanliu authored
adjust criteria for XPU Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com> Co-authored-by: Aryan <aryan@huggingface.co>
-
- 28 Aug, 2025 1 commit
-
-
Dhruv Nair authored
* update * update * update * update * update * merge main * Revert "merge main" This reverts commit 65efbcead58644b31596ed2d714f7cee0e0238d3.
-
- 26 Aug, 2025 1 commit
-
-
Sayak Paul authored
* start removing flax stuff. * add deprecation warning. * add warning messages. * more warnings. * remove dockerfiles. * remove more. * Update src/diffusers/models/attention_flax.py Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> * up --------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
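An illustrative pattern only (not the exact diffusers helper) for the deprecation warnings this commit adds around the Flax code paths:

```python
import warnings

# Hypothetical helper: emit a FutureWarning when a deprecated Flax class is used,
# pointing users to the PyTorch implementations before removal.
def warn_flax_deprecated(name: str) -> None:
    warnings.warn(
        f"`{name}` and the other Flax implementations are deprecated and will be "
        "removed in a future diffusers release; please migrate to the PyTorch classes.",
        FutureWarning,
        stacklevel=2,
    )
```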
-
- 25 Aug, 2025 1 commit
-
-
Sadhvi authored
* added test qwen image controlnet * Apply style fixes * added test qwenimage multicontrolnet * Apply style fixes --------- Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 22 Aug, 2025 1 commit
-
-
Yao Matrix authored
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
-
- 20 Aug, 2025 1 commit
-
-
galbria authored
* Add Bria model and pipeline to diffusers
  - Introduced `BriaTransformer2DModel` and `BriaPipeline` for enhanced image generation capabilities.
  - Updated import structures across various modules to include the new Bria components.
  - Added utility functions and output classes specific to the Bria pipeline.
  - Implemented tests for the Bria pipeline to ensure functionality and output integrity.
* with working tests
* style and quality pass
* adding docs, add to overview
* fixes from "make fix-copies"
* Refactor transformer_bria.py and pipeline_bria.py: introduce a new EmbedND class for rotary position embedding, enhance the Timestep and TimestepProjEmbeddings classes, and add utility functions for handling negative prompts and generating original sigmas in pipeline_bria.py.
* remove redundant and duplicate tests and fix the BF16 slow test
* style fixes
* small doc update
* Enhance Bria 3.2 documentation and implementation
  - Updated the GitHub repository link for Bria 3.2.
  - Added usage instructions for the gated model access.
  - Introduced the BriaTransformerBlock and BriaAttention classes to the model architecture.
  - Refactored existing classes to integrate Bria-specific components, including BriaEmbedND and BriaPipeline.
  - Updated the pipeline output class to reflect Bria-specific functionality.
  - Adjusted test cases to align with the new Bria model structure.
* Refactor Bria model components and update documentation
  - Removed the outdated inference example from the Bria 3.2 documentation.
  - Introduced the BriaTransformerBlock class to enhance the model architecture.
  - Updated attention handling to use `attention_kwargs` instead of `joint_attention_kwargs`.
  - Improved the import structure in the Bria pipeline to handle optional dependencies.
  - Adjusted test cases to reflect changes in model dtype assertions.
* Update the Bria model reference in the documentation to reflect the new file naming convention
* Update docs/source/en/_toctree.yml
* Refactor BriaPipeline to inherit from DiffusionPipeline instead of FluxPipeline, updating imports accordingly.
* move the __call__ func to the end of the file
* Update the BriaPipeline example to use bfloat16 (the model is precision sensitive) for better results
* make style && make quality && make fix-copies

Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
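A short usage sketch for the new pipeline; the repo id is an assumption, authenticated access to the gated model is assumed, and bfloat16 follows the precision note in the commit message.

```python
import torch
from diffusers import BriaPipeline

# Assumed repo id for the gated Bria 3.2 checkpoint; requires granted access and
# a logged-in Hub token.
pipe = BriaPipeline.from_pretrained("briaai/BRIA-3.2", torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = pipe(
    prompt="Product shot of a ceramic mug on a wooden table, soft natural light",
    num_inference_steps=30,
    guidance_scale=5.0,
).images[0]
image.save("bria.png")
```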
-
- 18 Aug, 2025 1 commit
-
-
Sayak Paul authored
* add docs. * more docs. * xfail full compilation for Qwen for now. * tests * up * up * up * reviewer feedback.
-
- 14 Aug, 2025 1 commit
-
-
Sayak Paul authored
* feat: cuda device_map for pipelines. * up * up * empty * up
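A hedged sketch of the feature named in this commit, assuming the new option accepts the plain string "cuda" so every component is placed on the GPU at load time:

```python
import torch
from diffusers import DiffusionPipeline

# With device_map="cuda", the separate `.to("cuda")` call is no longer needed;
# the exact accepted values should be confirmed against the merged docs.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    device_map="cuda",
)
image = pipe("a watercolor fox in a misty forest").images[0]
```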
-
- 13 Aug, 2025 1 commit
-
-
Nguyễn Trọng Tuấn authored
* feat/qwenimage-img2img-inpaint * Update qwenimage.md to reflect new pipelines and add # Copied from convention * tiny fix for passing ruff check * reformat code * fix copied from statement * copy and style fix * fix dummies --------- Co-authored-by: TuanNT-ZenAI <tuannt.zenai@gmail.com> Co-authored-by: DN6 <dhruv.nair@gmail.com>
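A minimal image-to-image sketch for the feature named in the PR title; the pipeline class and repo id are inferred rather than taken from this commit, and the input URL is a placeholder.

```python
import torch
from diffusers import QwenImageImg2ImgPipeline
from diffusers.utils import load_image

# Assumed class and repo names; verify against the merged qwenimage docs.
pipe = QwenImageImg2ImgPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.to("cuda")

init_image = load_image("https://example.com/sketch.png")  # placeholder URL
image = pipe(
    prompt="turn the sketch into a detailed oil painting",
    image=init_image,
    strength=0.6,
).images[0]
image.save("qwen_img2img.png")
```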
-
- 11 Aug, 2025 1 commit
-
-
Aryan authored
* update * nuke LoC for inference slices
-
- 08 Aug, 2025 1 commit
-
-
YiYi Xu authored
* rearrange the params into groups: default params / image params / batch params / callback params * make style * add names property to pipeline blocks * style * remove more unused funcs * prepare_latents_inpaint always returns noise and image_latents * up * update --------- Co-authored-by: DN6 <dhruv.nair@gmail.com>
-
- 05 Aug, 2025 2 commits
-
-
Sayak Paul authored
up
-
Aryan authored
update
-
- 04 Aug, 2025 2 commits
-
-
Aryan authored
* update * update * update * add docs
-
YiYi Xu authored
* up --------- Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 28 Jul, 2025 1 commit
-
-
YiYi Xu authored
* support wan 2.2 i2v * add t2v + vae 2.2 * add conversion script for vae 2.2 * add * add 5b t2v * conversion script * refactor out rearrange * remove a copied-from in skyreels * Apply suggestions from code review Co-authored-by: bagheera <59658056+bghira@users.noreply.github.com> * Update src/diffusers/models/transformers/transformer_wan.py * fix fast tests * style --------- Co-authored-by: bagheera <59658056+bghira@users.noreply.github.com>
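A hedged image-to-video sketch for the Wan 2.2 support added here, assuming the 2.2 checkpoints are served through the existing WanImageToVideoPipeline class; the repo id, input URL, and resolution are placeholders.

```python
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Placeholder repo id for a converted Wan 2.2 I2V checkpoint.
pipe = WanImageToVideoPipeline.from_pretrained(
    "your-org/Wan2.2-I2V-A14B-Diffusers", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

first_frame = load_image("https://example.com/first_frame.png")  # placeholder URL
frames = pipe(
    image=first_frame,
    prompt="the camera slowly pans to the right across a foggy harbor",
    height=480,
    width=832,
    num_frames=81,
).frames[0]
export_to_video(frames, "wan22_i2v.mp4", fps=16)
```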
-
- 23 Jul, 2025 1 commit
-
-
Aryan authored
* update * fix wan vace test slice * test * fix
-
- 21 Jul, 2025 3 commits
- 17 Jul, 2025 1 commit
-
-
Aryan authored
* update * update * add coauthor Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com> * improve test * handle ip adapter params correctly * fix chroma qkv fusion test * fix fastercache implementation * fix more tests * fight more tests * add back set_attention_backend * update * update * make style * make fix-copies * make ip adapter processor compatible with attention dispatcher * refactor chroma as well * remove rmsnorm assert * minify and deprecate npu/xla processors --------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
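A short sketch of the `set_attention_backend` call this commit restores; the backend string is an assumption ("flash" requires flash-attn to be installed), and the dispatcher routes attention calls to the chosen kernel.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Switch the transformer's attention implementation via the attention dispatcher;
# available backend names should be checked against the merged docs.
pipe.transformer.set_attention_backend("flash")
image = pipe("a macro photo of a dew drop on a fern leaf").images[0]
```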
-