- 23 Oct, 2025 3 commits
-
-
Dhruv Nair authored
* update * update * update
-
Aishwarya Badlani authored
* Fix MPS compatibility in get_1d_sincos_pos_embed_from_grid #12432
* Fix trailing whitespace in docstring
* Apply style fixes
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
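A minimal sketch of the MPS-friendly pattern such a fix typically relies on, assuming the issue stems from float64 usage (which the MPS backend does not support). The helper name is hypothetical and this is not the actual diffusers implementation.

```python
import torch


def sincos_pos_embed_1d(embed_dim: int, pos: torch.Tensor) -> torch.Tensor:
    """Illustrative 1D sin/cos positional embedding that stays in float32 so it also runs on "mps"."""
    assert embed_dim % 2 == 0
    omega = torch.arange(embed_dim // 2, dtype=torch.float32, device=pos.device)
    omega = 1.0 / 10000 ** (omega / (embed_dim / 2))
    out = pos.to(torch.float32).reshape(-1)[:, None] * omega[None, :]  # (M, D/2)
    return torch.cat([torch.sin(out), torch.cos(out)], dim=1)  # (M, D)
```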
-
kaixuanliu authored
* fix CI bug for kandinsky3_img2img case
  Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
* update code
  Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
---------
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
-
- 22 Oct, 2025 4 commits
-
-
YiYi Xu authored
add
-
Álvaro Somoza authored
fix
-
Sayak Paul authored
* up * correct wording. * up * up * up
-
David Bertoin authored
* rename photon to prx
* rename photon into prx
* Revert .gitignore to state before commit b7fb0fe9d63bf766bbe3c42ac154a043796dd370
* rename photon to prx
* rename photon into prx
* Revert .gitignore to state before commit b7fb0fe9d63bf766bbe3c42ac154a043796dd370
* make fix-copies
-
- 21 Oct, 2025 2 commits
-
-
David Bertoin authored
* Add Photon model and pipeline support
  This commit adds support for the Photon image generation model:
  - PhotonTransformer2DModel: core transformer architecture
  - PhotonPipeline: text-to-image generation pipeline
  - Attention processor updates for the Photon-specific attention mechanism
  - Conversion script for loading Photon checkpoints
  - Documentation and tests
* just store the T5Gemma encoder
* enhance_vae_properties only if vae is provided
* remove autocast for text encoder forward
* BF16 example
* conditioned CFG
* remove enhance vae and use vae.config directly when possible
* move PhotonAttnProcessor2_0 into transformer_photon
* remove einops dependency and inherit from AttentionMixin
* unify the structure of the forward block
* update doc
* update doc
* fix T5Gemma loading from hub
* fix timestep shift
* remove lora support from doc
* rename EmbedND to PhotoEmbedND
* remove modulation dataclass
* put _attn_forward and _ffn_forward logic in PhotonBlock's forward
* rename LastLayer to FinalLayer
* remove lora-related code
* rename vae_spatial_compression_ratio to vae_scale_factor
* support prompt_embeds in __call__
* move cross-attention conditioning computation out of the denoising loop
* add negative prompts
* Use _import_structure for lazy loading
* make quality + style
* add pipeline test + corresponding fixes
* utility function that determines the default resolution given the VAE
* Refactor PhotonAttention to match the Flux pattern
* built-in RMSNorm
* Revert accidental .gitignore change
* parameter names match the standard diffusers conventions
* renaming and removal of unnecessary attribute setting
* Update docs/source/en/api/pipelines/photon.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* quantization example
* added doc to toctree
* Update docs/source/en/api/pipelines/photon.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/api/pipelines/photon.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/api/pipelines/photon.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* use dispatch_attention_fn for multiple attention backend support
* naming changes
* make fix-copies
* Update docs/source/en/api/pipelines/photon.md
  Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* Add PhotonTransformer2DModel to TYPE_CHECKING imports
* make fix-copies
* Use Tuple instead of tuple
  Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* restrict the version of transformers
  Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* Update tests/pipelines/photon/test_pipeline_photon.py
  Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* Update tests/pipelines/photon/test_pipeline_photon.py
  Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
* change | to Optional
* fix nits
* use typing Dict
---------
Co-authored-by: davidb <davidb@worker-10.soperator-worker-svc.soperator.svc.cluster.local>
Co-authored-by: David Briand <david@photoroom.com>
Co-authored-by: davidb <davidb@worker-8.soperator-worker-svc.soperator.svc.cluster.local>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
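A hedged usage sketch of the PhotonPipeline added above; the checkpoint id is a placeholder, not a verified Hub repo, and generation arguments may differ from the pipeline's defaults.

```python
import torch
from diffusers import PhotonPipeline

# "org/photon-checkpoint" is a placeholder model id.
pipe = PhotonPipeline.from_pretrained("org/photon-checkpoint", torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = pipe("a lighthouse on a cliff at dusk", num_inference_steps=28).images[0]
image.save("photon.png")
```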
-
Fei Xie authored
Fix incorrect temporary variable key used when replacing the adapter name in the state dict within the load_lora_adapter function
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
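An illustrative sketch of the key-renaming step this fix concerns: each rewritten key must be stored under the new key variable rather than the one being iterated over. The helper name and key layout are hypothetical, not the diffusers code itself.

```python
def rename_adapter_in_state_dict(state_dict: dict, old_name: str, new_name: str) -> dict:
    renamed = {}
    for key, value in state_dict.items():
        new_key = key.replace(f".{old_name}.", f".{new_name}.")
        renamed[new_key] = value  # the bug pattern: accidentally indexing with `key` here instead of `new_key`
    return renamed
```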
-
- 20 Oct, 2025 2 commits
-
-
Dhruv Nair authored
update
-
dg845 authored
Refactor QwenEmbedRope to only use the LRU cache for RoPE caching
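A minimal sketch of LRU-cached RoPE frequency computation, illustrating the caching pattern the refactor standardizes on; this is not the QwenEmbedRope code itself and the function name is hypothetical.

```python
from functools import lru_cache

import torch


@lru_cache(maxsize=32)
def rope_freqs(seq_len: int, dim: int, theta: float = 10000.0) -> torch.Tensor:
    # Cached per (seq_len, dim, theta); repeated calls with the same arguments reuse the tensor.
    inv_freq = 1.0 / theta ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    positions = torch.arange(seq_len, dtype=torch.float32)
    return torch.outer(positions, inv_freq)  # (seq_len, dim // 2)
```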
-
- 18 Oct, 2025 1 commit
-
-
Lev Novitskiy authored
* add kandinsky5 transformer pipeline first version
---------
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Charles <charles@huggingface.co>
-
- 17 Oct, 2025 1 commit
-
-
Ali Imran authored
* cleanup of runway model * quality fixes
-
- 15 Oct, 2025 2 commits
-
-
YiYi Xu authored
update
Co-authored-by: Aryan <aryan@huggingface.co>
-
Sayak Paul authored
-
- 14 Oct, 2025 1 commit
-
-
Meatfucker authored
Fix missing load_video documentation and load_video import in WanVideoToVideoPipeline example code (#12472)
* Update utilities.md: add the missing load_video documentation
* Update pipeline_wan_video2video.py: fix the missing load_video import in the example code
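A hedged sketch of the corrected example pattern, with load_video imported from diffusers.utils before use; the checkpoint id and video URL are placeholders.

```python
import torch
from diffusers import WanVideoToVideoPipeline
from diffusers.utils import export_to_video, load_video

# Placeholder model id; substitute a real Wan video-to-video checkpoint.
pipe = WanVideoToVideoPipeline.from_pretrained("org/wan-v2v-checkpoint", torch_dtype=torch.bfloat16)
pipe.to("cuda")

video = load_video("https://example.com/input.mp4")  # placeholder URL
frames = pipe(video=video, prompt="a robot dancing in the rain").frames[0]
export_to_video(frames, "output.mp4")
```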
-
- 11 Oct, 2025 1 commit
-
-
Steven Liu authored
* fix syntax * fix * style * fix
-
- 10 Oct, 2025 1 commit
-
-
Sayak Paul authored
* up * get ready * fix import * up * up
-
- 08 Oct, 2025 3 commits
-
-
Sayak Paul authored
* revisit the installations in CI. * up * up * up * empty * up * up * up
-
Sayak Paul authored
* up * unguard.
-
Sayak Paul authored
* start * fix * up
-
- 07 Oct, 2025 1 commit
-
-
Sayak Paul authored
-
- 06 Oct, 2025 2 commits
-
-
Charles authored
-
Sayak Paul authored
* make flux ready for mellon
* up
* Apply suggestions from code review
  Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
---------
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
-
- 05 Oct, 2025 2 commits
-
-
Sayak Paul authored
* up * up * up * up * up * up * remove saves * move things around a bit. * get ready.
-
Vladimir Mandic authored
* wan: fix scale_shift_factor being on cpu
* apply device cast to ltx transformer
* Apply style fixes
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
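A minimal sketch of the device-cast pattern described above: a table that may live on CPU is moved to the activations' device and dtype before being combined with them. Names and shapes are illustrative.

```python
import torch


def apply_scale_shift(
    hidden_states: torch.Tensor, scale_shift_table: torch.Tensor, temb: torch.Tensor
) -> torch.Tensor:
    # Cast the (possibly CPU-resident) table before doing arithmetic with GPU tensors.
    table = scale_shift_table.to(device=hidden_states.device, dtype=hidden_states.dtype)
    shift, scale = (table + temb).chunk(2, dim=-1)
    return hidden_states * (1 + scale) + shift
```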
-
- 02 Oct, 2025 1 commit
-
-
Sayak Paul authored
conditionally import torch distributed stuff.
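A minimal sketch of the guarded-import pattern this refers to: only pull in torch.distributed symbols when distributed support is actually available, so single-process installs don't break.

```python
import torch
import torch.distributed

if torch.distributed.is_available():
    from torch.distributed.device_mesh import init_device_mesh  # torch >= 2.2 assumed
else:
    init_device_mesh = None
```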
-
- 30 Sep, 2025 1 commit
-
-
Steven Liu authored
* change syntax * make style
-
- 29 Sep, 2025 3 commits
-
-
YiYi Xu authored
* fix
* add mellon node registry
* style
* update docstring to include more info
* support custom node mellon
* HTTPError -> HfHubHTTPError
* up
* Update src/diffusers/modular_pipelines/qwenimage/node_utils.py
-
Sayak Paul authored
* feat: support AOBaseConfig classes
* [docs] AOBaseConfig (#12302) init
  Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* up
* replace with is_torchao_version
* up
* up
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
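A hedged sketch of what passing a torchao AOBaseConfig subclass through diffusers' TorchAoConfig could look like; the quantization class, model id, and exact kwargs are examples and depend on the installed torchao and diffusers versions.

```python
import torch
from diffusers import FluxTransformer2DModel, TorchAoConfig
from torchao.quantization import Int8WeightOnlyConfig  # an AOBaseConfig subclass (availability assumed)

quant_config = TorchAoConfig(quant_type=Int8WeightOnlyConfig())
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # example checkpoint
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```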
-
Akshay Babbar authored
* fix: preserve boolean dtype for attention masks in ChromaPipeline
  - Convert attention masks to bool and prevent dtype corruption
  - Fix both positive and negative mask handling in _get_t5_prompt_embeds
  - Remove float conversion in _prepare_attention_mask method
  Fixes #12116
* test: add ChromaPipeline attention mask dtype tests
* test: add slow ChromaPipeline attention mask tests
* chore: removed comments
* refactor: removing redundant type conversion
* Remove dedicated dtype tests as per feedback
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
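A minimal sketch of the invariant the fix enforces: the text-encoder attention mask stays boolean end to end instead of being converted to float along the way. The helper name is illustrative.

```python
import torch


def build_bool_attention_mask(lengths: torch.Tensor, max_len: int) -> torch.Tensor:
    # True for real tokens, False for padding; no float conversion anywhere.
    positions = torch.arange(max_len, device=lengths.device)
    return positions[None, :] < lengths[:, None]
```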
-
- 26 Sep, 2025 1 commit
-
-
Sayak Paul authored
* start unbloating docstrings (save_lora_weights). * load_lora_weights() * lora_state_dict * fuse_lora * unfuse_lora * load_lora_into_transformer
-
- 25 Sep, 2025 1 commit
-
-
Lucain authored
* Support huggingface_hub 0.x and 1.x * httpx
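A minimal sketch of the kind of dual-version guard such support typically needs; the branch contents are placeholders (huggingface_hub 1.x moves its HTTP stack to httpx, per the bullet above).

```python
import huggingface_hub
from packaging import version

HF_HUB_IS_V1 = version.parse(huggingface_hub.__version__) >= version.parse("1.0.0")

if HF_HUB_IS_V1:
    ...  # 1.x code path (httpx-based HTTP stack)
else:
    ...  # 0.x code path (requests-based HTTP stack)
```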
-
- 24 Sep, 2025 4 commits
-
-
Aryan authored
* update
* update
* add coauthor
  Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>
* improve test
* handle ip adapter params correctly
* fix chroma qkv fusion test
* fix fastercache implementation
* fix more tests
* fight more tests
* add back set_attention_backend
* update
* update
* make style
* make fix-copies
* make ip adapter processor compatible with attention dispatcher
* refactor chroma as well
* remove rmsnorm assert
* minify and deprecate npu/xla processors
* update
* refactor
* refactor; support flash attention 2 with cp
* fix
* support sage attention with cp
* make torch compile compatible
* update
* refactor
* update
* refactor
* refactor
* add ulysses backward
* try to make dreambooth script work; accelerator backward not playing well
* Revert "try to make dreambooth script work; accelerator backward not playing well"
  This reverts commit 768d0ea6fa6a305d12df1feda2afae3ec80aa449.
* workaround compilation problems with triton when doing all-to-all
* support wan
* handle backward correctly
* support qwen
* support ltx
* make fix-copies
* Update src/diffusers/models/modeling_utils.py
  Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* apply review suggestions
* update docs
* add explanation
* make fix-copies
* add docstrings
* support passing parallel_config to from_pretrained
* apply review suggestions
* make style
* update
* Update docs/source/en/api/parallel.md
  Co-authored-by: Aryan <aryan@huggingface.co>
* up
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
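A hedged sketch of the API surface referenced above (parallel_config passed to from_pretrained, set_attention_backend); the config class name, backend string, and model id are assumptions based on the parallelism docs this work adds, not verified usage.

```python
import torch
from diffusers import ContextParallelConfig, WanTransformer3DModel

# Assumes a torch.distributed process group is already initialized across the participating ranks.
transformer = WanTransformer3DModel.from_pretrained(
    "org/wan-checkpoint",  # placeholder model id
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    parallel_config=ContextParallelConfig(ring_degree=2),
)
transformer.set_attention_backend("flash")  # backend name is illustrative
```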
-
Alberto Chimenti authored
Fixed WanVACEPipeline to allow prompt to be None and skip encoding step
-
Yao Matrix authored
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
-
Dhruv Nair authored
* update * update * update
-
- 23 Sep, 2025 1 commit
-
-
Dhruv Nair authored
* update * update
-
- 22 Sep, 2025 2 commits
-
-
SahilCarterr authored
* Fixes chroma docs
* fix docs
  fixed docs are now consistent
-
Sayak Paul authored
* factor out the overlaps in save_lora_weights(). * remove comment. * remove comment. * up * fix-copies
-