- 24 Oct, 2025 2 commits
-
-
YiYi Xu authored
* add hunyuanimage2.1
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
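For context, a newly added pipeline like this is normally reachable through the standard `DiffusionPipeline.from_pretrained` entry point. A minimal sketch, assuming a Hub repo id such as `tencent/HunyuanImage-2.1` and working auto-pipeline mapping (both assumptions, not confirmed by this commit):

```python
import torch
from diffusers import DiffusionPipeline

# Hypothetical repo id; substitute the actual HunyuanImage 2.1 checkpoint.
pipe = DiffusionPipeline.from_pretrained(
    "tencent/HunyuanImage-2.1", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

image = pipe(prompt="a watercolor painting of a lighthouse at dusk").images[0]
image.save("hunyuanimage_sample.png")
```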
-
Sayak Paul authored
-
- 23 Oct, 2025 4 commits
-
-
Dhruv Nair authored
* update * update * update
-
Aishwarya Badlani authored
* Fix MPS compatibility in get_1d_sincos_pos_embed_from_grid #12432
* Fix trailing whitespace in docstring
* Apply style fixes
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
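The MPS issue here is the usual one: Apple's MPS backend has no float64 support, so positional-embedding helpers that build their frequency table in double precision fail on that device. A minimal sketch of the idea behind `get_1d_sincos_pos_embed_from_grid`, kept in float32 on the input's device (an illustrative reimplementation, not the function's actual code):

```python
import torch

def sincos_pos_embed_1d(embed_dim: int, pos: torch.Tensor) -> torch.Tensor:
    """Illustrative 1D sine-cosine positional embedding.

    Frequencies are built in float32 on pos.device, avoiding the float64
    ops that the MPS backend cannot execute.
    """
    assert embed_dim % 2 == 0
    half = embed_dim // 2
    omega = torch.arange(half, dtype=torch.float32, device=pos.device) / half
    omega = 1.0 / (10000.0 ** omega)                                      # (half,)
    angles = pos.to(torch.float32).reshape(-1)[:, None] * omega[None, :]  # (M, half)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=1)       # (M, embed_dim)
```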
-
kaixuanliu authored
* fix CI bug for kandinsky3_img2img case
* update code
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
-
Sayak Paul authored
* add a lightweight test suite for attention backends. * up * up * Apply suggestions from code review * formatting
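The actual suite lives in the repository's tests, but the flavor of such a check is easy to sketch: push the same query/key/value through different attention code paths and assert the outputs agree within tolerance. A minimal, self-contained example using PyTorch's fused SDPA against a naive reference (the real tests exercise diffusers' own backend dispatch, which is not reproduced here):

```python
import pytest
import torch
import torch.nn.functional as F

@pytest.mark.parametrize("seq_len", [8, 32])
def test_sdpa_matches_naive_reference(seq_len):
    torch.manual_seed(0)
    q, k, v = (torch.randn(1, 4, seq_len, 16) for _ in range(3))

    # Naive reference implementation of scaled dot-product attention.
    scores = q @ k.transpose(-1, -2) / (q.shape[-1] ** 0.5)
    reference = torch.softmax(scores, dim=-1) @ v

    # Fused kernel; both paths should agree within numerical tolerance.
    fused = F.scaled_dot_product_attention(q, k, v)
    torch.testing.assert_close(fused, reference, atol=1e-5, rtol=1e-5)
```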
-
- 22 Oct, 2025 5 commits
-
-
Sayak Paul authored
xfail the test_wuerstchen_prior test
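Marking a known-bad test as an expected failure keeps it visible without blocking CI; a minimal pytest sketch (the reason string and test body are illustrative, not taken from the actual change):

```python
import pytest

@pytest.mark.xfail(reason="known failure, tracked separately", strict=False)
def test_wuerstchen_prior():
    # Placeholder body; the real test exercises the Würstchen prior pipeline.
    raise AssertionError("reproduces the tracked failure")
```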
-
YiYi Xu authored
add
-
Álvaro Somoza authored
fix
-
Sayak Paul authored
* up * correct wording. * up * up * up
-
David Bertoin authored
* rename photon to prx
* rename photon into prx
* Revert .gitignore to state before commit b7fb0fe9d63bf766bbe3c42ac154a043796dd370
* rename photon to prx
* rename photon into prx
* Revert .gitignore to state before commit b7fb0fe9d63bf766bbe3c42ac154a043796dd370
* make fix-copies
-
- 21 Oct, 2025 5 commits
-
-
vb authored
* purge HF_HUB_ENABLE_HF_TRANSFER; promote Xet
* purge HF_HUB_ENABLE_HF_TRANSFER; promote Xet x2
* restrict docker build test to the ones we actually use in CI
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
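Background for this docs change: with a recent `huggingface_hub`, Xet-backed downloads are picked up automatically once the `hf_xet` package is installed, so examples no longer need the old `HF_HUB_ENABLE_HF_TRANSFER=1` environment variable. A hedged sketch (the repo id is a placeholder, and exact transfer behavior depends on the installed `huggingface_hub` version and whether the repo is Xet-backed):

```python
# pip install -U "huggingface_hub[hf_xet]"
from huggingface_hub import snapshot_download

# No HF_HUB_ENABLE_HF_TRANSFER needed: when hf_xet is installed and the repo
# is Xet-backed, downloads take the Xet transfer path automatically.
local_dir = snapshot_download("stabilityai/stable-diffusion-xl-base-1.0")
print(local_dir)
```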
-
David Bertoin authored
* Add Photon model and pipeline support. This commit adds support for the Photon image generation model:
  - PhotonTransformer2DModel: core transformer architecture
  - PhotonPipeline: text-to-image generation pipeline
  - attention processor updates for the Photon-specific attention mechanism
  - conversion script for loading Photon checkpoints
  - documentation and tests
* just store the T5Gemma encoder
* enhance_vae_properties only if a vae is provided
* remove autocast for the text encoder forward
* BF16 example
* conditioned CFG
* remove enhance_vae and use vae.config directly when possible
* move PhotonAttnProcessor2_0 into transformer_photon
* remove the einops dependency; now inherits from AttentionMixin
* unify the structure of the forward block
* update doc (twice)
* fix T5Gemma loading from the Hub
* fix timestep shift
* remove LoRA support from the doc
* rename EmbedND to PhotoEmbedND
* remove the modulation dataclass
* put _attn_forward and _ffn_forward logic in PhotonBlock's forward
* rename LastLayer to FinalLayer
* remove LoRA-related code
* rename vae_spatial_compression_ratio to vae_scale_factor
* support prompt_embeds in __call__
* move the cross-attention conditioning computation out of the denoising loop
* add negative prompts
* use _import_structure for lazy loading
* make quality + style
* add pipeline test + corresponding fixes
* utility function that determines the default resolution given the VAE
* refactor PhotonAttention to match the Flux pattern
* built-in RMSNorm
* revert accidental .gitignore change
* parameter names match the standard diffusers conventions
* renaming and removal of unnecessary attribute setting
* several review updates to docs/source/en/api/pipelines/photon.md
* quantization example
* added doc to toctree
* use dispatch_attention_fn for multiple attention backend support
* naming changes
* make fix copy
* add PhotonTransformer2DModel to TYPE_CHECKING imports
* make fix-copies
* use Tuple instead of tuple
* restrict the version of transformers
* review updates to tests/pipelines/photon/test_pipeline_photon.py
* change | to Optional
* fix nits
* use typing Dict
Co-authored-by: davidb <davidb@worker-10.soperator-worker-svc.soperator.svc.cluster.local>
Co-authored-by: David Briand <david@photoroom.com>
Co-authored-by: davidb <davidb@worker-8.soperator-worker-svc.soperator.svc.cluster.local>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
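Given the components listed above, usage would presumably follow the standard text-to-image pattern. A minimal sketch using the class names from the commit message; the checkpoint repo id is a placeholder assumption, and note the pipeline was later renamed to PRX (see the 22 Oct rename commit above):

```python
import torch
from diffusers import PhotonPipeline

# Placeholder repo id; use the actual published Photon/PRX checkpoint.
pipe = PhotonPipeline.from_pretrained("Photoroom/photon", torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = pipe(
    prompt="a studio photo of a ceramic mug on a wooden table",
    num_inference_steps=28,
).images[0]
image.save("photon_sample.png")
```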
-
Sayak Paul authored
-
Steven Liu authored
* reorganize
* fix
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
-
Fei Xie authored
Fix incorrect temporary variable used as the key when replacing the adapter name in the state dict within the load_lora_adapter function
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
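The class of bug being fixed is easy to illustrate: when rewriting state-dict keys to swap in a new adapter name, reusing the wrong temporary variable as the dictionary key leaves the old name in place. A toy sketch of the pattern (hypothetical helper, not the actual load_lora_adapter code):

```python
def rename_adapter_in_state_dict(state_dict, old_name, new_name):
    """Return a copy of state_dict with `old_name` replaced by `new_name` in every key."""
    renamed = {}
    for key, value in state_dict.items():
        new_key = key.replace(f".{old_name}.", f".{new_name}.")
        # The bug class: writing `renamed[key] = value` here instead of using
        # `new_key` would silently keep the old adapter name in the stored keys.
        renamed[new_key] = value
    return renamed

sd = {"unet.lora_A.default.weight": 1}
assert "unet.lora_A.custom.weight" in rename_adapter_in_state_dict(sd, "default", "custom")
```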
-
- 20 Oct, 2025 2 commits
-
-
Dhruv Nair authored
update
-
dg845 authored
Refactor QwenEmbedRope to only use the LRU cache for RoPE caching
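Caching RoPE frequency tables with an LRU cache is a small, self-contained idea. A hedged sketch of the general pattern, keyed on sequence length and head dimension (this is not the QwenEmbedRope implementation, just an illustration of lru_cache-based RoPE caching):

```python
from functools import lru_cache

import torch

@lru_cache(maxsize=32)
def rope_freqs(seq_len: int, dim: int, theta: float = 10000.0) -> torch.Tensor:
    """Compute (and memoize) cos/sin rotary frequencies for a given length."""
    inv_freq = 1.0 / (theta ** (torch.arange(0, dim, 2).float() / dim))
    positions = torch.arange(seq_len).float()
    angles = torch.outer(positions, inv_freq)           # (seq_len, dim // 2)
    return torch.stack([angles.cos(), angles.sin()])    # (2, seq_len, dim // 2)

# Repeated calls with the same (seq_len, dim) hit the cache instead of recomputing.
cos_sin = rope_freqs(128, 64)
```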
-
- 18 Oct, 2025 1 commit
-
-
Lev Novitskiy authored
* add kandinsky5 transformer pipeline first version
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Charles <charles@huggingface.co>
-
- 17 Oct, 2025 3 commits
-
-
Ali Imran authored
* cleanup of runway model * quality fixes
-
Sayak Paul authored
* up * up * up * up * up * up * up * up * up
-
Sayak Paul authored
* xfail more incorrect transformer imports. * xfail more. * up * up * up
-
- 16 Oct, 2025 2 commits
-
-
Steven Liu authored
* check links * update * feedback * remove
-
Steven Liu authored
* checks
* feedback
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 15 Oct, 2025 4 commits
-
-
YiYi Xu authored
update
Co-authored-by: Aryan <aryan@huggingface.co>
-
Sayak Paul authored
fix clapconfig for text backbone in audioldm2
-
Sayak Paul authored
-
Steven Liu authored
fix broken links
-
- 14 Oct, 2025 2 commits
-
-
Steven Liu authored
* init * fix * batch inf * feedback * update
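If the "batch inf" item refers to documenting batched prompts, the core pattern is passing a list of prompts so the pipeline returns one image per entry. A minimal sketch (the model id is chosen for illustration):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

prompts = ["a red fox in the snow", "a lighthouse at dawn", "a bowl of ramen"]
images = pipe(prompt=prompts, num_inference_steps=30).images  # one image per prompt
for i, image in enumerate(images):
    image.save(f"batch_{i}.png")
```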
-
Meatfucker authored
Fix missing load_video documentation and load_video import in the WanVideoToVideoPipeline example code (#12472)
* Update utilities.md: add the missing load_video documentation
* Update pipeline_wan_video2video.py: add the missing load_video import in the example code
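For reference, `load_video` is the diffusers utility the fixed example now imports; it reads a local path or URL into a list of PIL frames that video-to-video pipelines accept. A short sketch (the URL is a placeholder):

```python
from diffusers.utils import load_video

# Placeholder URL; load_video also accepts a local file path.
frames = load_video("https://example.com/input.mp4")
print(f"loaded {len(frames)} frames, first frame size: {frames[0].size}")
```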
-
- 13 Oct, 2025 1 commit
-
-
Manith Ratnayake authored
-
- 11 Oct, 2025 1 commit
-
-
Steven Liu authored
* fix syntax * fix * style * fix
-
- 10 Oct, 2025 1 commit
-
-
Sayak Paul authored
* up * get ready * fix import * up * up
-
- 09 Oct, 2025 1 commit
-
-
Sayak Paul authored
-
- 08 Oct, 2025 4 commits
-
-
Sayak Paul authored
* revisit the installations in CI. * up * up * up * empty * up * up * up
-
Sayak Paul authored
* up * unguard.
-
Sayak Paul authored
* fix dockerfile definitions. * python 3.10 slim. * up * up * up * up * up * revert pr_tests.yml changes * up * up * reduce python version for torch 2.1.0
-
Sayak Paul authored
* start * fix * up
-
- 07 Oct, 2025 2 commits
-
-
Linoy Tsaban authored
* fix bug when offload and cache_latents both enabled
-
Sayak Paul authored
-