"vscode:/vscode.git/clone" did not exist on "0db766ba7708f93b615ab2ccc9d9595c1191a4aa"
- 22 Oct, 2025 1 commit
David Bertoin authored
* rename photon to prx
* rename photon into prx
* Revert .gitignore to state before commit b7fb0fe9d63bf766bbe3c42ac154a043796dd370
* make fix-copies
- 21 Oct, 2025 1 commit
David Bertoin authored
* Add Photon model and pipeline support. This commit adds support for the Photon image generation model:
  - PhotonTransformer2DModel: core transformer architecture
  - PhotonPipeline: text-to-image generation pipeline
  - attention processor updates for the Photon-specific attention mechanism
  - conversion script for loading Photon checkpoints
  - documentation and tests
* just store the T5Gemma encoder
* enhance_vae_properties only if vae is provided
* remove autocast for text encoder forward
* BF16 example
* conditioned CFG
* remove enhance_vae and use vae.config directly when possible
* move PhotonAttnProcessor2_0 into transformer_photon
* remove einops dependency; now inherits from AttentionMixin
* unify the structure of the forward block
* update doc (×2)
* fix T5Gemma loading from hub
* fix timestep shift
* remove LoRA support from doc
* rename EmbedND to PhotonEmbedND
* remove modulation dataclass
* put _attn_forward and _ffn_forward logic in PhotonBlock's forward
* rename LastLayer to FinalLayer
* remove LoRA-related code
* rename vae_spatial_compression_ratio to vae_scale_factor
* support prompt_embeds in __call__
* move cross-attention conditioning computation out of the denoising loop
* add negative prompts
* use _import_structure for lazy loading
* make quality + style
* add pipeline test + corresponding fixes
* utility function that determines the default resolution given the VAE
* refactor PhotonAttention to match the Flux pattern
* built-in RMSNorm
* revert accidental .gitignore change
* parameter names match the standard diffusers conventions
* renaming and remove unnecessary attribute setting
* Update docs/source/en/api/pipelines/photon.md (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* quantization example
* added doc to toctree
* Update docs/source/en/api/pipelines/photon.md (×3, Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
* use dispatch_attention_fn for multiple attention backend support
* naming changes
* make fix-copies
* Update docs/source/en/api/pipelines/photon.md (Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>)
* Add PhotonTransformer2DModel to TYPE_CHECKING imports
* make fix-copies
* Use Tuple instead of tuple (Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>)
* restrict the version of transformers (Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>)
* Update tests/pipelines/photon/test_pipeline_photon.py (×2, Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>)
* change `|` to Optional
* fix nits
* use typing Dict
---------
Co-authored-by: davidb <davidb@worker-10.soperator-worker-svc.soperator.svc.cluster.local>
Co-authored-by: David Briand <david@photoroom.com>
Co-authored-by: davidb <davidb@worker-8.soperator-worker-svc.soperator.svc.cluster.local>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
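As orientation for this entry, here is a minimal sketch of the text-to-image flow the PR describes (PhotonPipeline, BF16, negative prompts). The checkpoint id and step count are assumptions, not stated in the log:

```python
# Hedged sketch of the PhotonPipeline usage this PR adds.
import torch
from diffusers import PhotonPipeline

pipe = PhotonPipeline.from_pretrained(
    "Photoroom/photon",          # hypothetical repo id; substitute a real checkpoint
    torch_dtype=torch.bfloat16,  # the PR adds a BF16 example
).to("cuda")

image = pipe(
    prompt="a lighthouse at dusk, film photography",
    negative_prompt="blurry, low quality",  # negative prompts were added late in the PR
    num_inference_steps=28,                 # assumed; the pipeline default may differ
).images[0]
image.save("photon.png")
```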
- 18 Jun, 2025 1 commit
Dhruv Nair authored
* update (×24)
- 14 Jun, 2025 1 commit
Edna authored
* working state from hameerabbasi and iddl
* working state from hameerabbasi and iddl (transformer)
* working state (normalization)
* working state (embeddings)
* add chroma loader
* add chroma to mappings
* add chroma to transformer init
* take out variant stuff
* get decently far in changing variant stuff
* add chroma init
* make chroma output class
* add chroma transformer to dummy tp
* add chroma to init (×2)
* fix single file
* update (×2)
* add chroma to auto pipeline
* add chroma to pipeline init
* change to chroma transformer
* take out variant from blocks
* swap embedder location
* remove prompt_2
* work on swapping text encoders
* remove mask function
* don't modify mask (for now)
* wrap attn mask
* no attn mask (can't get it to work)
* remove pooled prompt embeds
* change to my own unpooled embedder
* fix load
* take pooled projections out of transformer
* ensure correct dtype for chroma embeddings
* update
* use dn6 attn mask + fix true_cfg_scale
* use chroma pipeline output
* use DN6 embeddings
* remove guidance
* remove guidance embed (pipeline)
* remove guidance from embeddings
* don't return length
* don't change dtype
* remove unused stuff, fix up docs
* add chroma autodoc
* add .md (oops)
* initial chroma docs
* undo "don't change dtype"
* undo arxiv change, unsure why that happened
* fix hf papers regression in more places
* Update docs/source/en/api/pipelines/chroma.md (Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>)
* do_cfg -> self.do_classifier_free_guidance
* Update docs/source/en/api/models/chroma_transformer.md (Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>)
* Update chroma.md
* Move chroma layers into transformer
* Remove pruned AdaLayerNorms
* Add chroma fast tests
* (untested) batch cond and uncond
* Add # Copied from for shift
* Update # Copied from statements
* update norm imports
* Revert cond + uncond batching
* Add transformer tests
* move chroma test (oops)
* chroma init
* fix chroma pipeline fast tests
* Update src/diffusers/models/transformers/transformer_chroma.py (Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>)
* Move Approximator and Embeddings
* Fix auto pipeline + make style, quality
* make style
* Apply style fixes
* switch to new input ids
* fix # Copied from error
* remove # Copied from on protected members
* try to fix import
* fix import
* make fix-copies
* revert style fix
* update chroma transformer params
* update chroma transformer approximator init params
* update to pad tokens
* fix batch inference
* Make more pipeline tests work
* Make most transformer tests work
* fix docs
* make style, make quality
* skip batch tests
* fix test skipping
* fix test skipping again
* fix for tests
* Fix all pipeline tests
* update
* push local changes, fix docs
* add encoder test, remove pooled dim
* default proj dim
* fix tests
* fix equal size list input
* Revert "fix equal size list input" (reverts commit 3fe4ad67d58d83715bc238f8654f5e90bfc5653c)
* update (×5)
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
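For context, a minimal sketch of using the ChromaPipeline this PR introduces. The repo id is an assumption; Chroma removes the guidance embedding, so guidance here is real classifier-free guidance with a negative prompt (the commits mention true_cfg_scale during development, and the final parameter name may differ):

```python
# Hedged sketch of ChromaPipeline usage; checkpoint id is hypothetical.
import torch
from diffusers import ChromaPipeline

pipe = ChromaPipeline.from_pretrained(
    "lodestones/Chroma",        # assumed checkpoint id
    torch_dtype=torch.bfloat16,
).to("cuda")

# With the guidance embed pruned, CFG runs cond + uncond passes in the pipeline.
image = pipe(
    prompt="a high-fantasy castle above the clouds",
    negative_prompt="low quality, deformed",
    guidance_scale=4.0,  # parameter name assumed; see true_cfg_scale in the commits
).images[0]
image.save("chroma.png")
```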
- 14 Oct, 2024 1 commit
Yuxuan.Zhang authored
* merge 9588
* max_shard_size="5GB" for Colab running
* conversion script updates; modeling test; refactor transformer
* make fix-copies
* Update convert_cogview3_to_diffusers.py
* initial pipeline draft
* make style
* fight bugs 🐛 🪳
* add example
* add tests; refactor
* make style
* make fix-copies
* add co-author YiYi Xu <yixu310@gmail.com>
* remove files
* add docs
* add co-author (Co-Authored-By: YiYi Xu <yixu310@gmail.com>)
* fight docs
* address reviews
* make style
* make model work
* remove qkv fusion
* remove qkv fusion tests
* address review comments
* fix make fix-copies error
* remove None and TODO
* for FP16 (draft)
* make style
* remove dynamic cfg
* remove pooled_projection_dim as a parameter
* fix tests
---------
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
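A minimal sketch of the CogView3 pipeline this PR lands; the class and checkpoint names follow the public diffusers/THUDM releases and should be treated as assumptions if your version differs:

```python
# Hedged sketch of CogView3 text-to-image usage.
import torch
from diffusers import CogView3PlusPipeline

pipe = CogView3PlusPipeline.from_pretrained(
    "THUDM/CogView3-Plus-3B",   # assumed checkpoint id
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    prompt="a red panda reading a book in a library",
    guidance_scale=7.0,
    num_inference_steps=50,
).images[0]
image.save("cogview3.png")
```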
- 31 Jan, 2024 1 commit
Sayak Paul authored
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
- 07 Nov, 2023 1 commit
Sayak Paul authored
* fix: import bug
* fix (×2)
* fix import utils for lcm
* fix: pixart alpha init
* Fix
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
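Since this entry and the PixArt PR below both touch diffusers' package init files, here is an illustrative sketch of the `_import_structure` lazy-loading pattern those fixes operate on; the module and class names are examples, not the exact file contents:

```python
# Sketch of the lazy-import pattern in diffusers __init__ modules.
import sys
from typing import TYPE_CHECKING

from diffusers.utils import _LazyModule

# Maps submodule name -> public names it exports (illustrative entry).
_import_structure = {
    "pipeline_pixart_alpha": ["PixArtAlphaPipeline"],
}

if TYPE_CHECKING:
    # Static type checkers see the real imports...
    from .pipeline_pixart_alpha import PixArtAlphaPipeline
else:
    # ...while at runtime attributes resolve lazily, so importing the package
    # does not pull in heavy dependencies until first use.
    sys.modules[__name__] = _LazyModule(
        __name__, globals()["__file__"], _import_structure
    )
```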
- 06 Nov, 2023 1 commit
Sayak Paul authored
* init pixart alpha pipeline
* fix: import
* script (×3)
* add: vae to the pipeline
* add: vae_scale_factor
* add: checkpoint_path
* clean conversion script a bit
* size embeddings
* fix: size embedding
* update script
* support for interpolation of position embedding
* support for conditioning
* .. (×3)
* final layer (×2)
* align if encode_prompt
* support for caption embedding
* refactor (×3)
* start cross attention (×2)
* cross_attention_dim
* cross (×2)
* support for resolution and aspect_ratio
* support for caption projection
* refactor patch embeddings
* batch_size
* up
* commit (×3)
* squeeze (×13)
* fix final block (×3)
* clean
* fix: interpolation scale
* debugging (long run of repeats)
* make --checkpoint_path non-required
* debugging (long run of repeats)
* remove num_tokens
* timesteps -> timestep (×6)
* debug (×2)
* update conversion script (×3)
* debug (×3)
* clean
* debug (×12)
* fix (×13)
* clean
* fix (×2)
* boom (×2)
* some changes
* boom
* save
* up
* remove i
* fix more tests
* DPMSolverMultistepScheduler
* fix
* offloading
* fix conversion script (×2)
* remove print
* remove support for negative prompt embeds
* typo
* remove extra kwargs
* bring conversion script to where it was
* fix
* trying my luck
* trying my luck again
* again (×3)
* clean up
* up (×2)
* update example
* support for 512
* remove spacing
* finalize docs
* test debug
* fix: assertion values
* debug (×3)
* fix: repeat
* remove prints
* Apply suggestions from code review (×2)
* Correct more
* Apply suggestions from code review
* Change all
* Clean more
* fix more
* Fix more (×2)
* Correct more
* address patrick's comments
* remove unneeded args
* clean up pipeline
* style
* make the use of additional conditions better conditioned
* None better
* dtype
* height and width validation
* add a note about size brackets
* fix
* spit out slow test outputs
* fix?
* fix optional test
* fix more
* remove unneeded comment
* debug
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
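To close out, a minimal sketch of the PixArt-Alpha pipeline this PR builds up; the checkpoint id follows the public PixArt-alpha release and is an assumption here:

```python
# Hedged sketch of PixArtAlphaPipeline usage.
import torch
from diffusers import PixArtAlphaPipeline

pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",  # assumed checkpoint id
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # the commits mention offloading support

image = pipe(
    prompt="an astronaut riding a horse on the moon",
    num_inference_steps=20,  # DPMSolverMultistepScheduler (per the commits) suits low step counts
).images[0]
image.save("pixart.png")
```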