- 01 Dec, 2025 1 commit
YiYi Xu authored
* add --------- Co-authored-by:
yiyi@huggingface.co <yiyi@ip-26-0-161-123.ec2.internal> Co-authored-by:
yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 29 Nov, 2025 1 commit
DefTruth authored
* allow type-check for ZImageTransformer2DModel * make fix-copies
-
- 25 Nov, 2025 2 commits
Sayak Paul authored
* add vae * Initial commit for Flux 2 Transformer implementation * add pipeline part * small edits to the pipeline and conversion * update conversion script * fix * up up * finish pipeline * Remove Flux IP Adapter logic for now * Remove deprecated 3D id logic * Remove ControlNet logic for now * Add link to ViT-22B paper as reference for parallel transformer blocks such as the Flux 2 single stream block * update pipeline * Don't use biases for input projs and output AdaNorm * up * Remove bias for double stream block text QKV projections * Add script to convert Flux 2 transformer to diffusers * make style and make quality * fix a few things. * allow sft files to go. * fix image processor * fix batch * style a bit * Fix some bugs in Flux 2 transformer implementation * Fix dummy input preparation and fix some test bugs * fix dtype casting in timestep guidance module. * resolve conflicts. * remove ip adapter stuff. * Fix Flux 2 transformer consistency test * Fix bug in Flux2TransformerBlock (double stream block) * Get remaining Flux 2 transformer tests passing * make style; make quality; make fix-copies * remove stuff. * fix type annotation. * remove unneeded stuff from tests * tests * up * up * add sf support * Remove unused IP Adapter and ControlNet logic from transformer (#9) * copied from * Apply suggestions from code review Co-authored-by:
YiYi Xu <yixu310@gmail.com> Co-authored-by:
apolinário <joaopaulo.passos@gmail.com> * up * up * up * up * up * Refactor Flux2Attention into separate classes for double stream and single stream attention * Add _supports_qkv_fusion to AttentionModuleMixin to allow subclasses to disable QKV fusion * Have Flux2ParallelSelfAttention inherit from AttentionModuleMixin with _supports_qkv_fusion=False * Log debug message when calling fuse_projections on an AttentionModuleMixin subclass that does not support QKV fusion * Address review comments * Update src/diffusers/pipelines/flux2/pipeline_flux2.py Co-authored-by:
YiYi Xu <yixu310@gmail.com> * up * Remove maybe_allow_in_graph decorators for Flux 2 transformer blocks (#12) * up * support ostris loras. (#13) * up * update schedule * up * up (#17) * add training scripts (#16) * add training scripts Co-authored-by:
Linoy Tsaban <linoytsaban@gmail.com> * model cpu offload in validation. * add flux.2 readme * add img2img and tests * cpu offload in log validation * Apply suggestions from code review * fix * up * fixes * remove i2i training tests for now. --------- Co-authored-by:
Linoy Tsaban <linoytsaban@gmail.com> Co-authored-by:
linoytsaban <linoy@huggingface.co> * up --------- Co-authored-by:
yiyixuxu <yixu310@gmail.com> Co-authored-by:
Daniel Gu <dgu8957@gmail.com> Co-authored-by:
yiyi@huggingface.co <yiyi@ip-10-53-87-203.ec2.internal> Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> Co-authored-by:
apolinário <joaopaulo.passos@gmail.com> Co-authored-by:
yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal> Co-authored-by:
Linoy Tsaban <linoytsaban@gmail.com> Co-authored-by:
linoytsaban <linoy@huggingface.co>
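For orientation, a minimal text-to-image sketch of the Flux 2 addition above, assuming the class is exported as `Flux2Pipeline` (the PR adds `src/diffusers/pipelines/flux2/pipeline_flux2.py`) and follows the standard diffusers text-to-image call; the checkpoint id is a placeholder.

```python
# Hedged sketch: Flux2Pipeline name, call signature, and checkpoint id are assumptions.
import torch
from diffusers import Flux2Pipeline

pipe = Flux2Pipeline.from_pretrained(
    "black-forest-labs/FLUX.2-dev",  # placeholder checkpoint id
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # the PR's training scripts also offload during validation

image = pipe(
    prompt="a photo of an astronaut riding a horse on the moon",
    num_inference_steps=28,
    guidance_scale=4.0,
).images[0]
image.save("flux2.png")
```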
-
Jerry Wu authored
* Add Support for Z-Image. * Reformatting with make style, black & isort. * Remove init, Modify import utils, Merge forward in transformers block, Remove once func in pipeline. * modified main model forward, freqs_cis left * refactored to add B dim * fixed stack issue * fixed modulation bug * fixed modulation bug * fix bug * remove value_from_time_aware_config * styling * Fix neg embed and divide bug; Reuse pad zero tensor; Turn cat -> repeat; Add hint for attn processor. * Replace padding with pad_sequence; Add gradient checkpointing. * Fix flash_attn3 in dispatch attn backend via _flash_attn_forward, replacing its original implementation; Add docstring in pipeline for that. * Fix Docstring and Make Style. * Revert "Fix flash_attn3 in dispatch attn backend via _flash_attn_forward, replacing its original implementation; Add docstring in pipeline for that." This reverts commit fbf26b7ed11d55146103c97740bad4a5f91744e0. * update z-image docstring * Revert attention dispatcher * update z-image docstring * styling * Recover attention_dispatch.py with its original implementation; a later commit will handle fa3 compatibility. * Fix previous bug, and support passing prompt_embeds (pre-encoded prompts) as a list of torch Tensors. * Remove einops dependency. * remove redundant imports & make fix-copies * fix import --------- Co-authored-by: liudongyang <liudongyang0114@gmail.com>
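A rough usage sketch for the Z-Image addition; the pipeline class is not spelled out in this message, so the example loads through the generic `DiffusionPipeline` entry point, which resolves the concrete class from the checkpoint. The repository id is a placeholder.

```python
# Hedged sketch: generic loading so the exact Z-Image pipeline class name is not assumed;
# the checkpoint id below is a placeholder.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",  # placeholder checkpoint id
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(prompt="a watercolor fox in a snowy forest").images[0]
image.save("z_image.png")
```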
-
- 19 Nov, 2025 1 commit
Sayak Paul authored
* refactor how attention kernels from hub are used. * up * refactor according to Dhruv's ideas. Co-authored-by:
Dhruv Nair <dhruv@huggingface.co> * empty Co-authored-by:
Dhruv Nair <dhruv@huggingface.co> * empty Co-authored-by:
Dhruv Nair <dhruv@huggingface.co> * empty Co-authored-by:
dn6 <dhruv@huggingface.co> * up --------- Co-authored-by:
Dhruv Nair <dhruv@huggingface.co> Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com>
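The refactor above changes how hub-provided attention kernels are dispatched; below is a sketch of switching backends on a loaded model, assuming the `set_attention_backend` helper and treating the backend name as illustrative rather than confirmed.

```python
# Hedged sketch: set_attention_backend and the "_flash_3_hub" backend name are
# assumptions about the attention dispatcher this PR refactors.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

try:
    # Ask the dispatcher to use a Flash-Attention 3 kernel fetched from the Hub.
    pipe.transformer.set_attention_backend("_flash_3_hub")
except (ValueError, ImportError):
    # Fall back to the default backend (PyTorch SDPA) if the kernel is unavailable.
    pass
```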
-
- 17 Nov, 2025 1 commit
Junsong Chen authored
* move sana-video to a new dir and add `SanaImageToVideoPipeline` without modification; * fix bug and run text/image-to-video successfully; * make style; quality; fix-copies; * add sana image-to-video pipeline in markdown; * add test case for sana image-to-video; * make style; * add an init file in sana-video test dir; * Update src/diffusers/pipelines/sana_video/pipeline_sana_video_i2v.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update tests/pipelines/sana_video/test_sana_video_i2v.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update src/diffusers/pipelines/sana_video/pipeline_sana_video_i2v.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update src/diffusers/pipelines/sana_video/pipeline_sana_video_i2v.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update tests/pipelines/sana_video/test_sana_video_i2v.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * minor update; * fix bug and skip fp16 save test; Co-authored-by:
Yuyang Zhao <43061147+HeliosZhao@users.noreply.github.com> * Update src/diffusers/pipelines/sana_video/pipeline_sana_video_i2v.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update src/diffusers/pipelines/sana_video/pipeline_sana_video_i2v.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update src/diffusers/pipelines/sana_video/pipeline_sana_video_i2v.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update src/diffusers/pipelines/sana_video/pipeline_sana_video_i2v.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * add copied from for `encode_prompt` * Apply style fixes --------- Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> Co-authored-by:
Yuyang Zhao <43061147+HeliosZhao@users.noreply.github.com> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
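A minimal sketch of the new `SanaImageToVideoPipeline`, assuming it mirrors the other diffusers image-to-video pipelines (image plus prompt in, frames out); the checkpoint id, frame count, and fps are placeholders.

```python
# Hedged sketch: checkpoint id, num_frames, and fps are placeholders; the call
# signature is assumed to follow other diffusers image-to-video pipelines.
import torch
from diffusers import SanaImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = SanaImageToVideoPipeline.from_pretrained(
    "Efficient-Large-Model/SANA-Video-I2V",  # placeholder checkpoint id
    torch_dtype=torch.bfloat16,
).to("cuda")

image = load_image("input.png")
frames = pipe(image=image, prompt="the camera slowly zooms in", num_frames=81).frames[0]
export_to_video(frames, "sana_i2v.mp4", fps=16)
```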
-
- 13 Nov, 2025 1 commit
dg845 authored
--------- Co-authored-by:
Tolga Cangöz <mtcangoz@gmail.com> Co-authored-by:
Tolga Cangöz <46008593+tolgacangoz@users.noreply.github.com>
-
- 12 Nov, 2025 3 commits
Quentin Gallouédec authored
* Update pipeline_skyreels_v2_i2v.py * Update README.md * Update torch_utils.py * Update torch_utils.py * Update guider_utils.py * Update pipeline_ltx.py * Update pipeline_bria.py * Apply suggestion from @qgallouedec * Update autoencoder_kl_qwenimage.py * Update pipeline_prx.py * Update pipeline_wan_vace.py * Update pipeline_skyreels_v2.py * Update pipeline_skyreels_v2_diffusion_forcing.py * Update pipeline_bria_fibo.py * Update pipeline_skyreels_v2_diffusion_forcing_i2v.py * Update pipeline_ltx_condition.py * Update pipeline_ltx_image2video.py * Update regional_prompting_stable_diffusion.py * make style * style * style
-
YiYi Xu authored
* fix * fix
-
a120092009 authored
* Add MLU Support. * fix comment. * rename is_mlu_available to is_torch_mlu_available * Apply style fixes --------- Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
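A small sketch of what the renamed helper enables, assuming `is_torch_mlu_available` is exposed from `diffusers.utils` like the other `is_torch_*_available` checks.

```python
# Hedged sketch: the import path of is_torch_mlu_available is assumed to mirror
# the other is_torch_*_available helpers.
import torch
from diffusers.utils import is_torch_mlu_available

if is_torch_mlu_available():
    device = "mlu"
elif torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"
print(f"running pipelines on {device}")
```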
-
- 10 Nov, 2025 2 commits
YiYi Xu authored
* update, remove intermediate_inputs * support image2video * revert dynamic steps to simplify * refactor vae encoder block * support flf2video! * add support for wan2.2 14B * style * Apply suggestions from code review * input dynamic step -> additional input step * up * fix init * update dtype
-
Jay Wu authored
* add ChronoEdit * add ref to original function & remove wan2.2 logic * Update src/diffusers/pipelines/chronoedit/pipeline_chronoedit.py Co-authored-by:
YiYi Xu <yixu310@gmail.com> * Update src/diffusers/pipelines/chronoedit/pipeline_chronoedit.py Co-authored-by:
YiYi Xu <yixu310@gmail.com> * add ChronoEdit test * add docs * add docs * make fix-copies * fix chronoedit test --------- Co-authored-by:
wjay <wjay@nvidia.com> Co-authored-by:
YiYi Xu <yixu310@gmail.com> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com>
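A rough sketch of the ChronoEdit addition, assuming the class is exported as `ChronoEditPipeline` (the PR adds `pipelines/chronoedit/pipeline_chronoedit.py`) and that, like the Wan-based pipelines it borrows from, it takes an image plus an edit instruction and returns video frames; all ids and values are placeholders.

```python
# Hedged sketch: ChronoEditPipeline class name, checkpoint id, and the
# image + prompt -> frames signature are assumptions based on this PR's description.
import torch
from diffusers import ChronoEditPipeline
from diffusers.utils import export_to_video, load_image

pipe = ChronoEditPipeline.from_pretrained(
    "nvidia/ChronoEdit-14B-Diffusers",  # placeholder checkpoint id
    torch_dtype=torch.bfloat16,
).to("cuda")

image = load_image("room.png")
frames = pipe(image=image, prompt="turn the desk lamp on", num_frames=5).frames[0]
export_to_video(frames, "chronoedit.mp4", fps=8)
```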
-
- 06 Nov, 2025 2 commits
Junsong Chen authored
* 1. add `SanaVideoTransformer3DModel` in transformer_sana_video.py 2. add `SanaVideoPipeline` in pipeline_sana_video.py 3. add all code needed to import `SanaVideoPipeline` * add a sample showing how to use sana-video; * code update; * update hf model path; * update code; * sana-video can run now; * 1. add aspect ratio in sana-video-pipeline; 2. add reshape function in sana-video-processor; 3. fix pth-to-safetensors conversion bugs; * default to use `use_resolution_binning`; * make style; * remove unused code; * Update src/diffusers/models/transformers/transformer_sana_video.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update src/diffusers/models/transformers/transformer_sana_video.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update src/diffusers/models/transformers/transformer_sana_video.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update src/diffusers/pipelines/sana/pipeline_sana_video.py Co-authored-by:
YiYi Xu <yixu310@gmail.com> * Update src/diffusers/models/transformers/transformer_sana_video.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update src/diffusers/models/transformers/transformer_sana_video.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update src/diffusers/models/transformers/transformer_sana_video.py * Update src/diffusers/pipelines/sana/pipeline_sana_video.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update src/diffusers/models/transformers/transformer_sana_video.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update src/diffusers/pipelines/sana/pipeline_sana_video.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * support `dispatch_attention_fn` * 1. add sana-video markdown; 2. fix typos; * add two test cases for sana-video (need check) * fix text-encoder in test-sana-video; * Update tests/pipelines/sana/test_sana_video.py * Update tests/pipelines/sana/test_sana_video.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update tests/pipelines/sana/test_sana_video.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update tests/pipelines/sana/test_sana_video.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update tests/pipelines/sana/test_sana_video.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update tests/pipelines/sana/test_sana_video.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update src/diffusers/pipelines/sana/pipeline_sana_video.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update src/diffusers/video_processor.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * make style make quality make fix-copies * toctree yaml update; * add sana-video-transformer3d markdown; * Apply style fixes --------- Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> Co-authored-by:
YiYi Xu <yixu310@gmail.com> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
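A minimal text-to-video sketch for the `SanaVideoPipeline` added here, assuming the standard diffusers text-to-video call; the checkpoint id and frame count are placeholders.

```python
# Hedged sketch: checkpoint id and num_frames are placeholders; the call is
# assumed to follow other diffusers text-to-video pipelines.
import torch
from diffusers import SanaVideoPipeline
from diffusers.utils import export_to_video

pipe = SanaVideoPipeline.from_pretrained(
    "Efficient-Large-Model/SANA-Video",  # placeholder checkpoint id
    torch_dtype=torch.bfloat16,
).to("cuda")

frames = pipe(prompt="a red panda drinking tea, studio lighting", num_frames=81).frames[0]
export_to_video(frames, "sana_video.mp4", fps=16)
```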
-
Dhruv Nair authored
* update * update * update * update --------- Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 31 Oct, 2025 1 commit
Dhruv Nair authored
update Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
- 28 Oct, 2025 1 commit
galbria authored
* Bria FIBO pipeline * style fixes * fix CR * Refactor BriaFibo classes and update pipeline parameters - Updated BriaFiboAttnProcessor and BriaFiboAttention classes to reflect changes from Flux equivalents. - Modified the _unpack_latents method in BriaFiboPipeline to improve clarity. - Increased the default max_sequence_length to 3000 and added a new optional parameter do_patching. - Cleaned up test_pipeline_bria_fibo.py by removing unused imports and skipping unsupported tests. * edit the docs of FIBO * Remove unused BriaFibo imports and update CPU offload method in BriaFiboPipeline * Refactor FIBO classes to BriaFibo naming convention - Updated class names from FIBO to BriaFibo for consistency across the module. - Modified instances of FIBOEmbedND, FIBOTimesteps, TextProjection, and TimestepProjEmbeddings to reflect the new naming. - Ensured all references in the BriaFiboTransformer2DModel are updated accordingly. * Add BriaFiboTransformer2DModel import to transformers module * Remove unused BriaFibo imports from modular pipelines and add BriaFiboTransformer2DModel and BriaFiboPipeline classes to dummy objects for enhanced compatibility with torch and transformers. * Update BriaFibo classes with copied documentation and fix import typo in pipeline module - Added documentation comments indicating the source of copied code in BriaFiboTransformerBlock and _pack_latents methods. - Corrected the import statement for BriaFiboPipeline in the pipelines module. * Remove unused BriaFibo imports from __init__.py to streamline modular pipelines. * Refactor documentation comments in BriaFibo classes to indicate inspiration from existing implementations - Updated comments in BriaFiboAttnProcessor, BriaFiboAttention, and BriaFiboPipeline to reflect that the code is inspired by other modules rather than copied. - Enhanced clarity on the origins of the methods to maintain proper attribution. * change Inspired by to Based on * add reference link and fix trailing whitespace * Add BriaFiboTransformer2DModel documentation and update comments in BriaFibo classes - Introduced a new documentation file for BriaFiboTransformer2DModel. - Updated comments in BriaFiboAttnProcessor, BriaFiboAttention, and BriaFiboPipeline to clarify the origins of the code, indicating copied sources for better attribution. --------- Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
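For reference, a minimal sketch of `BriaFiboPipeline` as described above, assuming the standard diffusers text-to-image call; the checkpoint id is a placeholder, and `max_sequence_length=3000` is only shown because the PR notes it as the new default.

```python
# Hedged sketch: checkpoint id is a placeholder; max_sequence_length=3000 is the
# default mentioned in the PR and is passed here only for illustration.
import torch
from diffusers import BriaFiboPipeline

pipe = BriaFiboPipeline.from_pretrained(
    "briaai/FIBO",  # placeholder checkpoint id
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="studio photo of a ceramic teapot on a linen tablecloth",
    max_sequence_length=3000,
).images[0]
image.save("fibo.png")
```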
-
- 27 Oct, 2025 1 commit
Mikko Lauri authored
* add aiter attention backend * Apply style fixes --------- Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com>
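A short sketch of opting into the new backend, assuming it registers with the attention dispatcher under the name "aiter" and that the ROCm `aiter` package is installed; the checkpoint id is only illustrative.

```python
# Hedged sketch: the "aiter" backend name and set_attention_backend usage are
# assumptions; this path only makes sense on ROCm with the aiter package installed.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium", torch_dtype=torch.bfloat16
).to("cuda")

pipe.transformer.set_attention_backend("aiter")
image = pipe(prompt="a lighthouse at dusk").images[0]
image.save("aiter_test.png")
```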
-
- 24 Oct, 2025 2 commits
YiYi Xu authored
* add hunyuanimage2.1 --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
-
- 23 Oct, 2025 1 commit
Dhruv Nair authored
* update * update * update
-
- 22 Oct, 2025 1 commit
David Bertoin authored
* rename photon to prx * rename photon into prx * Revert .gitignore to state before commit b7fb0fe9d63bf766bbe3c42ac154a043796dd370 * rename photon to prx * rename photon into prx * Revert .gitignore to state before commit b7fb0fe9d63bf766bbe3c42ac154a043796dd370 * make fix-copies
-
- 21 Oct, 2025 1 commit
David Bertoin authored
* Add Photon model and pipeline support This commit adds support for the Photon image generation model: - PhotonTransformer2DModel: Core transformer architecture - PhotonPipeline: Text-to-image generation pipeline - Attention processor updates for Photon-specific attention mechanism - Conversion script for loading Photon checkpoints - Documentation and tests * just store the T5Gemma encoder * enhance_vae_properties if vae is provided only * remove autocast for text encoder forward * BF16 example * conditioned CFG * remove enhance vae and use vae.config directly when possible * move PhotonAttnProcessor2_0 in transformer_photon * remove einops dependency and now inherits from AttentionMixin * unify the structure of the forward block * update doc * update doc * fix T5Gemma loading from hub * fix timestep shift * remove lora support from doc * Rename EmbedND to PhotonEmbedND * remove modulation dataclass * put _attn_forward and _ffn_forward logic in PhotonBlock's forward * rename LastLayer to FinalLayer * remove lora related code * rename vae_spatial_compression_ratio to vae_scale_factor * support prompt_embeds in call * move cross-attention conditioning computation out of the denoising loop * add negative prompts * Use _import_structure for lazy loading * make quality + style * add pipeline test + corresponding fixes * utility function that determines the default resolution given the VAE * Refactor PhotonAttention to match Flux pattern * built-in RMSNorm * Revert accidental .gitignore change * parameter names match the standard diffusers conventions * renaming and remove unnecessary attribute setting * Update docs/source/en/api/pipelines/photon.md Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * quantization example * added doc to toctree * Update docs/source/en/api/pipelines/photon.md Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/en/api/pipelines/photon.md Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/en/api/pipelines/photon.md Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * use dispatch_attention_fn for multiple attention backend support * naming changes * make fix copy * Update docs/source/en/api/pipelines/photon.md Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Add PhotonTransformer2DModel to TYPE_CHECKING imports * make fix-copies * Use Tuple instead of tuple Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * restrict the version of transformers Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update tests/pipelines/photon/test_pipeline_photon.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * Update tests/pipelines/photon/test_pipeline_photon.py Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> * change | for Optional * fix nits. * use typing Dict --------- Co-authored-by:
davidb <davidb@worker-10.soperator-worker-svc.soperator.svc.cluster.local> Co-authored-by:
David Briand <david@photoroom.com> Co-authored-by:
davidb <davidb@worker-8.soperator-worker-svc.soperator.svc.cluster.local> Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> Co-authored-by:
dg845 <58458699+dg845@users.noreply.github.com> Co-authored-by:
sayakpaul <spsayakpaul@gmail.com>
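A minimal sketch of the pipeline as introduced here (it is renamed to PRX in the 22 Oct 2025 entry above), assuming it is exported as `PhotonPipeline` with the standard text-to-image call and BF16 weights, as the PR's "BF16 example" bullet suggests; the checkpoint id is a placeholder.

```python
# Hedged sketch: PhotonPipeline (later renamed to PRX) with a placeholder checkpoint id.
import torch
from diffusers import PhotonPipeline

pipe = PhotonPipeline.from_pretrained(
    "Photoroom/photon",  # placeholder checkpoint id
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(prompt="a product shot of white sneakers on a marble table").images[0]
image.save("photon.png")
```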
-
- 20 Oct, 2025 1 commit
Dhruv Nair authored
update
-
- 18 Oct, 2025 1 commit
Lev Novitskiy authored
* add kandinsky5 transformer pipeline first version --------- Co-authored-by:
Álvaro Somoza <asomoza@users.noreply.github.com> Co-authored-by:
YiYi Xu <yixu310@gmail.com> Co-authored-by:
Charles <charles@huggingface.co>
-
- 17 Oct, 2025 1 commit
Ali Imran authored
* cleanup of runway model * quality fixes
-
- 10 Oct, 2025 1 commit
Sayak Paul authored
* up * get ready * fix import * up * up
-
- 06 Oct, 2025 1 commit
Charles authored
-
- 05 Oct, 2025 1 commit
Sayak Paul authored
* up * up * up * up * up * up * remove saves * move things around a bit. * get ready.
-
- 30 Sep, 2025 1 commit
Steven Liu authored
* change syntax * make style
-
- 25 Sep, 2025 1 commit
Lucain authored
* Support huggingface_hub 0.x and 1.x * httpx
-
- 24 Sep, 2025 1 commit
Aryan authored
* update * update * add coauthor Co-Authored-By:
Dhruv Nair <dhruv.nair@gmail.com> * improve test * handle ip adapter params correctly * fix chroma qkv fusion test * fix fastercache implementation * fix more tests * fight more tests * add back set_attention_backend * update * update * make style * make fix-copies * make ip adapter processor compatible with attention dispatcher * refactor chroma as well * remove rmsnorm assert * minify and deprecate npu/xla processors * update * refactor * refactor; support flash attention 2 with cp * fix * support sage attention with cp * make torch compile compatible * update * refactor * update * refactor * refactor * add ulysses backward * try to make dreambooth script work; accelerator backward not playing well * Revert "try to make dreambooth script work; accelerator backward not playing well" This reverts commit 768d0ea6fa6a305d12df1feda2afae3ec80aa449. * workaround compilation problems with triton when doing all-to-all * support wan * handle backward correctly * support qwen * support ltx * make fix-copies * Update src/diffusers/models/modeling_utils.py Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> * apply review suggestions * update docs * add explanation * make fix-copies * add docstrings * support passing parallel_config to from_pretrained * apply review suggestions * make style * update * Update docs/source/en/api/parallel.md Co-authored-by:
Aryan <aryan@huggingface.co> * up --------- Co-authored-by:
Dhruv Nair <dhruv.nair@gmail.com> Co-authored-by:
sayakpaul <spsayakpaul@gmail.com>
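A sketch of the workflow this PR enables ("support passing parallel_config to from_pretrained"), assuming a `ContextParallelConfig` object as documented in the new `docs/source/en/api/parallel.md` and a process group launched with `torchrun`; the checkpoint id and the config's constructor arguments are placeholders.

```python
# Hedged sketch: ContextParallelConfig and its ring_degree argument are assumed from
# the PR description (ring/ulysses attention); run under torchrun so that a
# distributed process group exists. Checkpoint id is a placeholder.
import torch
import torch.distributed as dist
from diffusers import AutoModel, ContextParallelConfig

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank())

transformer = AutoModel.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",  # placeholder checkpoint id
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    parallel_config=ContextParallelConfig(ring_degree=2),  # assumed constructor args
)
```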
-
- 23 Sep, 2025 1 commit
Dhruv Nair authored
* update * update
-
- 21 Sep, 2025 1 commit
naykun authored
* feat: add support for qwenimageeditplus * add copies statement * fix copies statement * remove vl_processor reference
-
- 16 Sep, 2025 1 commit
Sari Hleihil authored
* Added LucyEditPipeline * add import & style, missing copied from * Fix example doc string --------- Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
- 09 Sep, 2025 1 commit
Frank (Haofan) Wang authored
* add qwen-image-cn-inpaint --------- Co-authored-by:
github-actions[bot] <github-actions[bot]@users.noreply.github.com> Co-authored-by:
yiyixuxu <yixu310@gmail.com>
-
- 08 Sep, 2025 1 commit
YiYi Xu authored
* add qwen modular
-
- 03 Sep, 2025 2 commits
Ishan Modi authored
* initial commit * update * updates * update * update * update * update * update * update * addressed PR comments * update * addressed PR comments * update * update * update * update * update * update * updates * update * update * addressed PR comments * updates * code formatting * update * addressed PR comments * addressed PR comments * addressed PR comments * addressed PR comments * fix docs and dependencies * fixed dependency test --------- Co-authored-by:Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* feat: try loading fa3 using kernels when available. * up * change to Hub. * up * up * up * switch env var. * up * up * up * up * up * up
-
- 31 Aug, 2025 1 commit
Nguyễn Trọng Tuấn authored
* add qwenimage-edit inpaint feature * stay up to date with main branch * fix style * fix docs * copies * fix * again * copies --------- Co-authored-by:
“Trgtuan10” <“tuannguyentrong.402@gmail.com”> Co-authored-by:
TuanNT-ZenAI <tuannt.zenai@gmail.com> Co-authored-by:
yiyixuxu <yixu310@gmail.com>
-
- 28 Aug, 2025 1 commit
Dhruv Nair authored
* update * update * update * update
-