- 09 Jan, 2026 1 commit

liumg authored
- 08 Dec, 2025 1 commit

YiYi Xu authored
* support step-distilled
* style
- 06 Dec, 2025 1 commit

Tran Thanh Luan authored
* init taylor_seer cache
* make compatible with any tuple size returned
* use logger for printing, add warmup feature
* still update in warmup steps
* refactor, add docs
* add configurable cache, skip compute module
* allow special cache ids only
* add stop_predicts (cooldown)
* update docs
* apply ruff
* update to handle multiple calls per timestep
* refactor to use state manager
* fix format & doc
* chores: naming, remove redundancy
* add docs
* quality & style
* fix taylor precision
* Apply style fixes
* add tests
* Apply style fixes
* Remove TaylorSeerCacheTesterMixin from flux2 tests
* rename identifiers, use more expressive taylor predict loop
* torch compile compatible
* Apply style fixes
* Update src/diffusers/hooks/taylorseer_cache.py
* update docs
* make fix-copies
* fix example usage
* remove tests on flux kontext
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: toilaluan <toilaluan@github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
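The TaylorSeer cache above avoids recomputing a module at every step by extrapolating its cached outputs forward in time. A minimal sketch of the underlying idea, using scalars instead of tensors; this is illustrative only, not the diffusers hook API:

```python
# Illustrative sketch of the TaylorSeer idea (not the diffusers hook API):
# cache a module's recent outputs at equally spaced steps and predict the
# next output with a backward-difference Taylor expansion instead of
# recomputing the module.

def taylor_predict(history):
    """Extrapolate one step ahead from up to three cached outputs.

    history: outputs at equally spaced steps, most recent last.
    Uses first- and second-order backward differences when available.
    """
    y = history
    pred = y[-1]
    if len(y) >= 2:
        pred += y[-1] - y[-2]              # first-order term
    if len(y) >= 3:
        pred += y[-1] - 2 * y[-2] + y[-3]  # second-order term (one step ahead)
    return pred
```

In the actual hook the cached values are per-module tensors, and predicted steps alternate with periodic full computes, which is what the warmup and cooldown (`stop_predicts`) bullets above refer to.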
- 05 Dec, 2025 1 commit

swappy authored
* fix: group offloading to support standalone computational layers in block-level offloading
* test: for models with standalone and deeply nested layers in block-level offloading
* feat: support for block-level offloading in group offloading config
* fix: group offload block modules to AutoencoderKL and AutoencoderKLWan
* fix: update group offloading tests to use AutoencoderKL and adjust input dimensions
* refactor: streamline block offloading logic
* Apply style fixes
* update tests
* update
* fix for failing tests
* clean up
* revert to use skip_keys
* clean up
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
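The core of the fix above is that block-level offloading must not skip layers that live outside the model's block lists. A hypothetical sketch of the grouping step, with illustrative names rather than the diffusers implementation:

```python
# Hypothetical sketch of block-level grouping (names illustrative, not the
# diffusers code): each block list becomes its own offload group, while
# standalone computational layers (conv_in, norm_out, ...) are gathered into
# a shared group so they are offloaded too instead of being silently skipped.

def build_offload_groups(children):
    """children: list of (name, kind) pairs, kind is "blocks" or "leaf"."""
    groups, standalone = [], []
    for name, kind in children:
        if kind == "blocks":
            groups.append([name])    # one group per block list
        else:
            standalone.append(name)  # standalone layer: don't leave it behind
    if standalone:
        groups.append(standalone)
    return groups
```

Collecting the standalone layers into their own group is what lets models such as AutoencoderKL, whose `conv_in`/`conv_out` sit next to deeply nested blocks, participate fully in block-level offloading.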
- 04 Dec, 2025 3 commits

David Bertoin authored
fix timestep embeddings downscale_freq_shift to be consistent with Photoroom's original code
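`downscale_freq_shift` enters the sinusoidal timestep embedding through the frequency exponent's denominator. A sketch of the frequency schedule, mirroring the formula in diffusers' `get_timestep_embedding`; treat the exact defaults here as assumptions:

```python
import math

def timestep_freqs(embedding_dim, downscale_freq_shift=1.0, max_period=10000):
    # Frequencies for the sin/cos pairs: the exponent runs from 0 down to
    # -log(max_period) * (half_dim - 1) / (half_dim - downscale_freq_shift),
    # so the shift controls where the lowest frequency lands.
    half_dim = embedding_dim // 2
    return [
        math.exp(-math.log(max_period) * i / (half_dim - downscale_freq_shift))
        for i in range(half_dim)
    ]
```

With a shift of 1 the lowest frequency is exactly `1 / max_period`; with a shift of 0 it lands slightly higher. A model trained against one convention and sampled with the other drifts subtly, which is the kind of inconsistency a fix like the one above targets.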
Jiang authored
fix spatial compression ratio compute error for AutoencoderKLWan
Co-authored-by: lirui.926 <lirui.926@bytedance.com>
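A VAE's spatial compression ratio is 2 raised to the number of spatial downsampling stages; deriving it from the wrong config field (or miscounting a stage) shifts latent shapes everywhere downstream. A generic sketch, illustrative rather than the AutoencoderKLWan code:

```python
def spatial_compression_ratio(downsample_flags):
    # Each enabled stage halves height and width once, so the overall
    # ratio is 2 ** (number of enabled stages).
    ratio = 1
    for enabled in downsample_flags:
        if enabled:
            ratio *= 2
    return ratio
```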
hlky authored
* Z-Image-Turbo `from_single_file`
* compute_dtype
* -device cast
- 03 Dec, 2025 3 commits

Sayak Paul authored
* start zimage model tests
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* up
* Revert "up" (reverts commit bca3e27c96b942db49ccab8ddf824e7a54d43ed1)
* expand upon compilation failure reason
* Update tests/models/transformers/test_models_transformer_z_image.py
* reinitialize the padding tokens to ones to prevent NaN problems
* updates
* up
* skipping ZImage DiT tests
* up
* up
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Sayak Paul authored
* remove attn_processors property
* more
* up
* up more
* up
* add AttentionMixin to AuraFlow
* up
* up
* up
* up
Sayak Paul authored
* start varlen variants for attn backend kernels
* maybe unflatten heads
* updates
* remove unused function
* doc
* up
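Varlen attention kernels take the batch as one packed token sequence plus cumulative-length offsets (the `cu_seqlens` convention used by flash-attention-style APIs), instead of a padded batch. A minimal sketch of building those offsets; the function name is illustrative:

```python
def cu_seqlens(lengths):
    # Prefix sums over per-sequence token counts: cu[i] is where sequence i
    # starts in the packed tensor, and cu[-1] is the total token count.
    out = [0]
    for n in lengths:
        out.append(out[-1] + n)
    return out
```

Passing these offsets lets the kernel attend within each sequence's slice of the packed tensor without wasting compute on padding tokens.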
- 02 Dec, 2025 2 commits

Jerry Wu authored
* Refactor image padding logic to prevent zero tensor in transformer_z_image.py
* Apply style fixes
* Add more support to fix repeat bug on tpu devices
* Fix for dynamo compile error for multi if-branches
Co-authored-by: Mingjia Li <mingjiali@tju.edu.cn>
Co-authored-by: Mingjia Li <mail@mingjia.li>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Guo-Hua Wang authored
* add ovis_image
* fix code quality
* optimize pipeline_ovis_image.py according to the feedback
* optimize imports
* add docs
* make style
* make style
* add ovis to toctree
* oops
Co-authored-by: YiYi Xu <yixu310@gmail.com>
- 01 Dec, 2025 2 commits

DefTruth authored

YiYi Xu authored
* add
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-161-123.ec2.internal>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 27 Nov, 2025 1 commit

Sayak Paul authored
up
- 26 Nov, 2025 1 commit

Jerry Wu authored
* Add Support for Z-Image
* Reformatting with make style, black & isort
* Remove init, Modify import utils, Merge forward in transformers block, Remove once func in pipeline
* modified main model forward, freqs_cis left
* refactored to add B dim
* fixed stack issue
* fixed modulation bug
* fixed modulation bug
* fix bug
* remove value_from_time_aware_config
* styling
* Fix neg embed and divide / bug; Reuse pad zero tensor; Turn cat -> repeat; Add hint for attn processor
* Replace padding with pad_sequence; Add gradient checkpointing
* Fix flash_attn3 in dispatch attn backend by _flash_attn_forward, replacing its original implementation; Add DocString in pipeline for that
* Fix Docstring and Make Style
* Revert "Fix flash_attn3 in dispatch attn backend ..." (reverts commit fbf26b7ed11d55146103c97740bad4a5f91744e0)
* update z-image docstring
* Revert attention dispatcher
* update z-image docstring
* styling
* Recover attention_dispatch.py with its original impl; a later commit will address fa3 compatibility
* Fix prev bug, and support passing prompt_embeds as a list of torch Tensors after prompt pre-encode
* Remove einops dependency
* remove redundant imports & make fix-copies
* fix import
* Support num_images_per_prompt > 1; remove redundant unused variables
* Fix bugs for num_images_per_prompt with actual batch
* Add unit tests for Z-Image
* Refine unit tests and skip cases that need a separate test env; fix compatibility with unit tests in model, mostly precision formatting
* Add clean env for test_save_load_float16 separate test; Add Note; Styling
* Update dtype mentioned by yiyi
Co-authored-by: liudongyang <liudongyang0114@gmail.com>
- 25 Nov, 2025 2 commits

Sayak Paul authored
* add vae
* Initial commit for Flux 2 Transformer implementation
* add pipeline part
* small edits to the pipeline and conversion
* update conversion script
* fix
* up up
* finish pipeline
* Remove Flux IP Adapter logic for now
* Remove deprecated 3D id logic
* Remove ControlNet logic for now
* Add link to ViT-22B paper as reference for parallel transformer blocks such as the Flux 2 single stream block
* update pipeline
* Don't use biases for input projs and output AdaNorm
* up
* Remove bias for double stream block text QKV projections
* Add script to convert Flux 2 transformer to diffusers
* make style and make quality
* fix a few things
* allow sft files to go
* fix image processor
* fix batch
* style a bit
* Fix some bugs in Flux 2 transformer implementation
* Fix dummy input preparation and fix some test bugs
* fix dtype casting in timestep guidance module
* resolve conflicts
* remove ip adapter stuff
* Fix Flux 2 transformer consistency test
* Fix bug in Flux2TransformerBlock (double stream block)
* Get remaining Flux 2 transformer tests passing
* make style; make quality; make fix-copies
* remove stuff
* fix type annotation
* remove unneeded stuff from tests
* tests
* up
* up
* add sf support
* Remove unused IP Adapter and ControlNet logic from transformer (#9)
* copied from
* Apply suggestions from code review
* up
* up
* up
* up
* up
* Refactor Flux2Attention into separate classes for double stream and single stream attention
* Add _supports_qkv_fusion to AttentionModuleMixin to allow subclasses to disable QKV fusion
* Have Flux2ParallelSelfAttention inherit from AttentionModuleMixin with _supports_qkv_fusion=False
* Log debug message when calling fuse_projections on an AttentionModuleMixin subclass that does not support QKV fusion
* Address review comments
* Update src/diffusers/pipelines/flux2/pipeline_flux2.py
* up
* Remove maybe_allow_in_graph decorators for Flux 2 transformer blocks (#12)
* up
* support ostris loras (#13)
* up
* update schedule
* up
* up (#17)
* add training scripts (#16)
* model cpu offload in validation
* add flux.2 readme
* add img2img and tests
* cpu offload in log validation
* Apply suggestions from code review
* fix
* up
* fixes
* remove i2i training tests for now
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Daniel Gu <dgu8957@gmail.com>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-10-53-87-203.ec2.internal>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
Co-authored-by: linoytsaban <linoy@huggingface.co>
Jerry Wu authored
* Add Support for Z-Image
* Reformatting with make style, black & isort
* Remove init, Modify import utils, Merge forward in transformers block, Remove once func in pipeline
* modified main model forward, freqs_cis left
* refactored to add B dim
* fixed stack issue
* fixed modulation bug
* fixed modulation bug
* fix bug
* remove value_from_time_aware_config
* styling
* Fix neg embed and divide / bug; Reuse pad zero tensor; Turn cat -> repeat; Add hint for attn processor
* Replace padding with pad_sequence; Add gradient checkpointing
* Fix flash_attn3 in dispatch attn backend by _flash_attn_forward, replacing its original implementation; Add DocString in pipeline for that
* Fix Docstring and Make Style
* Revert "Fix flash_attn3 in dispatch attn backend ..." (reverts commit fbf26b7ed11d55146103c97740bad4a5f91744e0)
* update z-image docstring
* Revert attention dispatcher
* update z-image docstring
* styling
* Recover attention_dispatch.py with its original impl; a later commit will address fa3 compatibility
* Fix prev bug, and support passing prompt_embeds as a list of torch Tensors after prompt pre-encode
* Remove einops dependency
* remove redundant imports & make fix-copies
* fix import
Co-authored-by: liudongyang <liudongyang0114@gmail.com>
- 24 Nov, 2025 2 commits

Sayak Paul authored
* up
* support automatic dispatch
* disable compile support for now
* up
* flash too
* document
* up
* up
* up
* up

DefTruth authored
* bugfix: fix chrono-edit context parallel
* bugfix: fix chrono-edit context parallel
* Update src/diffusers/models/transformers/transformer_chronoedit.py
* Update src/diffusers/models/transformers/transformer_chronoedit.py
* Clean up comments in transformer_chronoedit.py (removed unnecessary comments regarding parallelization in cross-attention)
* fix style
* fix qc
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
- 19 Nov, 2025 1 commit

Sayak Paul authored
* refactor how attention kernels from hub are used
* up
* refactor according to Dhruv's ideas
* empty
* empty
* empty
* up
Co-authored-by: Dhruv Nair <dhruv@huggingface.co>
Co-authored-by: dn6 <dhruv@huggingface.co>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
- 17 Nov, 2025 2 commits

dg845 authored
Revert dim_mult back to list and fix type annotation
Junsong Chen authored
* move sana-video to a new dir and add `SanaImageToVideoPipeline` with no modification
* fix bug and run text/image-to-video successfully
* make style; quality; fix-copies
* add sana image-to-video pipeline in markdown
* add test case for sana image-to-video
* make style
* add an init file in sana-video test dir
* Update src/diffusers/pipelines/sana_video/pipeline_sana_video_i2v.py
* Update tests/pipelines/sana_video/test_sana_video_i2v.py
* minor update
* fix bug and skip fp16 save test
* add copied from for `encode_prompt`
* Apply style fixes
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: Yuyang Zhao <43061147+HeliosZhao@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 15 Nov, 2025 1 commit

David Bertoin authored
rope in float32
- 13 Nov, 2025 1 commit

dg845 authored
Co-authored-by: Tolga Cangöz <mtcangoz@gmail.com>
Co-authored-by: Tolga Cangöz <46008593+tolgacangoz@users.noreply.github.com>
- 12 Nov, 2025 3 commits

Quentin Gallouédec authored
* Update pipeline_skyreels_v2_i2v.py
* Update README.md
* Update torch_utils.py
* Update torch_utils.py
* Update guider_utils.py
* Update pipeline_ltx.py
* Update pipeline_bria.py
* Apply suggestion from @qgallouedec
* Update autoencoder_kl_qwenimage.py
* Update pipeline_prx.py
* Update pipeline_wan_vace.py
* Update pipeline_skyreels_v2.py
* Update pipeline_skyreels_v2_diffusion_forcing.py
* Update pipeline_bria_fibo.py
* Update pipeline_skyreels_v2_diffusion_forcing_i2v.py
* Update pipeline_ltx_condition.py
* Update pipeline_ltx_image2video.py
* Update regional_prompting_stable_diffusion.py
* make style
* style
* style

YiYi Xu authored
* fix
* fix

YiYi Xu authored
* fix
* remove copies instead
- 11 Nov, 2025 1 commit

Charchit Sharma authored
* Fix rotary positional embedding dimension mismatch in Wan and SkyReels V2 transformers
  - Store t_dim, h_dim, w_dim as instance variables in WanRotaryPosEmbed and SkyReelsV2RotaryPosEmbed __init__
  - Use stored dimensions in forward() instead of recalculating with a different formula
  - Fixes inconsistency between init (using // 6) and forward (using // 3)
  - Ensures split_sizes matches the dimensions used to create the rotary embeddings
* quality fix
Co-authored-by: Charchit Sharma <charchitsharma@A-267.local>
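The mismatch above comes from deriving the per-axis rotary dims with two different formulas (`// 6` at init time, `// 3` at forward time), which disagree whenever the head dim is not a multiple of 6. Computing the sizes once and reusing them keeps `split_sizes` consistent; a sketch with illustrative names:

```python
def rope_split_sizes(attention_head_dim):
    # Height and width each get 2 * (dim // 6) frequencies; the temporal
    # axis takes whatever remains, so the three parts always sum to dim.
    h_dim = w_dim = 2 * (attention_head_dim // 6)
    t_dim = attention_head_dim - h_dim - w_dim
    return t_dim, h_dim, w_dim
```

For a head dim of 64, `2 * (64 // 6)` gives 20 while `64 // 3` gives 21, so recomputing with the second formula in `forward()` produced split sizes that no longer matched the embeddings built at init.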
- 10 Nov, 2025 3 commits

Cesaryuan authored
Fix: update type hints for Tuple parameters across multiple files to support variable-length tuples (#12544)
* Fix: update type hints for Tuple parameters across multiple files to support variable-length tuples
* Apply style fixes
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
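The distinction this fix relies on: `Tuple[int]` types a tuple of exactly one int, while `Tuple[int, ...]` accepts any length. A quick illustration (the parameter name is made up for the example):

```python
from typing import Tuple, get_args

# Tuple[int] means a 1-tuple; parameters that take arbitrarily many entries
# (e.g. a per-block channel list) need the variable-length form.
ExactlyOne = Tuple[int]
VariableLength = Tuple[int, ...]

def total_channels(block_out_channels: VariableLength) -> int:
    return sum(block_out_channels)
```

Type checkers flag `(128, 256, 512)` against `Tuple[int]` as a length mismatch, which is why hints for such parameters need the `...` form.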
Dhruv Nair authored
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Jay Wu authored
* add ChronoEdit
* add ref to original function & remove wan2.2 logic
* Update src/diffusers/pipelines/chronoedit/pipeline_chronoedit.py
* add ChronoEdit test
* add docs
* add docs
* make fix-copies
* fix chronoedit test
Co-authored-by: wjay <wjay@nvidia.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 07 Nov, 2025 2 commits

Wang, Yi authored
* fix the crash in Wan-AI/Wan2.2-TI2V-5B-Diffusers if CP is enabled
* address review comment
* refine
Signed-off-by: Wang, Yi <yi.a.wang@intel.com>
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

DefTruth authored
* feat: enable attention dispatch for hunyuan video
* feat: enable attention dispatch for hunyuan video
- 06 Nov, 2025 1 commit

Junsong Chen authored
* 1. add `SanaVideoTransformer3DModel` in transformer_sana_video.py; 2. add `SanaVideoPipeline` in pipeline_sana_video.py; 3. add all code needed to import `SanaVideoPipeline`
* add a sample showing how to use sana-video
* code update
* update hf model path
* update code
* sana-video can run now
* 1. add aspect ratio in sana-video-pipeline; 2. add reshape function in sana-video-processor; 3. fix pth-to-safetensors conversion bugs
* default to use `use_resolution_binning`
* make style
* remove unused code
* Update src/diffusers/models/transformers/transformer_sana_video.py
* Update src/diffusers/pipelines/sana/pipeline_sana_video.py
* support `dispatch_attention_fn`
* 1. add sana-video markdown; 2. fix typos
* add two test cases for sana-video (need check)
* fix text-encoder in test-sana-video
* Update tests/pipelines/sana/test_sana_video.py
* Update src/diffusers/video_processor.py
* make style; make quality; make fix-copies
* toctree yaml update
* add sana-video-transformer3d markdown
* Apply style fixes
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 03 Nov, 2025 1 commit

Wang, Yi authored
* ulysses enabling in native attention path
* address review comment
* add supports_context_parallel for native attention
* update templated attention
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 02 Nov, 2025 1 commit

Dhruv Nair authored
update
- 30 Oct, 2025 1 commit

Pavle Padjin authored
* Changing the way we infer dtype to avoid forced evaluation of lazy tensors
* changing the way we infer dtype to ensure type consistency
* more robust inferring of dtype
* removing the upscale dtype entirely
- 28 Oct, 2025 2 commits

galbria authored
* Bria FIBO pipeline
* style fixes
* fix CR
* Refactor BriaFibo classes and update pipeline parameters
  - Updated BriaFiboAttnProcessor and BriaFiboAttention classes to reflect changes from Flux equivalents
  - Modified the _unpack_latents method in BriaFiboPipeline to improve clarity
  - Increased the default max_sequence_length to 3000 and added a new optional parameter do_patching
  - Cleaned up test_pipeline_bria_fibo.py by removing unused imports and skipping unsupported tests
* edit the docs of FIBO
* Remove unused BriaFibo imports and update CPU offload method in BriaFiboPipeline
* Refactor FIBO classes to BriaFibo naming convention
  - Updated class names from FIBO to BriaFibo for consistency across the module
  - Modified instances of FIBOEmbedND, FIBOTimesteps, TextProjection, and TimestepProjEmbeddings to reflect the new naming
  - Ensured all references in the BriaFiboTransformer2DModel are updated accordingly
* Add BriaFiboTransformer2DModel import to transformers module
* Remove unused BriaFibo imports from modular pipelines and add BriaFiboTransformer2DModel and BriaFiboPipeline classes to dummy objects for compatibility with torch and transformers
* Update BriaFibo classes with copied documentation and fix import typo in pipeline module
  - Added documentation comments indicating the source of copied code in BriaFiboTransformerBlock and _pack_latents methods
  - Corrected the import statement for BriaFiboPipeline in the pipelines module
* Remove unused BriaFibo imports from __init__.py to streamline modular pipelines
* Refactor documentation comments in BriaFibo classes to indicate inspiration from existing implementations
  - Updated comments in BriaFiboAttnProcessor, BriaFiboAttention, and BriaFiboPipeline to reflect that the code is inspired by other modules rather than copied
  - Enhanced clarity on the origins of the methods to maintain proper attribution
* change Inspired by to Based on
* add reference link and fix trailing whitespace
* Add BriaFiboTransformer2DModel documentation and update comments in BriaFibo classes
  - Introduced a new documentation file for BriaFiboTransformer2DModel
  - Updated comments in BriaFiboAttnProcessor, BriaFiboAttention, and BriaFiboPipeline to clarify the origins of the code, indicating copied sources for better attribution
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
Wang, Yi authored
* fix crash when tiling mode is enabled
* fmt
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>