"vscode:/vscode.git/clone" did not exist on "2d3d5a7bb7340963383afd5b4e9a0b53e1238c35"
  1. 08 Dec, 2025 3 commits
  2. 06 Dec, 2025 1 commit
    • [Feat] TaylorSeer Cache (#12648) · 6290fdfd
      Tran Thanh Luan authored
      
      
      * init taylor_seer cache
      
      * make compatible with any tuple size returned
      
      * use logger for printing, add warmup feature
      
      * still update in warmup steps
      
      * refactor, add docs
      
      * add configurable cache, skip compute module
      
      * allow special cache ids only
      
      * add stop_predicts (cooldown)
      
      * update docs
      
      * apply ruff
      
      * update to handle multiple calls per timestep
      
      * refactor to use state manager
      
      * fix format & doc
      
      * chores: naming, remove redundancy
      
      * add docs
      
      * quality & style
      
      * fix taylor precision
      
      * Apply style fixes
      
      * add tests
      
      * Apply style fixes
      
      * Remove TaylorSeerCacheTesterMixin from flux2 tests
      
      * rename identifiers, use more expressive taylor predict loop
      
      * torch compile compatible
      
      * Apply style fixes
      
      * Update src/diffusers/hooks/taylorseer_cache.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * update docs
      
      * make fix-copies
      
      * fix example usage.
      
      * remove tests on flux kontext
      
      ---------
      Co-authored-by: toilaluan <toilaluan@github.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
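
The TaylorSeer cache added by this entry skips recomputing a module on some denoising steps and instead extrapolates its cached output with a Taylor expansion. Below is a minimal, self-contained sketch of that idea; the class and method names are illustrative only and are not the API introduced by #12648.

```python
# Illustrative sketch of the TaylorSeer idea: cache a module's output on
# full-compute steps, estimate its rate of change with a finite difference,
# and predict the output on skipped steps with a first-order Taylor expansion.
# Names are hypothetical; see the PR docs for the actual diffusers API.
import torch


class TaylorSeerSketch:
    def __init__(self):
        self.prev_output = None  # output at the last full-compute step
        self.prev_step = None    # step index of that output
        self.derivative = None   # finite-difference estimate of d(output)/d(step)

    def refresh(self, output: torch.Tensor, step: int) -> None:
        """Record a freshly computed output (warmup / periodic full steps)."""
        if self.prev_output is not None and step != self.prev_step:
            self.derivative = (output - self.prev_output) / (step - self.prev_step)
        self.prev_output, self.prev_step = output, step

    def predict(self, step: int) -> torch.Tensor:
        """Approximate the output on a skipped step from the cache."""
        if self.derivative is None:
            return self.prev_output  # warmup: reuse the cached value as-is
        return self.prev_output + self.derivative * (step - self.prev_step)
```

Per the bullet list above, the actual feature also exposes warmup steps, a cooldown (stop_predicts), and a restriction to specific cache ids as configuration options.
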
  3. 05 Dec, 2025 5 commits
  4. 04 Dec, 2025 6 commits
  5. 03 Dec, 2025 8 commits
  6. 02 Dec, 2025 3 commits
    • Fix TPU (torch_xla) compatibility error with the tensor repeat func on an empty dim (#12770) · 9379b239
      Jerry Wu authored
      
      
      * Refactor image padding logic to prevent zero tensors in transformer_z_image.py
      
      * Apply style fixes
      
      * Add more support to fix the repeat bug on TPU devices.
      
      * Fix dynamo compile error with multiple if-branches.
      
      ---------
      Co-authored-by: Mingjia Li <mingjiali@tju.edu.cn>
      Co-authored-by: Mingjia Li <mail@mingjia.li>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
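
The fix above avoids feeding zero-sized padding tensors into repeat-style ops, which the commit reports as failing under torch_xla. A hedged sketch of that guard pattern follows; it is illustrative and not the exact patch from #12770.

```python
# Illustrative guard against zero-sized padding tensors: per the commit above,
# repeating a (0, dim) tensor errors on TPU (torch_xla), so only build and
# concatenate the padding when it is actually non-empty.
import torch


def pad_to_length(x: torch.Tensor, target_len: int) -> torch.Tensor:
    """Right-pad a (seq_len, dim) tensor with zeros along dim 0 up to target_len."""
    pad_len = target_len - x.shape[0]
    if pad_len <= 0:
        # Skip the zero-tensor path entirely instead of repeating an empty tensor.
        return x
    pad = x.new_zeros(pad_len, x.shape[1])
    return torch.cat([x, pad], dim=0)
```
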
    • Add support for Ovis-Image (#12740) · 4f136f84
      Guo-Hua Wang authored
      
      
      * add ovis_image
      
      * fix code quality
      
      * optimize pipeline_ovis_image.py according to the feedback
      
      * optimize imports
      
      * add docs
      
      * make style
      
      * make style
      
      * add ovis to toctree
      
      * oops
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
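
A hedged usage sketch for the new pipeline: loading goes through the standard DiffusionPipeline entry point, which resolves the pipeline class from the checkpoint. The repo id below is a placeholder, not a real checkpoint; consult the docs added by #12740 for the actual identifiers.

```python
# Hedged usage sketch; the repo id is a placeholder and the prompt is arbitrary.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "your-org/ovis-image-checkpoint",  # placeholder repo id, see the PR docs
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(prompt="a watercolor fox in a snowy forest").images[0]
image.save("ovis_image.png")
```
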
    • Add ZImage LoRA support and integrate into ZImagePipeline (#12750) · edf36f51
      CalamitousFelicitousness authored
      
      
      * Add ZImage LoRA support and integrate into ZImagePipeline
      
      * Add LoRA test for Z-Image
      
      * Move the LoRA test
      
      * Fix ZImage LoRA scale support and test configuration
      
      * Add ZImage LoRA test overrides for architecture differences
      
      - Override test_lora_fuse_nan to use ZImage's 'layers' attribute
        instead of 'transformer_blocks'
      - Skip block-level LoRA scaling test (not supported in ZImage)
      - Add required imports: numpy, torch_device, check_if_lora_correctly_set
      
      * Add ZImageLoraLoaderMixin to LoRA documentation
      
      * Use conditional import for peft.LoraConfig in ZImage tests
      
      * Override test_correct_lora_configs_with_different_ranks for ZImage
      
      ZImage uses 'attention.to_k' naming convention instead of 'attn.to_k',
      so the base test's module name search loop never finds a match. This
      override uses the correct naming pattern for ZImage architecture.
      
      * Add is_flaky decorator to ZImage LoRA tests; initialise padding tokens
      
      * Skip ZImage LoRA test class entirely
      
      Skip the entire ZImageLoRATests class due to non-deterministic behavior
      from complex64 RoPE operations and torch.empty padding tokens.
      LoRA functionality works correctly with real models.
      
      Cleanup removed:
      - Individual @unittest.skip decorators
      - @is_flaky decorator overrides for inherited methods
      - Custom test method overrides
      - Global torch deterministic settings
      - Unused imports (numpy, is_flaky, check_if_lora_correctly_set)
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
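
The mixin wires ZImagePipeline into the standard diffusers LoRA workflow. A minimal sketch follows, assuming ZImagePipeline is exported from the diffusers top level and using placeholder repo ids for both checkpoints.

```python
# Minimal sketch of the standard LoRA workflow that ZImageLoraLoaderMixin enables;
# both repo ids are placeholders and the top-level ZImagePipeline export is assumed.
import torch
from diffusers import ZImagePipeline

pipe = ZImagePipeline.from_pretrained(
    "placeholder/z-image-base",  # placeholder base checkpoint
    torch_dtype=torch.bfloat16,
).to("cuda")

pipe.load_lora_weights("placeholder/z-image-lora", adapter_name="style")  # placeholder LoRA
pipe.set_adapters(["style"], adapter_weights=[0.8])  # scale the LoRA contribution

image = pipe(prompt="a poster of a mountain village at dusk").images[0]

# Optionally fuse the LoRA into the base weights for faster inference.
pipe.fuse_lora(lora_scale=0.8)
```
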
  7. 01 Dec, 2025 8 commits
  8. 29 Nov, 2025 1 commit
  9. 28 Nov, 2025 2 commits
  10. 27 Nov, 2025 2 commits
  11. 26 Nov, 2025 1 commit
    • Support unittest for Z-Image (#12715) · e6d46123
      Jerry Wu authored
      
      
      * Add Support for Z-Image.
      
      * Reformatting with make style, black & isort.
      
      * Remove init, modify import utils, merge forward in transformer block, remove once func in pipeline.
      
      * modified main model forward, freqs_cis left
      
      * refactored to add B dim
      
      * fixed stack issue
      
      * fixed modulation bug
      
      * fixed modulation bug
      
      * fix bug
      
      * remove value_from_time_aware_config
      
      * styling
      
      * Fix neg embed and divide (/) bug; reuse pad zero tensor; turn cat -> repeat; add hint for attn processor.
      
      * Replace padding with pad_sequence; Add gradient checkpointing.
      
      * Fix flash_attn3 in dispatch attn backend by _flash_attn_forward, replace its origin implement; Add DocString in pipeline for that.
      
      * Fix Docstring and Make Style.
      
      * Revert "Fix flash_attn3 in dispatch attn backend by _flash_attn_forward, replace its origin implement; Add DocString in pipeline for that."
      
      This reverts commit fbf26b7ed11d55146103c97740bad4a5f91744e0.
      
      * update z-image docstring
      
      * Revert attention dispatcher
      
      * update z-image docstring
      
      * styling
      
      * Restore attention_dispatch.py to its original implementation; a later commit will address fa3 compatibility.
      
      * Fix previous bug; support passing prompt_embeds in args as a list of torch Tensors after prompt pre-encoding.
      
      * Remove einop dependency.
      
      * remove redundant imports & make fix-copies
      
      * fix import
      
      * Support num_images_per_prompt > 1; remove redundant unused variables.
      
      * Fix bugs for num_images_per_prompt with actual batch.
      
      * Add unit tests for Z-Image.
      
      * Refine unit tests and skip cases that need a separate test env; fix compatibility with unit tests in the model, mostly precision formatting.
      
      * Add clean env for a separate test_save_load_float16 test; add note; styling.
      
      * Update dtype mentioned by yiyi.
      
      ---------
      Co-authored-by: liudongyang <liudongyang0114@gmail.com>
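
One of the changes above replaces hand-rolled padding with torch's pad_sequence when batching variable-length prompt embeddings. A minimal sketch of that pattern, with illustrative shapes:

```python
# Minimal sketch of the pad_sequence pattern referenced above ("Replace padding
# with pad_sequence"): batch variable-length prompt embeddings by right-padding
# each one with zeros to the longest sequence. Shapes are illustrative.
import torch
from torch.nn.utils.rnn import pad_sequence

# Per-prompt embeddings of different lengths: (seq_len_i, hidden_dim)
prompt_embeds = [torch.randn(7, 64), torch.randn(12, 64), torch.randn(3, 64)]

batched = pad_sequence(prompt_embeds, batch_first=True, padding_value=0.0)
print(batched.shape)  # torch.Size([3, 12, 64])
```
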