"docs/vscode:/vscode.git/clone" did not exist on "c82a7f9c49613117221fb844c4d04e1f628cbced"
- 21 Oct, 2025 1 commit
Fei Xie authored
Fix incorrect temporary variable used as the key when replacing the adapter name in the state dict within the load_lora_adapter function. Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
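A minimal sketch of the bug class this commit describes (illustrative only, not the diffusers source): when rebuilding a LoRA state dict under a new adapter name, the rebuilt dict must be keyed by the renamed key rather than the original loop variable.

```python
def rename_adapter_in_state_dict(state_dict: dict, old_name: str, new_name: str) -> dict:
    """Rename the adapter component of every LoRA key, e.g.
    'transformer.lora_A.old.weight' -> 'transformer.lora_A.new.weight'."""
    renamed = {}
    for key, value in state_dict.items():
        new_key = key.replace(f".{old_name}.", f".{new_name}.")
        renamed[new_key] = value  # use the renamed key, not the original loop variable
    return renamed
```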
-
- 20 Oct, 2025 2 commits
Dhruv Nair authored
update
-
dg845 authored
Refactor QwenEmbedRope to only use the LRU cache for RoPE caching
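An illustrative sketch of RoPE caching with functools.lru_cache, the kind of refactor this commit describes; the function name and signature are assumptions, not the actual QwenEmbedRope code.

```python
from functools import lru_cache

import torch


@lru_cache(maxsize=None)
def rope_frequencies(seq_len: int, dim: int, theta: float = 10000.0) -> torch.Tensor:
    # Frequencies depend only on (seq_len, dim, theta), so repeated calls with the
    # same lengths are served from the LRU cache instead of a hand-rolled dict.
    inv_freq = 1.0 / (theta ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    positions = torch.arange(seq_len, dtype=torch.float32)
    return torch.outer(positions, inv_freq)  # shape: (seq_len, dim // 2)
```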
-
- 18 Oct, 2025 1 commit
Lev Novitskiy authored
Add first version of the kandinsky5 transformer and pipeline. Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com> Co-authored-by: YiYi Xu <yixu310@gmail.com> Co-authored-by: Charles <charles@huggingface.co>
-
- 17 Oct, 2025 1 commit
Ali Imran authored
* cleanup of runway model * quality fixes
-
- 15 Oct, 2025 2 commits
YiYi Xu authored
update. Co-authored-by: Aryan <aryan@huggingface.co>
-
Sayak Paul authored
-
- 14 Oct, 2025 1 commit
Meatfucker authored
Fix missing load_video documentation and missing load_video import in the WanVideoToVideoPipeline example code (#12472): update utilities.md to add the missing load_video documentation; update pipeline_wan_video2video.py to fix the missing load_video import in the example code.
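For context, a minimal usage sketch of the import the commit adds; the video path is a placeholder.

```python
from diffusers.utils import load_video

# load_video decodes the file and returns the frames as a list of PIL images.
frames = load_video("input_video.mp4")
print(f"loaded {len(frames)} frames")
```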
-
- 11 Oct, 2025 1 commit
Steven Liu authored
* fix syntax * fix * style * fix
-
- 10 Oct, 2025 1 commit
Sayak Paul authored
* up * get ready * fix import * up * up
-
- 08 Oct, 2025 3 commits
Sayak Paul authored
* revisit the installations in CI. * up * up * up * empty * up * up * up
-
Sayak Paul authored
* up * unguard.
-
Sayak Paul authored
* start * fix * up
-
- 07 Oct, 2025 1 commit
Sayak Paul authored
-
- 06 Oct, 2025 2 commits
Charles authored
-
Sayak Paul authored
Make Flux ready for Mellon; apply suggestions from code review. Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
-
- 05 Oct, 2025 2 commits
Sayak Paul authored
* up * up * up * up * up * up * remove saves * move things around a bit. * get ready.
-
Vladimir Mandic authored
Wan: fix scale_shift_factor being on CPU; apply the same device cast to the LTX transformer; apply style fixes. Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
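A hedged sketch of the bug class this commit fixes (illustrative, not the actual transformer code): a tensor created on CPU must be moved to the device and dtype of the activations before it is combined with them.

```python
import torch


def apply_scale_shift(hidden_states: torch.Tensor, scale_shift_table: torch.Tensor) -> torch.Tensor:
    # Without this cast, adding a CPU tensor to CUDA hidden states raises a
    # device-mismatch error.
    scale_shift_table = scale_shift_table.to(device=hidden_states.device, dtype=hidden_states.dtype)
    shift, scale = scale_shift_table.chunk(2, dim=-1)
    return hidden_states * (1 + scale) + shift
```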
-
- 02 Oct, 2025 1 commit
Sayak Paul authored
Conditionally import the torch.distributed utilities.
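An illustrative sketch of the guarded-import pattern (assumed shape, not the exact diffusers code): only pull in distributed helpers when the build actually provides them.

```python
import torch

if torch.distributed.is_available():
    import torch.distributed as dist
else:
    dist = None


def get_world_size() -> int:
    # Fall back to single-process behaviour when distributed is unavailable
    # or not initialized.
    if dist is not None and dist.is_initialized():
        return dist.get_world_size()
    return 1
```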
-
- 30 Sep, 2025 1 commit
Steven Liu authored
* change syntax * make style
-
- 29 Sep, 2025 3 commits
YiYi Xu authored
Add the Mellon node registry and support custom Mellon nodes; update docstrings to include more info; replace HTTPError with HfHubHTTPError; update src/diffusers/modular_pipelines/qwenimage/node_utils.py; style fixes.
-
Sayak Paul authored
feat: support AOBaseConfig classes; document AOBaseConfig (#12302); use is_torchao_version for the version check. Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
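A hedged usage sketch of what this support presumably enables: passing a torchao AOBaseConfig instance (here Int8WeightOnlyConfig) to diffusers' TorchAoConfig in place of a string quant type. The model id and exact call pattern are illustrative assumptions, not a documented reference.

```python
import torch
from diffusers import FluxTransformer2DModel, TorchAoConfig
from torchao.quantization import Int8WeightOnlyConfig  # an AOBaseConfig subclass

# Assumption: TorchAoConfig now accepts an AOBaseConfig instance directly.
quant_config = TorchAoConfig(Int8WeightOnlyConfig())

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # illustrative checkpoint
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```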
-
Akshay Babbar authored
Fix: preserve boolean dtype for attention masks in ChromaPipeline. Convert attention masks to bool and prevent dtype corruption; fix both positive and negative mask handling in _get_t5_prompt_embeds; remove the float conversion in the _prepare_attention_mask method. Fixes #12116. Also adds ChromaPipeline attention mask dtype tests (the dedicated dtype tests were later removed per review feedback) and removes redundant type conversions and comments. Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
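A minimal sketch of the dtype issue described above (illustrative, not the ChromaPipeline source): the embeddings follow the pipeline dtype, but the attention mask should stay boolean instead of being cast to float alongside them.

```python
import torch


def prepare_embeds_and_mask(embeds: torch.Tensor, attention_mask: torch.Tensor, dtype: torch.dtype):
    # Embeddings are cast to the pipeline dtype (e.g. float16/bfloat16)...
    embeds = embeds.to(dtype=dtype)
    # ...but the mask must remain boolean so downstream masking logic is not
    # corrupted by padding positions turning into 0.0/1.0 floats.
    attention_mask = attention_mask.to(dtype=torch.bool)
    return embeds, attention_mask
```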
-
- 26 Sep, 2025 1 commit
Sayak Paul authored
Start unbloating docstrings: save_lora_weights(), load_lora_weights(), lora_state_dict(), fuse_lora(), unfuse_lora(), load_lora_into_transformer().
-
- 25 Sep, 2025 1 commit
Lucain authored
Support huggingface_hub 0.x and 1.x; handle the switch to httpx.
-
- 24 Sep, 2025 4 commits
Aryan authored
Attention dispatcher and context-parallel support:
* make the attention processors compatible with the attention dispatcher; add back set_attention_backend
* handle IP-adapter params correctly and make the IP-adapter processor compatible with the dispatcher
* refactor Chroma; fix the Chroma QKV fusion test; remove the RMSNorm assert
* fix the FasterCache implementation; improve and fix more tests
* minify and deprecate the NPU/XLA processors
* support context parallelism: Flash Attention 2 and Sage attention with CP, Ulysses backward, torch.compile compatibility
* work around Triton compilation problems when doing all-to-all
* support Wan, Qwen, and LTX; handle backward correctly
* support passing parallel_config to from_pretrained; update docs and docstrings; add an explanation
* reverted: attempt to make the DreamBooth training script work (accelerator backward not playing well); reverts commit 768d0ea6fa6a305d12df1feda2afae3ec80aa449
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> Co-authored-by: Aryan <aryan@huggingface.co> Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
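A hedged sketch of the user-facing piece mentioned above (set_attention_backend); the model class, checkpoint id, and backend name are illustrative assumptions rather than a documented reference.

```python
import torch
from diffusers import WanTransformer3DModel

transformer = WanTransformer3DModel.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",  # illustrative checkpoint
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)
# Route attention through a specific backend via the attention dispatcher.
transformer.set_attention_backend("flash")
```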
-
Alberto Chimenti authored
Fixed WanVACEPipeline to allow prompt to be None and skip encoding step
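A hedged sketch of the behavior described (not the actual pipeline code): only run text encoding when a prompt is supplied, otherwise reuse precomputed embeddings and skip the encoding step.

```python
def maybe_encode_prompt(pipe, prompt=None, prompt_embeds=None):
    # If the caller already provides embeddings, the prompt may be None and
    # the text-encoding step is skipped entirely.
    if prompt is None:
        if prompt_embeds is None:
            raise ValueError("Provide either `prompt` or `prompt_embeds`.")
        return prompt_embeds
    return pipe.encode_prompt(prompt)
```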
-
Yao Matrix authored
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
-
Dhruv Nair authored
* update * update * update
-
- 23 Sep, 2025 1 commit
Dhruv Nair authored
* update * update
-
- 22 Sep, 2025 5 commits
SahilCarterr authored
Fix Chroma docs; docs are now consistent.
-
Sayak Paul authored
Factor out the overlaps in save_lora_weights(); remove comments; fix-copies.
-
SahilCarterr authored
Fixes enable_xformers_memory_efficient_attention(); update attention.py.
-
Chen Mingyi authored
-
Sayak Paul authored
xfail some kandinsky tests.
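For readers unfamiliar with the term, a generic pytest sketch of what xfail-ing a test looks like; the test name and reason are hypothetical.

```python
import pytest


@pytest.mark.xfail(reason="known failure, tracked separately", strict=False)
def test_kandinsky_inference_batch_consistent():
    ...
```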
-
- 21 Sep, 2025 1 commit
naykun authored
feat: add support for qwenimageeditplus; add and fix copies statements; remove the vl_processor reference.
-
- 20 Sep, 2025 1 commit
Dhruv Nair authored
update
-
- 18 Sep, 2025 1 commit
Dave Lage authored
Convert alphas for embedders in the sd-scripts to AI Toolkit conversion; add a Kohya embedders conversion test; apply style fixes. Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
- 17 Sep, 2025 1 commit
DefTruth authored
Fix the HiDream and HunyuanVideo type hints and many other type hint errors; make style & make quality.
-
- 16 Sep, 2025 1 commit
Zijian Zhou authored
Update autoencoder_kl_wan.py: when using the Wan2.2 VAE, the spatial compression ratio calculated here is incorrect (it should be 16 instead of 8). Pass it in directly via the config to ensure it is correct.
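A hedged sketch of the idea described above (attribute names are assumptions, not the actual AutoencoderKLWan fields): prefer an explicit config value over deriving the spatial compression ratio from the number of downsample stages, which under-reports it for the Wan2.2 VAE (16x, not 8x).

```python
class WanVAEConfigSketch:
    def __init__(self, block_out_channels, spatial_compression_ratio=None):
        self.block_out_channels = block_out_channels
        # The derived value assumes one 2x downsample per block transition; the
        # Wan2.2 VAE downsamples further, so allow the config to override it.
        derived = 2 ** (len(block_out_channels) - 1)
        self.spatial_compression_ratio = (
            spatial_compression_ratio if spatial_compression_ratio is not None else derived
        )


print(WanVAEConfigSketch([96, 192, 384, 384]).spatial_compression_ratio)  # 8 (derived)
print(WanVAEConfigSketch([96, 192, 384, 384], spatial_compression_ratio=16).spatial_compression_ratio)  # 16 (from config)
```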
-