"vscode:/vscode.git/clone" did not exist on "3ad49eeeddc5b3a82540bd37ac133650d02ad93d"
- 12 Nov, 2025 1 commit
YiYi Xu authored
* fix
* fix
- 10 Nov, 2025 1 commit
Dhruv Nair authored
* update (×12)
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 03 Nov, 2025 1 commit
Wang, Yi authored
* ulysses enabling in native attention path
* address review comment
* add supports_context_parallel for native attention
* update templated attention
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
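For context, the Ulysses pattern this commit wires into the native (SDPA) attention path trades sequence shards for head shards with an all-to-all, runs ordinary attention on the full sequence, then trades back. A minimal illustrative sketch, not the diffusers implementation; `all_to_all_4d` and `ulysses_sdpa` are hypothetical names, and head/sequence counts are assumed to divide evenly by the group size:

```python
import torch
import torch.distributed as dist
import torch.nn.functional as F

def all_to_all_4d(x, scatter_dim, gather_dim, group=None):
    # Split x along scatter_dim, exchange the shards across ranks,
    # and concatenate what arrives along gather_dim.
    world = dist.get_world_size(group)
    inputs = [t.contiguous() for t in torch.tensor_split(x, world, dim=scatter_dim)]
    outputs = [torch.empty_like(inputs[0]) for _ in inputs]
    dist.all_to_all(outputs, inputs, group=group)
    return torch.cat(outputs, dim=gather_dim)

def ulysses_sdpa(q, k, v, group=None):
    # q, k, v: [batch, heads, seq_local, dim], sequence-sharded per rank.
    # After the first all-to-all each rank holds all tokens for a head shard.
    q, k, v = (all_to_all_4d(t, scatter_dim=1, gather_dim=2, group=group) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    # Trade back: gather the heads, re-shard the sequence.
    return all_to_all_4d(out, scatter_dim=2, gather_dim=1, group=group)
```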
- 27 Oct, 2025 1 commit
Mikko Lauri authored
* add aiter attention backend
* Apply style fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
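AITER is AMD's ROCm attention-kernel library; once a backend like this is registered with the dispatcher, it is selected per model by name. A hedged usage sketch: the backend string "aiter" and the model id are assumptions here, so check the diffusers docs for the actual registered name.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")  # ROCm builds of PyTorch also expose GPUs as "cuda"
pipe.transformer.set_attention_backend("aiter")  # assumed backend name
image = pipe("a photo of a cat", num_inference_steps=28).images[0]
```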
- 02 Oct, 2025 1 commit
Sayak Paul authored
conditionally import torch distributed stuff.
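The pattern behind "conditionally import torch distributed stuff": guard the import so the module still loads on PyTorch builds compiled without distributed support. A condensed sketch; `get_world_size` here is an illustrative helper, not diffusers' API.

```python
import torch

if torch.distributed.is_available():
    import torch.distributed as dist
else:
    dist = None

def get_world_size() -> int:
    # Behave like a single process when distributed is unavailable
    # or simply not initialized.
    if dist is not None and dist.is_initialized():
        return dist.get_world_size()
    return 1
```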
- 24 Sep, 2025 1 commit
Aryan authored
* update
* update
* add coauthor (Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>)
* improve test
* handle ip adapter params correctly
* fix chroma qkv fusion test
* fix fastercache implementation
* fix more tests
* fight more tests
* add back set_attention_backend
* update
* update
* make style
* make fix-copies
* make ip adapter processor compatible with attention dispatcher
* refactor chroma as well
* remove rmsnorm assert
* minify and deprecate npu/xla processors
* update
* refactor
* refactor; support flash attention 2 with cp
* fix
* support sage attention with cp
* make torch compile compatible
* update
* refactor
* update
* refactor
* refactor
* add ulysses backward
* try to make dreambooth script work; accelerator backward not playing well
* Revert "try to make dreambooth script work; accelerator backward not playing well" (reverts commit 768d0ea6fa6a305d12df1feda2afae3ec80aa449)
* workaround compilation problems with triton when doing all-to-all
* support wan
* handle backward correctly
* support qwen
* support ltx
* make fix-copies
* Update src/diffusers/models/modeling_utils.py (Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>)
* apply review suggestions
* update docs
* add explanation
* make fix-copies
* add docstrings
* support passing parallel_config to from_pretrained
* apply review suggestions
* make style
* update
* Update docs/source/en/api/parallel.md (Co-authored-by: Aryan <aryan@huggingface.co>)
* up
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
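Based on the "support passing parallel_config to from_pretrained" and "support wan" bullets, usage plausibly looks like the sketch below; the exact keyword and config fields should be checked against docs/source/en/api/parallel.md. Assumes a torchrun launch with one GPU per process.

```python
import torch
import torch.distributed as dist
from diffusers import ContextParallelConfig, WanTransformer3DModel

dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank())

transformer = WanTransformer3DModel.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    parallel_config=ContextParallelConfig(ulysses_degree=2),  # assumed field name
)
```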
- 03 Sep, 2025 1 commit
Sayak Paul authored
* feat: try loading fa3 using kernels when available.
* up
* change to Hub.
* up (×3)
* switch env var.
* up (×6)
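The idea in this commit: when the `kernels` library is installed, fetch a prebuilt FlashAttention-3 build from the Hub instead of requiring a local compile, gated behind an environment variable. A hedged sketch; the env-var name and Hub repo id are assumptions.

```python
import os

# Assumed variable name; the commit only says the gate "switch[ed] env var".
os.environ["DIFFUSERS_ENABLE_HUB_KERNELS"] = "yes"

from kernels import get_kernel

# Assumed repo id for the community FA3 build on the Hub.
flash_attn_3 = get_kernel("kernels-community/flash-attn3")
```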
- 30 Aug, 2025 1 commit
Leo Jiang authored
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
Co-authored-by: Aryan <aryan@huggingface.co>
- 23 Aug, 2025 1 commit
Aishwarya Badlani authored
* Fix PyTorch 2.3.1 compatibility: add version guard for torch.library.custom_op
  - Add hasattr() check for torch.library.custom_op and register_fake
  - These functions were added in PyTorch 2.4, causing import failures in 2.3.1
  - Both decorators and functions are now properly guarded with version checks
  - Maintains backward compatibility while preserving functionality
  Fixes #12195
* Use dummy decorators approach for PyTorch version compatibility
  - Replace hasattr check with version string comparison
  - Add no-op decorator functions for PyTorch < 2.4.0
  - Follows pattern from #11941 as suggested by reviewer
  - Maintains cleaner code structure without indentation changes
* Update src/diffusers/models/attention_dispatch.py (update all the decorator usages)
* Update src/diffusers/models/attention_dispatch.py (×3)
* Move version check to top of file and use private naming as requested
* Apply style fixes
---------
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
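The pattern this commit converges on: compare versions once at the top of the module and substitute no-op decorators when `torch.library.custom_op`/`register_fake` (added in PyTorch 2.4) are missing, so importing under 2.3.x no longer fails. A condensed sketch of that dummy-decorator approach; signatures are trimmed to what the pattern needs.

```python
import torch
from packaging import version

if version.parse(torch.__version__) >= version.parse("2.4.0"):
    _custom_op = torch.library.custom_op
    _register_fake = torch.library.register_fake
else:
    # No-op stand-ins so decorated functions still import on < 2.4.0.
    def _custom_op(name, fn=None, /, *, mutates_args=None, device_types=None, schema=None):
        def wrap(func):
            return func
        return wrap if fn is None else fn

    def _register_fake(op, fn=None, /, *, lib=None):
        def wrap(func):
            return func
        return wrap if fn is None else fn
```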
- 12 Aug, 2025 1 commit
Leo Jiang authored
[Bugfix] typo error in npu FA
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
Co-authored-by: Aryan <aryan@huggingface.co>
- 22 Jul, 2025 1 commit
Aryan authored
* update
* update
* update
- 17 Jul, 2025 1 commit
Aryan authored
* update
* update
* add coauthor (Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>)
* improve test
* handle ip adapter params correctly
* fix chroma qkv fusion test
* fix fastercache implementation
* fix more tests
* fight more tests
* add back set_attention_backend
* update
* update
* make style
* make fix-copies
* make ip adapter processor compatible with attention dispatcher
* refactor chroma as well
* remove rmsnorm assert
* minify and deprecate npu/xla processors
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
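The "attention dispatcher" these bullets refer to centralizes backend selection: processors call a single entry point and a string picks the implementation, which is what makes `set_attention_backend` possible. A simplified, hypothetical registry sketch, not diffusers' actual code:

```python
from typing import Callable, Dict

import torch
import torch.nn.functional as F

_BACKENDS: Dict[str, Callable] = {}

def register_backend(name: str):
    # Decorator that files an attention implementation under a name.
    def decorator(fn: Callable) -> Callable:
        _BACKENDS[name] = fn
        return fn
    return decorator

@register_backend("native")
def _native_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    return F.scaled_dot_product_attention(q, k, v)

def dispatch_attention(q, k, v, backend: str = "native") -> torch.Tensor:
    # Every processor funnels through here; swapping backends is one string.
    return _BACKENDS[backend](q, k, v)
```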