- 03 Oct, 2025 1 commit
-
-
Linoy Tsaban authored
* make qwen and kontext uv compatible
* add torchvision
* add torchvision
* add datasets, bitsandbytes, prodigyopt
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
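For context, making a standalone script "uv compatible" typically means declaring its dependencies as PEP 723 inline script metadata that `uv run` can read. A hedged sketch of what such a header looks like (the exact block in the diffusers scripts may differ; only the four package names come from the commit above):

```python
# /// script
# dependencies = [
#     "torchvision",
#     "datasets",
#     "bitsandbytes",
#     "prodigyopt",
# ]
# ///
```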
-
- 02 Oct, 2025 3 commits
-
-
Benjamin Bossan authored
I noticed that the test should be for the option check_compiled="ignore" but it was using check_compiled="warn". This has been fixed; the correct argument is now passed. However, the fact that the test passed anyway means it was incorrect to begin with: the way logs are collected does not capture the logger.warning call here (not sure why). To amend this, I'm now using assertNoLogs. With this change, the test correctly fails when the wrong argument is passed.
-
Sayak Paul authored
conditionally import torch distributed stuff.
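A guarded-import pattern like the one this commit describes can be sketched with a generic helper (this is an illustration, not the diffusers code):

```python
import importlib
import importlib.util

def optional_import(name):
    """Return the named module if it is importable, else None."""
    try:
        if importlib.util.find_spec(name) is None:
            return None
    except ModuleNotFoundError:
        # The parent package (e.g. torch) is not installed at all.
        return None
    return importlib.import_module(name)

# e.g. dist = optional_import("torch.distributed"), then guard all
# usage on `dist is not None` instead of importing unconditionally.
json_mod = optional_import("json")
missing = optional_import("definitely_not_a_module_xyz")
```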
-
Sayak Paul authored
xfail failing tests in CI.
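For reference, pytest's `@pytest.mark.xfail` records a known failure without breaking the suite. The stdlib analogue is `unittest.expectedFailure`, sketched here (not the actual CI change):

```python
import io
import unittest

class KnownCIFailure(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_broken_on_this_runner(self):
        # Deliberately failing assertion: it is recorded as an
        # expected failure, so the overall run still succeeds.
        self.assertEqual(1, 2)

suite = unittest.TestLoader().loadTestsFromTestCase(KnownCIFailure)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```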
-
- 01 Oct, 2025 1 commit
-
-
Sayak Paul authored
* cache non lora pipeline outputs. * up * up * up * up * Revert "up" This reverts commit 772c32e43397f25919c29bbbe8ef9dc7d581cfb8. * up * Revert "up" This reverts commit cca03df7fce55550ed28b59cadec12d1db188283. * up * up * add . * up * up * up * up * up * up
-
- 30 Sep, 2025 5 commits
-
-
Steven Liu authored
* change syntax * make style
-
Steven Liu authored
* init * feedback * feedback * feedback * feedback * feedback * feedback
-
Lucain authored
* Allow prerelease when installing transformers from main * maybe better * maybe better * and now? * just bored * should be better * works now
-
Yao Matrix authored
Fix XPU unit-test failures with latest PyTorch
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
-
Dhruv Nair authored
* update * update * update * update * update
-
- 29 Sep, 2025 6 commits
-
-
YiYi Xu authored
* fix
* add mellon node registry
* style
* update docstring to include more info!
* support custom node mellon
* HTTPError -> HfHubHTTPError
* up
* Update src/diffusers/modular_pipelines/qwenimage/node_utils.py
-
Steven Liu authored
* init * config * lora metadata * feedback * fix * cache allocator warmup for from_single_file * feedback * feedback
-
Steven Liu authored
* init * feedback --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* feat: support aobaseconfig classes.
* [docs] AOBaseConfig (#12302) init
* up
* replace with is_torchao_version
* up
* up
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
-
Akshay Babbar authored
* fix: preserve boolean dtype for attention masks in ChromaPipeline
  - Convert attention masks to bool and prevent dtype corruption
  - Fix both positive and negative mask handling in _get_t5_prompt_embeds
  - Remove float conversion in _prepare_attention_mask method
  Fixes #12116
* test: add ChromaPipeline attention mask dtype tests
* test: add slow ChromaPipeline attention mask tests
* chore: removed comments
* refactor: removing redundant type conversion
* Remove dedicated dtype tests as per feedback
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
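The dtype-preservation idea behind this fix can be sketched in plain Python (the real change operates on torch tensors inside ChromaPipeline; `to_bool_mask` is a hypothetical stand-in):

```python
def to_bool_mask(mask):
    # Keep attention masks boolean rather than casting them to float,
    # so masked positions stay True/False instead of 1.0/0.0 and the
    # dtype survives downstream mask handling.
    return [bool(x) for x in mask]

positive_mask = to_bool_mask([1, 1, 0])
negative_mask = to_bool_mask([0.0, 1.0])
```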
-
Sayak Paul authored
up
-
- 26 Sep, 2025 3 commits
-
-
Sayak Paul authored
* start unbloating docstrings (save_lora_weights). * load_lora_weights() * lora_state_dict * fuse_lora * unfuse_lora * load_lora_into_transformer
-
Sayak Paul authored
* slight edits to the attention backends docs.
* Update docs/source/en/optimization/attention_backends.md
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
-
Sayak Paul authored
* disable installing transformers from main in CI for now. * up * up
-
- 25 Sep, 2025 1 commit
-
-
Lucain authored
* Support huggingface_hub 0.x and 1.x * httpx
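Supporting two major versions of a dependency usually means branching once on the installed version and coding against a single internal surface. A generic sketch (the names are illustrative, not the actual diffusers shim):

```python
def major_version(version_string: str) -> int:
    # "0.35.0" -> 0, "1.0.0rc1" -> 1 (pre-release tags live in later fields)
    return int(version_string.split(".")[0])

# Branch once at import time; the rest of the codebase only checks the flag.
IS_HUB_V1 = major_version("1.0.0rc1") >= 1
```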
-
- 24 Sep, 2025 8 commits
-
-
DefTruth authored
* docs: introduce cache-dit to diffusers (same title repeated across multiple commits)
* misc: update examples link
* misc: update examples link
* Refine documentation for CacheDiT features: updated the wording for clarity and consistency; adjusted sections on cache acceleration, automatic block adapter, patch functor, and hybrid cache configuration.
-
Aryan authored
* update
* update
* add coauthor
  Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>
* improve test
* handle ip adapter params correctly
* fix chroma qkv fusion test
* fix fastercache implementation
* fix more tests
* fight more tests
* add back set_attention_backend
* update
* update
* make style
* make fix-copies
* make ip adapter processor compatible with attention dispatcher
* refactor chroma as well
* remove rmsnorm assert
* minify and deprecate npu/xla processors
* update
* refactor
* refactor; support flash attention 2 with cp
* fix
* support sage attention with cp
* make torch compile compatible
* update
* refactor
* update
* refactor
* refactor
* add ulysses backward
* try to make dreambooth script work; accelerator backward not playing well
* Revert "try to make dreambooth script work; accelerator backward not playing well" This reverts commit 768d0ea6fa6a305d12df1feda2afae3ec80aa449.
* workaround compilation problems with triton when doing all-to-all
* support wan
* handle backward correctly
* support qwen
* support ltx
* make fix-copies
* Update src/diffusers/models/modeling_utils.py
  Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* apply review suggestions
* update docs
* add explanation
* make fix-copies
* add docstrings
* support passing parallel_config to from_pretrained
* apply review suggestions
* make style
* update
* Update docs/source/en/api/parallel.md
  Co-authored-by: Aryan <aryan@huggingface.co>
* up
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
-
Alberto Chimenti authored
Fixed WanVACEPipeline to allow prompt to be None and skip encoding step
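The optional-prompt behaviour can be sketched as a simple guard (`maybe_encode_prompt` is a hypothetical helper, not the actual WanVACEPipeline code):

```python
def maybe_encode_prompt(prompt, encode_fn):
    # Skip the text-encoding step entirely when no prompt is given,
    # instead of crashing on a None prompt.
    if prompt is None:
        return None
    return encode_fn(prompt)

skipped = maybe_encode_prompt(None, str.upper)
encoded = maybe_encode_prompt("a cat", str.upper)
```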
-
Yao Matrix authored
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
-
Yao Matrix authored
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
-
Sayak Paul authored
Disable xFormers tests for pipelines where it isn't popular.
-
Dhruv Nair authored
* update * update * update
-
Sayak Paul authored
* single scheduler please. * up * up * up
-
- 23 Sep, 2025 3 commits
-
-
Steven Liu authored
* init * feedback * update * feedback * fixes
-
Dhruv Nair authored
* update * update
-
Steven Liu authored
* init * toctree * scheduler suggestions * toctree
-
- 22 Sep, 2025 7 commits
-
-
SahilCarterr authored
* Fixes chroma docs * fix docs; docs are now consistent
-
Sayak Paul authored
* up * xfail some tests * up * up
-
Sayak Paul authored
* factor out the overlaps in save_lora_weights(). * remove comment. * remove comment. * up * fix-copies
-
SahilCarterr authored
* Fixes enable_xformers_memory_efficient_attention() * Update attention.py
-
Chen Mingyi authored
-
Sayak Paul authored
xfail some kandinsky tests.
-
Jason Cox authored
* Upgrade huggingface-hub to version 0.35.0: updated huggingface-hub from 0.26.1 to 0.35.0.
* Add uvicorn and accelerate to requirements
* Fix install instructions for server
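Based on the commit message above, the server requirements change would look roughly like this (only the huggingface-hub pin is stated in the commit; the other two lines are listed unpinned because no versions are given):

```text
huggingface-hub==0.35.0
uvicorn
accelerate
```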
-
- 21 Sep, 2025 1 commit
-
-
naykun authored
* feat: add support for qwenimageeditplus * add copies statement * fix copies statement * remove vl_processor reference
-
- 20 Sep, 2025 1 commit
-
-
Dhruv Nair authored
update
-