- 01 Apr, 2025 1 commit
Dhruv Nair authored
* update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update
- 31 Mar, 2025 2 commits
kakukakujirori authored
* Bug fix in ltx * Assume packed latents.
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Aryan authored
* update * raise warning and round to nearest multiple of scale factor
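A minimal, self-contained illustration of the rounding-with-warning behavior this commit describes (the function and variable names below are illustrative, not the pipeline's actual code):

```python
import warnings

def round_to_nearest_multiple(value: int, scale_factor: int) -> int:
    """Round `value` to the nearest multiple of `scale_factor`, warning when it changes."""
    rounded = max(scale_factor, round(value / scale_factor) * scale_factor)
    if rounded != value:
        warnings.warn(f"{value} is not divisible by {scale_factor}; using {rounded} instead.")
    return rounded

print(round_to_nearest_multiple(481, 16))  # -> 480, with a warning
```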
- 29 Mar, 2025 1 commit
hlky authored
- 28 Mar, 2025 1 commit
hlky authored
* WanI2V encode_image
- 26 Mar, 2025 2 commits
kentdan3msu authored
Set `self._hf_peft_config_loaded` to True when a LoRA is loaded using `load_lora_adapter` in the `PeftAdapterMixin` class (#11155). Sets the `_hf_peft_config_loaded` flag when a LoRA is successfully loaded via `load_lora_adapter`. Fixes huggingface/diffusers#11148.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
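A heavily abbreviated sketch of what the fix amounts to (not the actual diffusers source; the loading internals are elided):

```python
class PeftAdapterMixin:
    """Abbreviated stand-in for diffusers' PEFT adapter mixin."""

    _hf_peft_config_loaded = False

    def load_lora_adapter(self, pretrained_model_name_or_path_or_dict, **kwargs):
        # ... resolve the LoRA state dict and inject adapter layers via PEFT ...
        # The fix: record that a PEFT config is now attached, so downstream adapter
        # utilities (set_adapters, delete_adapters, etc.) no longer refuse to run.
        self._hf_peft_config_loaded = True
```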
Dhruv Nair authored
* update * update * update * update
- 25 Mar, 2025 1 commit
Junsong Chen authored
- 24 Mar, 2025 2 commits
Aryan authored
* update * Update docs/source/en/optimization/memory.md * Apply suggestions from code review * apply review suggestions * update
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Aryan authored
* update * update * update * add tests * update docs * raise value error * warning for true cfg and guidance scale * fix test
- 23 Mar, 2025 1 commit
Yuxuan Zhang authored
* 1 * change to channel 1 * cogview4 control training * add CacheMixin * 1 * remove initial_input_channels change for val * 1 * update * use 3.5 * new loss * 1 * use imagetoken * for megatron convert * 1 * train con and uc * 2 * remove guidance_scale * Update pipeline_cogview4_control.py * fix * use cogview4 pipeline with timestep * update shift_factor * remove the uncond * add max length * change convert and use GLMModel instead of GLMForCausalLM * fix * [cogview4] Add attention mask support to transformer model * [fix] Add attention mask for padded token * update * remove padding type * Update train_control_cogview4.py * resolve conflicts with #10981 * add control convert * use control format * fix * add missing import * update with cogview4 format * make style * Update pipeline_cogview4_control.py * Update pipeline_cogview4_control.py * remove * Update pipeline_cogview4_control.py * put back * Apply style fixes
Co-authored-by: OleehyO <leehy0357@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 21 Mar, 2025 4 commits
hlky authored
* Don't use `torch_dtype` when `quantization_config` is set * up * Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
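For context, this is the loading pattern the change is about; a sketch assuming the bitsandbytes backend (the checkpoint id and 4-bit settings are placeholders):

```python
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# When quantization_config is given, the quantization backend controls the dtypes,
# so torch_dtype should simply be omitted here; per the commit title, passing both
# is no longer applied silently.
quant_config = BitsAndBytesConfig(load_in_4bit=True)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # placeholder checkpoint
    subfolder="transformer",
    quantization_config=quant_config,
)
```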
YiYi Xu authored
* add sana-sprint
Co-authored-by: Junsong Chen <cjs1020440147@icloud.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
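A usage sketch for the new pipeline, assuming it follows the usual diffusers pattern; the class name `SanaSprintPipeline` and the checkpoint id are assumptions to check against the docs added in this PR:

```python
import torch
from diffusers import SanaSprintPipeline

# Placeholder checkpoint id; substitute the SANA-Sprint weights referenced in the docs.
pipe = SanaSprintPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers",
    torch_dtype=torch.bfloat16,
).to("cuda")

# SANA-Sprint is a few-step distilled model, so very low step counts are the point.
image = pipe("a tiny astronaut hatching from an egg on the moon", num_inference_steps=2).images[0]
image.save("sana_sprint.png")
```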
Aryan authored
* init * update * update * update * make style * update * fix * make it work with guidance distilled models * update * make fix-copies * add tests * update * apply_faster_cache -> apply_fastercache * fix * reorder * update * refactor * update docs * add fastercache to CacheMixin * update tests * Apply suggestions from code review * make style * try to fix partial import error * Apply style fixes * raise warning * update
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
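Since the commit adds FasterCache to `CacheMixin`, enabling it presumably looks like the other cache hooks; the config fields below are assumptions from memory of `FasterCacheConfig` and should be checked against the docs added in this PR:

```python
import torch
from diffusers import CogVideoXPipeline, FasterCacheConfig

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16).to("cuda")

# Assumed config fields; consult the FasterCache docs for the full set of options.
config = FasterCacheConfig(
    spatial_attention_block_skip_range=2,
    spatial_attention_timestep_skip_range=(-1, 681),
    current_timestep_callback=lambda: pipe.current_timestep,
)
pipe.transformer.enable_cache(config)
```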
-
CyberVy authored
* Update pipeline_controlnet_inpaint.py * Apply style fixes
- 20 Mar, 2025 5 commits
YiYi Xu authored
up
Dhruv Nair authored
* update * update * clean up
Fanli Lin authored
* enable bnb on xpu * add 2 more cases * add missing change * add missing change * add one more * enable cuda only tests on xpu * enable big gpu cases
hlky authored
* Flux img2img remote encode * Flux inpaint * -copied from
Junsong Chen authored
* fix bug when pixart-dmd inference with `num_inference_steps=1` * use return_dict=False and return [1] element for 1-step pixart model, which works for both lcm and dmd
- 19 Mar, 2025 2 commits
Fanli Lin authored
* enable bnb on xpu * add 2 more cases * add missing change * add missing change * add one more
Linoy Tsaban authored
* @hlky t2v->i2v * Apply style fixes * try with ones to not nullify layers * fix method name * revert to zeros * add check to state_dict keys * add comment * copies fix * Revert "copies fix" This reverts commit 051f534d185c0ea065bf36a9926c4b48f496d429. * remove copied from * Update src/diffusers/loaders/lora_pipeline.py * Update src/diffusers/loaders/lora_pipeline.py * update * update * Update src/diffusers/loaders/lora_pipeline.py * Apply style fixes
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Linoy <linoy@hf.co>
Co-authored-by: hlky <hlky@hlky.ac>
- 18 Mar, 2025 6 commits
hlky authored
* Quality options in `export_to_video` * make style
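A small usage sketch; `quality` is my assumption for the new keyword (imageio-style 0-10 scale), so check the updated `export_to_video` signature:

```python
import numpy as np
from diffusers.utils import export_to_video

# Dummy frames stand in for a video pipeline's output (list of HxWx3 float arrays in [0, 1]).
frames = [np.zeros((256, 256, 3), dtype=np.float32) for _ in range(16)]
export_to_video(frames, "sample.mp4", fps=8, quality=9)  # `quality` kwarg assumed from this PR
```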
Aryan authored
* update * update
Cheng Jin authored
Modify UNet's ResNet implementation to resolve stride mismatch in Torch's DDP
co63oc authored
* Fix pipeline_flux_controlnet.py * Fix style
Aryan authored
update
Aryan authored
* update
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
- 17 Mar, 2025 1 commit
C authored
* fix_wan_i2v_quality * Update src/diffusers/pipelines/wan/pipeline_wan_i2v.py * Update src/diffusers/pipelines/wan/pipeline_wan_i2v.py * Update src/diffusers/pipelines/wan/pipeline_wan_i2v.py * Update pipeline_wan_i2v.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
- 15 Mar, 2025 2 commits
Yuxuan Zhang authored
* cogview4 control training
Co-authored-by: OleehyO <leehy0357@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Dimitri Barbot authored
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
- 14 Mar, 2025 3 commits
Juan Acevedo authored
Reverts an accidental change that removed attn_mask in attention. Improves Flux on PyTorch/XLA (ptxla) by using flash block sizes. Moves encoding outside the for loop.
Co-authored-by: Juan Acevedo <jfacevedo@google.com>
Sayak Paul authored
feat: support non-diffusers wan t2v loras.
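In practice this means original (non-diffusers) Wan T2V LoRA files can be passed straight to `load_lora_weights`; a sketch with placeholder checkpoint id and LoRA path:

```python
import torch
from diffusers import WanPipeline

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",  # placeholder checkpoint id
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("path/to/non_diffusers_wan_t2v_lora.safetensors")  # placeholder path
```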
Sayak Paul authored
* restrict memory tests for quanto for certain schemes. * Apply suggestions from code review * fixes * style
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
- 13 Mar, 2025 2 commits
ZhengKai91 authored
* get_1d_rotary_pos_embed support npu * Update src/diffusers/models/embeddings.py
Co-authored-by: Kai zheng <kaizheng@KaideMacBook-Pro.local>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
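For reference, basic usage of the helper this commit touches (the NPU-specific part is the device/tensor handling inside the function; this snippet runs on CPU as-is):

```python
from diffusers.models.embeddings import get_1d_rotary_pos_embed

# Compute real-valued 1D RoPE frequencies for 16 positions with a head dim of 64.
cos, sin = get_1d_rotary_pos_embed(dim=64, pos=16, use_real=True)
print(cos.shape, sin.shape)
```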
hlky authored
* Rename Lumina(2)Text2ImgPipeline -> Lumina(2)Pipeline
Co-authored-by: YiYi Xu <yixu310@gmail.com>
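The user-facing effect of the rename is just the import name; a sketch (the checkpoint id is a placeholder):

```python
from diffusers import Lumina2Pipeline  # formerly Lumina2Text2ImgPipeline

pipe = Lumina2Pipeline.from_pretrained("Alpha-VLLM/Lumina-Image-2.0")  # placeholder id
```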
- 12 Mar, 2025 4 commits
Sayak Paul authored
* move to warning. * test related changes.
hlky authored
* Wan Pipeline scaling fix, type hint warning, multi generator fix * Apply suggestions from code review
hlky authored
* [hybrid inference 🍯🐝] Add VAE encode * _toctree: add vae encode * Add endpoints, tests * vae_encode docs * vae encode benchmarks * api reference * changelog * Update docs/source/en/hybrid_inference/overview.md * update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
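A usage sketch under the assumption that the new encode helper mirrors the existing `remote_decode` utility; the function name, module path, endpoint URL, and keyword names below are assumptions and should be checked against the hybrid inference docs added in this PR:

```python
from PIL import Image
from diffusers.utils.remote_utils import remote_encode  # assumed name and location

# Assumed placeholder endpoint for a hosted VAE encoder.
ENDPOINT = "https://<your-hybrid-inference-vae-endpoint>/"

image = Image.open("input.png").convert("RGB")
latent = remote_encode(endpoint=ENDPOINT, image=image)  # assumed signature
print(latent.shape)
```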
hlky authored