- 02 Apr, 2025 12 commits
-
-
Dhruv Nair authored
* update * update * update
-
lakshay sharma authored
added onnxruntime-vitisai for the custom-built onnxruntime package
-
hlky authored
-
hlky authored
* Fix enable_sequential_cpu_offload in CogView4Pipeline * make fix-copies
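A hedged sketch of the offload path this fix touches; the model id and prompt below are illustrative assumptions, not taken from the commit:

```python
import torch
from diffusers import CogView4Pipeline

pipe = CogView4Pipeline.from_pretrained("THUDM/CogView4-6B", torch_dtype=torch.bfloat16)
# Streams submodules to the GPU one at a time instead of keeping the whole pipeline resident.
pipe.enable_sequential_cpu_offload()
image = pipe(prompt="a lighthouse at dawn").images[0]
```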
-
hlky authored
-
hlky authored
-
Fanli Lin authored
* add xpu part * fix more cases * remove some cases * no canny * format fix
-
hlky authored
* allow models to run with a user-provided dtype map instead of a single dtype * make style * Add warning, change `_` to `default` * make style * add test * handle shared tensors * remove warning --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
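A minimal sketch of the dtype-map loading described in this commit; the pipeline, model id, and component names are assumptions for illustration, while the `default` fallback key comes from the commit message:

```python
import torch
from diffusers import HunyuanVideoPipeline

# Components named in the map get their own dtype; everything else falls back to "default".
pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo",
    torch_dtype={"transformer": torch.bfloat16, "default": torch.float16},
)
print(pipe.transformer.dtype, pipe.vae.dtype)  # expected: torch.bfloat16 torch.float16
```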
-
Bruno Magalhaes authored
* rewrite memory count without implicitly using dimensions by @ic-synth * replace F.pad by built-in padding in Conv3D * in-place sums to reduce memory allocations * fixed trailing whitespace * file reformatted * in-place sums * simpler in-place expressions * removed in-place sum, may affect backward propagation logic * removed in-place sum, may affect backward propagation logic * removed in-place sum, may affect backward propagation logic * reverted change
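A generic PyTorch illustration (not the repository's code) of the padding change described above: fusing the zero padding into `Conv3d` avoids materializing a separate padded copy of the activation, which is the kind of allocation this commit trims:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 16, 8, 32, 32)  # (batch, channels, depth, height, width)

# Before: an explicit F.pad allocates a padded intermediate tensor.
conv = nn.Conv3d(16, 16, kernel_size=3)
y_before = conv(F.pad(x, (1, 1, 1, 1, 1, 1)))

# After: the equivalent zero padding handled inside the convolution, no extra copy.
conv_padded = nn.Conv3d(16, 16, kernel_size=3, padding=1)
y_after = conv_padded(x)
```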
-
Eliseu Silva authored
fix: optional components verification on load
-
jiqing-feng authored
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
-
Yao Matrix authored
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
-
- 01 Apr, 2025 2 commits
-
-
Dhruv Nair authored
* update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update
-
Fanli Lin authored
no cuda only
-
- 31 Mar, 2025 4 commits
-
-
kakukakujirori authored
* Bug fix in ltx * Assume packed latents. --------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
hlky authored
-
Mark authored
-
Aryan authored
* update * raise warning and round to nearest multiple of scale factor
-
- 29 Mar, 2025 1 commit
-
-
hlky authored
-
- 28 Mar, 2025 2 commits
-
-
Dhruv Nair authored
* update * update
-
hlky authored
* WanI2V encode_image
-
- 26 Mar, 2025 2 commits
-
-
kentdan3msu authored
Set `self._hf_peft_config_loaded` to True when a LoRA is successfully loaded using `load_lora_adapter` in the `PeftAdapterMixin` class (#11155). Fixes huggingface/diffusers#11148. Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
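A hedged sketch of the behavior this fix guarantees: after a successful model-level LoRA load through `load_lora_adapter`, the PEFT bookkeeping flag is set. The LoRA repo id and weight filename are placeholders, not verified checkpoints:

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet"
)
unet.load_lora_adapter(
    "some-user/some-sd15-lora",                      # placeholder LoRA repo
    weight_name="pytorch_lora_weights.safetensors",  # placeholder filename
    prefix="unet",
)
assert unet._hf_peft_config_loaded  # set by load_lora_adapter after this fix
```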
-
Dhruv Nair authored
* update * update * update * update
-
- 25 Mar, 2025 1 commit
-
-
Junsong Chen authored
-
- 24 Mar, 2025 4 commits
-
-
Aryan authored
* update * Update docs/source/en/optimization/memory.md * Apply suggestions from code review Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> * apply review suggestions * update --------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
Jun Yeop Na authored
* remove typo from the Korean ControlNet training doc * removed more paragraphs to remain in sync with the English document
-
Aryan authored
* update * update * update * add tests * update docs * raise value error * warning for true cfg and guidance scale * fix test
-
Junsong Chen authored
* fix bug in sana conversion script; * add more model paths; --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 23 Mar, 2025 2 commits
-
-
Yuxuan Zhang authored
* 1 * change to channel 1 * cogview4 control training * add CacheMixin * 1 * remove initial_input_channels change for val * 1 * update * use 3.5 * new loss * 1 * use imagetoken * for megatron convert * 1 * train con and uc * 2 * remove guidance_scale * Update pipeline_cogview4_control.py * fix * use cogview4 pipeline with timestep * update shift_factor * remove the uncond * add max length * change convert and use GLMModel instead of GLMForCausalLM * fix * [cogview4] Add attention mask support to transformer model * [fix] Add attention mask for padded token * update * remove padding type * Update train_control_cogview4.py * resolve conflicts with #10981 * add control convert * use control format * fix * add missing import * update with cogview4 format * make style * Update pipeline_cogview4_control.py * Update pipeline_cogview4_control.py * remove * Update pipeline_cogview4_control.py * put back * Apply style fixes --------- Co-authored-by: OleehyO <leehy0357@gmail.com> Co-authored-by: yiyixuxu <yixu310@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
-
Tolga Cangöz authored
* [Documentation] Update README and example code with additional usage instructions for AnyText * [Documentation] Update README for AnyTextPipeline and improve logging in code * Remove wget command for font file from example docstring in anytext.py
-
- 21 Mar, 2025 4 commits
-
-
hlky authored
* Don't use `torch_dtype` when `quantization_config` is set * up * djkajka * Apply suggestions from code review --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
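A sketch of the interaction this commit addresses: when `quantization_config` is supplied, a `torch_dtype` passed alongside it is not applied to the quantized weights, so the example relies on the quantization config alone (the model id is an assumption for illustration):

```python
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,  # no torch_dtype here; the config drives the dtypes
)
```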
-
YiYi Xu authored
* add sana-sprint --------- Co-authored-by: Junsong Chen <cjs1020440147@icloud.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Aryan <aryan@huggingface.co>
-
Aryan authored
* init * update * update * update * make style * update * fix * make it work with guidance distilled models * update * make fix-copies * add tests * update * apply_faster_cache -> apply_fastercache * fix * reorder * update * refactor * update docs * add fastercache to CacheMixin * update tests * Apply suggestions from code review * make style * try to fix partial import error * Apply style fixes * raise warning * update --------- Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
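A hedged sketch of enabling FasterCache through the `CacheMixin` route this commit adds; the pipeline, model id, and config fields are assumptions based on how the cache helpers are typically wired up, not verified values:

```python
import torch
from diffusers import CogVideoXPipeline, FasterCacheConfig

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.to("cuda")

config = FasterCacheConfig(
    spatial_attention_block_skip_range=2,                     # reuse attention states every other block
    current_timestep_callback=lambda: pipe.current_timestep,  # lets the cache track denoising progress
)
pipe.transformer.enable_cache(config)  # CacheMixin entry point

video = pipe("a panda playing a guitar").frames[0]
```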
-
CyberVy authored
* Update pipeline_controlnet_inpaint.py * Apply style fixes
-
- 20 Mar, 2025 6 commits
-
-
Parag Ekbote authored
Add 4 notebooks and update the missing links in the examples README.
-
YiYi Xu authored
up
-
Dhruv Nair authored
* update * update * clean up
-
Fanli Lin authored
* enable bnb on xpu * add 2 more cases * add missing change * add missing change * add one more * enable cuda only tests on xpu * enable big gpu cases
-
hlky authored
* Flux img2img remote encode * Flux inpaint * -copied from
-
Junsong Chen authored
* fix bug when running PixArt-DMD inference with `num_inference_steps=1` * use return_dict=False and return the [1] element for the 1-step PixArt model, which works for both LCM and DMD
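A hedged sketch of the one-step PixArt-DMD inference path this fix targets; the checkpoint id and guidance settings are assumptions, not verified values:

```python
import torch
from diffusers import PixArtAlphaPipeline

pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-Alpha-DMD-XL-2-512x512",  # assumed DMD-distilled checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a small cactus wearing a straw hat",
    num_inference_steps=1,   # the single-step case this commit fixes
    guidance_scale=0.0,      # distilled models are typically run without CFG
).images[0]
```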
-