- 24 Jan, 2025 2 commits
-
-
Wenhao Sun authored
* add pipeline_stable_diffusion_xl_attentive_eraser
* add pipeline_stable_diffusion_xl_attentive_eraser_make_style
* make style and add example output
* update Docs
* add Oral
* update_review
* update_review_ms
Co-authored-by: Other Contributor <a457435687@126.com>
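A minimal sketch of how a community pipeline like this is typically loaded through the generic `custom_pipeline` mechanism; the SDXL base checkpoint is an assumption, and the actual inputs (source image, object mask, erasing parameters) follow the example output added in this PR rather than anything shown here.

```python
# Hedged sketch: loading the community pipeline added here via custom_pipeline.
# The base checkpoint is an assumption, not taken from the commit itself.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    custom_pipeline="pipeline_stable_diffusion_xl_attentive_eraser",
    torch_dtype=torch.float16,
).to("cuda")
# The call signature (image, mask, removal strength, ...) is documented in the
# community pipeline's docstring and example output added in this PR.
```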
-
Sayak Paul authored
* feat: add a lora extraction script. * updates
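The script itself is not reproduced here; the snippet below is only an illustrative sketch of the underlying idea, extracting a low-rank LoRA pair from the difference between a fine-tuned and a base weight matrix via truncated SVD. `extract_lora_pair` is a hypothetical helper, not the script's API.

```python
# Hypothetical illustration of LoRA extraction via truncated SVD of the weight
# delta; this is not the script added in this commit.
import torch

def extract_lora_pair(w_base: torch.Tensor, w_tuned: torch.Tensor, rank: int = 16):
    """Return (down, up) such that up @ down approximates w_tuned - w_base."""
    delta = (w_tuned - w_base).float()
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    up = u[:, :rank] * s[:rank]   # (out_features, rank)
    down = vh[:rank, :]           # (rank, in_features)
    return down, up
```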
-
- 23 Jan, 2025 7 commits
-
-
Yaniv Galron authored
We already set the unet to requires_grad=False at line 506.
Co-authored-by: Aryan <aryan@huggingface.co>
-
hlky authored
* Add IP-Adapter example to Flux docs
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
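A rough sketch of the usage pattern the docs example covers; the adapter repository, weight file, and image encoder ids below are assumptions rather than quotes from the PR.

```python
# Hedged sketch of IP-Adapter usage with Flux; repo ids and weight names are
# assumptions, not copied from the docs change.
import torch
from diffusers import FluxPipeline
from diffusers.utils import load_image

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_ip_adapter(
    "XLabs-AI/flux-ip-adapter",
    weight_name="ip_adapter.safetensors",
    image_encoder_pretrained_model_name_or_path="openai/clip-vit-large-patch14",
)
pipe.set_ip_adapter_scale(1.0)

style_image = load_image("reference.png")  # any local or remote reference image
image = pipe(
    prompt="a dog in the style of the reference image",
    ip_adapter_image=style_image,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
```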
-
Raul Ciotescu authored
fix mixed-up vars
-
Steven Liu authored
* uv * feedback
-
Sayak Paul authored
fix image path in para attention docs
-
Sayak Paul authored
* fixes * fixes * fixes * updates
-
kahmed10 authored
add onnxruntime-migraphx to import_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 22 Jan, 2025 3 commits
-
-
Dhruv Nair authored
update
-
Aryan authored
improve error message
-
Aryan authored
* update
* make style
* remove dynamo disable
* add coauthor
* update mixin
* add some basic tests
* non_blocking
* improvements
* norm.* -> norm
* apply suggestions from review
* add example
* update hook implementation to the latest changes from pyramid attention broadcast
* deinitialize should raise an error
* update doc page
* Apply suggestions from code review
* update docs
* refactor
* fix _always_upcast_modules for asym ae and vq_model
* fix lumina embedding forward to not depend on weight dtype
* refactor tests
* add simple lora inference tests
* _always_upcast_modules -> _precision_sensitive_module_patterns
* remove todo comments about review; revert changes to self.dtype in unets because .dtype on ModelMixin should be able to handle the fp8 weight case
* check layer dtypes in lora test
* fix UNet1DModelTests::test_layerwise_upcasting_inference
* _precision_sensitive_module_patterns -> _skip_layerwise_casting_patterns based on feedback
* skip tests in NCSNppModelTests, AutoencoderTinyTests, AutoencoderOobleckTests, and UNet1DModelTests (unsupported pytorch operations)
* layerwise_upcasting -> layerwise_casting
* skip tests for UNetRLModelTests; needs the next pytorch release for currently unimplemented operation support
* add layerwise fp8 pipeline test
* use xfail
* add assertion with fp32 comparison; add tolerance to the fp8-fp32 vs fp32-fp32 comparison (required for a few models' tests to pass)
* add note about memory consumption on the tesla CI runner for the failing test
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
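The feature this PR lands, layerwise casting to a low-precision storage dtype (such as fp8) with compute in a higher precision, is exposed roughly as sketched below; the transformer class and checkpoint are assumptions.

```python
# Hedged sketch of layerwise casting: weights stored in fp8, upcast per layer
# to bf16 at compute time. Model class and checkpoint are assumptions.
import torch
from diffusers import FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
)
transformer.enable_layerwise_casting(
    storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16
)
# Modules matching _skip_layerwise_casting_patterns (e.g. normalization layers,
# per the "norm.* -> norm" bullet above) keep their original precision.
```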
-
- 21 Jan, 2025 6 commits
-
-
Lucain authored
-
YiYi Xu authored
* add * style
-
Fanli Lin authored
* initial commit
* fix empty cache
* fix one more
* fix style
* update device functions
* update
* Update src/diffusers/utils/testing_utils.py
* Update tests/pipelines/controlnet/test_controlnet.py
* with gc.collect
* make style
* check_torch_dependencies
* add mps empty cache
* bug fix
* Apply suggestions from code review
Co-authored-by: hlky <hlky@hlky.ac>
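A minimal sketch of the device-agnostic teardown pattern these tests move to, assuming `backend_empty_cache` and `torch_device` are the helpers exposed by `diffusers.utils.testing_utils`.

```python
# Hedged sketch of the device-agnostic test cleanup pattern; helper names are
# assumed to match diffusers.utils.testing_utils.
import gc
import unittest

from diffusers.utils.testing_utils import backend_empty_cache, torch_device

class ExamplePipelineTests(unittest.TestCase):
    def tearDown(self):
        super().tearDown()
        gc.collect()
        # Dispatches to the cuda/xpu/mps empty_cache call matching torch_device.
        backend_empty_cache(torch_device)
```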
-
Muyang Li authored
Remove the FP32 Wrapper
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
-
jiqing-feng authored
* enable dreambooth_lora on other devices
* enable xpu
* check cuda device before empty cache
* fix comment
* import free_memory
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
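The `free_memory` import mentioned above refers to the helper in `diffusers.training_utils`; the sketch below shows the device-guarded cleanup it stands in for, with the manual fallback being purely illustrative.

```python
# Hedged sketch of device-agnostic memory cleanup in the training scripts.
# free_memory is the helper named in the commit; the fallback is illustrative.
import gc
import torch

try:
    from diffusers.training_utils import free_memory
except ImportError:
    def free_memory():
        gc.collect()
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
        elif hasattr(torch, "xpu") and torch.xpu.is_available():
            torch.xpu.empty_cache()

# e.g. after running validation inference inside the training loop:
free_memory()
```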
-
Sayak Paul authored
change licensing from 2024 to 2025.
-
- 20 Jan, 2025 2 commits
-
-
baymax591 authored
* bugfix: npu does not support float64
* is_mps, is_npu
Co-authored-by: 白超 <baichao19@huawei.com>
Co-authored-by: hlky <hlky@hlky.ac>
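A small sketch of the usual workaround pattern for backends without float64 support (mps and, per this fix, npu); the helper name here is illustrative, not a function from the commit.

```python
# Illustrative helper for the float64 fallback: mps and npu lack float64, so
# timestep math drops to float32 on those backends.
import torch

def timestep_dtype(device: torch.device) -> torch.dtype:
    is_mps = device.type == "mps"
    is_npu = device.type == "npu"
    return torch.float32 if (is_mps or is_npu) else torch.float64
```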
-
sunxunle authored
Signed-off-by: sunxunle <sunxunle@ampere.tech>
-
- 19 Jan, 2025 2 commits
-
-
Sayak Paul authored
set the rest of the blocks to requires_grad=False.
-
Shenghai Yuan authored
* Update __init__.py
* add consisid
* update consisid
* make style
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
* add doc
* Rename consisid .md to consisid.md
* Update geodiff_molecule_conformation.ipynb
* Update demo.ipynb
* make fix-copies
* Update docs/source/en/using-diffusers/consisid.md
* update doc & pipeline code
* fix typo
* update example
* update
* add test and update
* remove some changes from docs
* refactor
* fix
* undo changes to examples
* remove save/load and fuse methods
* link hf-doc-img & make test extremely small
* add lora
* fix test
* change expected_diff_max to 0.4
* fix link
* update docs
* remove consisid lora tests
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
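A bare loading sketch for the new pipeline; the checkpoint id is an assumption, and the full identity-preserving text-to-video workflow (face embedding preparation, prompting) lives in the consisid.md doc added in this PR.

```python
# Hedged sketch: loading the ConsisID pipeline added in this PR. The checkpoint
# id is an assumption; see docs/source/en/using-diffusers/consisid.md for the
# full workflow.
import torch
from diffusers import ConsisIDPipeline

pipe = ConsisIDPipeline.from_pretrained(
    "BestWishYsh/ConsisID-preview", torch_dtype=torch.bfloat16
).to("cuda")
```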
-
- 16 Jan, 2025 8 commits
-
-
Juan Acevedo authored
* implementing flux on TPUs with ptxla
* add xla flux attention class
* run make style/quality
* Update src/diffusers/models/attention_processor.py
* run style and quality
Co-authored-by: Juan Acevedo <jfacevedo@google.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
-
Leo Jiang authored
* NPU adaptation for RMSNorm
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
-
C authored
* add para_attn_flux.md and para_attn_hunyuan_video.md
* add enable_sequential_cpu_offload in para_attn_hunyuan_video.md
* add comment
* refactor
* fix
* Update docs/source/en/optimization/para_attn.md (review suggestions applied across many passes)
* update links
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
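The `enable_sequential_cpu_offload` addition mentioned above amounts to the pattern sketched below; the HunyuanVideo checkpoint id is an assumption, and pairing it with VAE tiling is a common companion rather than something quoted from the doc.

```python
# Hedged sketch of the memory-saving pattern the HunyuanVideo doc describes;
# the checkpoint id is an assumption.
import torch
from diffusers import HunyuanVideoPipeline

pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", torch_dtype=torch.bfloat16
)
pipe.vae.enable_tiling()  # commonly paired with offloading for video decoding
# Keeps each submodule on CPU and streams it to the GPU only when needed,
# trading speed for a much lower peak VRAM footprint.
pipe.enable_sequential_cpu_offload()
```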
-
hlky authored
* use np.int32 in scheduling * test_add_noise_device * -np.int32, fixes
-
Daniel Regado authored
Update to diffusers ip_adapter ckpt
-
hlky authored
* Move buffers to device * add test * named_buffers
-
Junyu Chen authored
* autoencoder_dc tiling
* add tiling and slicing support in SANA pipelines
* create variables for padding length because the line becomes too long
* add tiling and slicing support in pag SANA pipelines
* revert changes to tile size
* make style
* add vae tiling test
* fix SanaMultiscaleLinearAttention apply_quadratic_attention bf16
Co-authored-by: Aryan <aryan@huggingface.co>
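A short sketch of what the new tiling/slicing switches look like from the pipeline side; the Sana checkpoint id is an assumption.

```python
# Hedged sketch of the new AutoencoderDC tiling/slicing switches in a Sana
# pipeline; the checkpoint id is an assumption.
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers", torch_dtype=torch.bfloat16
).to("cuda")
pipe.vae.enable_tiling()   # decode latents tile by tile to cap peak memory
pipe.vae.enable_slicing()  # decode batched latents one sample at a time
image = pipe("a cyberpunk cat", num_inference_steps=20).images[0]
```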
-
Daniel Regado authored
Added support for IP-Adapter
-
- 15 Jan, 2025 6 commits
-
-
Leo Jiang authored
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
-
Sayak Paul authored
fix vae annotation in mochi pipeline
-
Sayak Paul authored
* add: test to check 8bit bnb quantized models work with lora loading
* Update tests/quantization/bnb/test_mixed_int8.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
Sayak Paul authored
* feat: support loading loras into 4bit quantized models. * updates * update * remove weight check.
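Roughly, the flow this enables looks like the sketch below; `BitsAndBytesConfig` is diffusers' bitsandbytes config class, and the LoRA repo id is a placeholder.

```python
# Hedged sketch: loading a LoRA on top of a 4-bit bitsandbytes-quantized
# transformer. The LoRA repo id is a placeholder.
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("your-username/your-flux-lora")  # placeholder repo id
```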
-
Aryan authored
* update * update
-
Daniel Regado authored
* Added support for IP-Adapter * Added joint_attention_kwargs property
-
- 14 Jan, 2025 4 commits
-
-
Junsong Chen authored
* [Sana 4K] add 4K support for Sana
* [Sana-4K] fix SanaPAGPipeline
* add automatic VAE tiling function
* set clean_caption to False
* add warnings for VAE OOM
* style
Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
Teriks authored
Co-authored-by: Teriks <Teriks@users.noreply.github.com>
-
Dhruv Nair authored
update
-
Marc Sun authored
* load and save dduf archive
* style
* switch to zip uncompressed
* updates
* Update src/diffusers/pipelines/pipeline_utils.py
* first draft
* remove print
* switch to dduf_file for consistency
* switch to huggingface hub api
* fix log
* add a basic test
* Update src/diffusers/configuration_utils.py
* fix variant
* change saving logic
* DDUF - Load transformers components manually (#10171): update hfh version, load transformers components manually, load encoder from_pretrained with state_dict, working version with transformers and tokenizer, add generation_config case, fix tests, remove saving for now, need next version from transformers
* check path correctly
* Apply suggestions from code review
* remove check for subfolder
* quality
* revert setup changes
* more readable condition
* add loading from the hub test
* add basic docs
* add example
* make functions private
* change the precedence of parameterized
* error out when custom pipeline is passed with dduf_file
* fix xfail condition
* sharded checkpoint compat
* add test for sharded checkpoint
* Update src/diffusers/models/model_loading_utils.py
* add class attributes to flag dduf tests
* fix logic
* remove comment
* revert changes
* assorted fixes, typing, and review follow-ups
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Lucain <lucain@huggingface.co>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
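The `dduf_file` argument named in the bullets above is the user-facing entry point; a minimal sketch follows, assuming a DDUF archive hosted on the Hub (the repo and file names are placeholders).

```python
# Hedged sketch of loading a pipeline from a DDUF archive via the dduf_file
# argument named in this PR; repo and file names are placeholders.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "DDUF/FLUX.1-dev-DDUF",       # placeholder Hub repo containing a .dduf archive
    dduf_file="FLUX.1-dev.dduf",  # single-file archive with all pipeline components
    torch_dtype=torch.bfloat16,
)
```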
-