- 18 Jun, 2025 1 commit
Sayak Paul authored
change to 2025 licensing for remaining
- 01 May, 2025 1 commit
Sayak Paul authored
xfail recent pipeline tests for specific methods.
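For context, the xfail marker referenced here is standard pytest; a minimal sketch of marking a pipeline test as expected-to-fail for a specific method (the test name and reason below are hypothetical, not the actual tests touched by this commit):

```python
# Minimal pytest sketch; the test name and reason are hypothetical examples.
import pytest

@pytest.mark.xfail(
    reason="encode_prompt()-based save/load is not supported for this pipeline yet",
    strict=False,  # do not fail the suite if the test unexpectedly passes
)
def test_save_load_optional_components():
    raise NotImplementedError("placeholder for the real pipeline test body")
```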
- 10 Apr, 2025 1 commit
Yao Matrix authored
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
- 20 Feb, 2025 1 commit
Sayak Paul authored
* poc encode_prompt() tests * fix * updates. * fixes * fixes * updates * updates * updates * revert * updates * updates * updates * updates * remove SDXLOptionalComponentsTesterMixin. * remove tests that directly leveraged encode_prompt() in some way or the other. * fix imports. * remove _save_load * fixes * fixes * fixes * fixes
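For context, encode_prompt() is the pipeline method these tests exercise; a hedged sketch of calling it directly, assuming the Stable Diffusion-style signature (exact arguments and return values vary across pipelines):

```python
# Hedged sketch, assuming the Stable Diffusion-style encode_prompt() signature.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="an astronaut riding a horse",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality",
)

# The precomputed embeddings can then be fed back into the pipeline call.
image = pipe(
    prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds
).images[0]
```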
- 22 Jan, 2025 1 commit
Aryan authored
* update * update * make style * remove dynamo disable * add coauthor Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com> * update * update * update * update mixin * add some basic tests * update * update * non_blocking * improvements * update * norm.* -> norm * apply suggestions from review * add example * update hook implementation to the latest changes from pyramid attention broadcast * deinitialize should raise an error * update doc page * Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * update docs * update * refactor * fix _always_upcast_modules for asym ae and vq_model * fix lumina embedding forward to not depend on weight dtype * refactor tests * add simple lora inference tests * _always_upcast_modules -> _precision_sensitive_module_patterns * remove todo comments about review; revert changes to self.dtype in unets because .dtype on ModelMixin should be able to handle fp8 weight case * check layer dtypes in lora test * fix UNet1DModelTests::test_layerwise_upcasting_inference * _precision_sensitive_module_patterns -> _skip_layerwise_casting_patterns based on feedback * skip test in NCSNppModelTests * skip tests for AutoencoderTinyTests * skip tests for AutoencoderOobleckTests * skip tests for UNet1DModelTests - unsupported pytorch operations * layerwise_upcasting -> layerwise_casting * skip tests for UNetRLModelTests; needs next pytorch release for currently unimplemented operation support * add layerwise fp8 pipeline test * use xfail * Apply suggestions from code review Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> * add assertion with fp32 comparison; add tolerance to fp8-fp32 vs fp32-fp32 comparison (required for a few models' test to pass) * add note about memory consumption on tesla CI runner for failing test
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
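A hedged sketch of the layerwise casting feature this commit adds tests for, assuming the enable_layerwise_casting() entry point on model components (weights stored in float8 and upcast to the compute dtype layer by layer during the forward pass):

```python
# Hedged sketch, assuming the enable_layerwise_casting() API on model components.
import torch
from diffusers import CogVideoXPipeline

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.transformer.enable_layerwise_casting(
    storage_dtype=torch.float8_e4m3fn,  # low-precision storage dtype
    compute_dtype=torch.bfloat16,       # dtype used during the forward pass
)
pipe.to("cuda")
video = pipe("a panda playing guitar", num_inference_steps=20).frames[0]
```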
- 21 Jan, 2025 1 commit
Fanli Lin authored
* initial commit * fix empty cache * fix one more * fix style * update device functions * update * update * Update src/diffusers/utils/testing_utils.py Co-authored-by: hlky <hlky@hlky.ac> * Update src/diffusers/utils/testing_utils.py Co-authored-by: hlky <hlky@hlky.ac> * Update src/diffusers/utils/testing_utils.py Co-authored-by: hlky <hlky@hlky.ac> * Update tests/pipelines/controlnet/test_controlnet.py Co-authored-by: hlky <hlky@hlky.ac> * Update src/diffusers/utils/testing_utils.py Co-authored-by: hlky <hlky@hlky.ac> * Update src/diffusers/utils/testing_utils.py Co-authored-by: hlky <hlky@hlky.ac> * Update tests/pipelines/controlnet/test_controlnet.py Co-authored-by: hlky <hlky@hlky.ac> * with gc.collect * update * make style * check_torch_dependencies * add mps empty cache * bug fix * Apply suggestions from code review
---------
Co-authored-by: hlky <hlky@hlky.ac>
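A minimal sketch of the kind of device-agnostic cache flushing these test utilities centralize; the helper name below is illustrative, not the exact function added to src/diffusers/utils/testing_utils.py:

```python
# Illustrative helper only; the real utilities live in
# src/diffusers/utils/testing_utils.py under different names.
import gc
import torch

def flush_accelerator_memory(device: str) -> None:
    # Drop Python-level references first, then release cached allocator blocks
    # on whichever accelerator backend the tests are running on.
    gc.collect()
    if device == "cuda" and torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif device == "xpu" and hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()
    elif device == "mps" and torch.backends.mps.is_available():
        torch.mps.empty_cache()
```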
- 14 Jan, 2025 1 commit
Marc Sun authored
* load and save dduf archive * style * switch to zip uncompressed * updates * Update src/diffusers/pipelines/pipeline_utils.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * Update src/diffusers/pipelines/pipeline_utils.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * first draft * remove print * switch to dduf_file for consistency * switch to huggingface hub api * fix log * add a basic test * Update src/diffusers/configuration_utils.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * Update src/diffusers/pipelines/pipeline_utils.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * Update src/diffusers/pipelines/pipeline_utils.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * fix * fix variant * change saving logic * DDUF - Load transformers components manually (#10171) * update hfh version * Load transformers components manually * load encoder from_pretrained with state_dict * working version with transformers and tokenizer! * add generation_config case * fix tests * remove saving for now * typing * need next version from transformers * Update src/diffusers/configuration_utils.py Co-authored-by: Lucain <lucain@huggingface.co> * check path correctly * Apply suggestions from code review Co-authored-by: Lucain <lucain@huggingface.co> * update * typing * remove check for subfolder * quality * revert setup changes * oops * more readable condition * add loading from the hub test * add basic docs. * Apply suggestions from code review Co-authored-by: Lucain <lucain@huggingface.co> * add example * add * make functions private * Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * minor. * fixes * fix * change the precedence of parameterized. * error out when custom pipeline is passed with dduf_file. * updates * fix * updates * fixes * updates * fix xfail condition. * fix xfail * fixes * sharded checkpoint compat * add test for sharded checkpoint * add suggestions * Update src/diffusers/models/model_loading_utils.py Co-authored-by: YiYi Xu <yixu310@gmail.com> * from suggestions * add class attributes to flag dduf tests * last one * fix logic * remove comment * revert changes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Lucain <lucain@huggingface.co>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
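A hedged sketch of loading a pipeline from a DDUF archive via the dduf_file argument this commit introduces; the repository and file names below are placeholders:

```python
# Hedged sketch; "org/model-dduf" and "model.dduf" are placeholder names for a
# Hub repository that ships its pipeline components inside a single DDUF archive.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "org/model-dduf",        # placeholder Hub repo id
    dduf_file="model.dduf",  # archive inside the repo to load components from
    torch_dtype=torch.float16,
)
```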
- 25 Jul, 2024 1 commit
Sayak Paul authored
* check for assertions. * update with correct slices. * okay * style * get it ready * update * update * update
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
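For context, a hedged sketch of the kind of output-slice assertion these fast tests rely on; the checkpoint, prompt, and expected values below are placeholders, not the reference slices updated in this commit:

```python
# Hedged sketch; expected_slice holds placeholder values, not real references.
import numpy as np
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a cat", num_inference_steps=2, output_type="np").images
image_slice = image[0, -3:, -3:, -1].flatten()
expected_slice = np.full(9, 0.5)  # placeholder reference values
assert np.abs(image_slice - expected_slice).max() < 1e-2
```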
- 24 May, 2024 1 commit
Tolga Cangöz authored
* Fix typos * Fix `pipe.enable_model_cpu_offload()` usage * Fix cpu offloading * Update numbers
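The enable_model_cpu_offload() usage being fixed here is the standard diffusers pattern; a minimal sketch (the model id is chosen for illustration):

```python
# Minimal sketch of the corrected usage: call enable_model_cpu_offload() on the
# pipeline instead of moving the whole pipeline to CUDA with .to("cuda").
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # components move to GPU only while they run
image = pipe("a photo of a cat").images[0]
```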
- 29 Mar, 2024 1 commit
Dhruv Nair authored
* update * update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 28 Feb, 2024 1 commit
elucida authored
* move model helper function in pipeline to EfficiencyMixin
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 08 Feb, 2024 2 commits
Sayak Paul authored
change to 2024
Sayak Paul authored
* attention_head_dim * debug * print more info * correct num_attention_heads behaviour * down_block_num_attention_heads -> num_attention_heads. * correct the image link in doc. * add: deprecation for num_attention_head * fix: test argument to use attention_head_dim * more fixes. * quality * address comments. * remove deprecation.
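An illustrative sketch (not the actual diffusers code) of how an old attention_head_dim argument can be mapped onto num_attention_heads with a deprecation warning:

```python
# Illustrative only; the real handling lives inside the affected model classes.
import warnings

def resolve_num_attention_heads(num_attention_heads=None, attention_head_dim=None):
    # Older configs passed the head count through `attention_head_dim`; forward
    # it to the new argument name and warn about the upcoming removal.
    if num_attention_heads is None and attention_head_dim is not None:
        warnings.warn(
            "`attention_head_dim` is deprecated for this argument; "
            "use `num_attention_heads` instead.",
            FutureWarning,
        )
        num_attention_heads = attention_head_dim
    return num_attention_heads
```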
- 31 Jan, 2024 1 commit
Sayak Paul authored
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>