- 03 May, 2024 1 commit
HelloWorldBeginner authored
Add Ascend NPU support for SDXL fine-tuning and fix the model saving bug when using DeepSpeed. (#7816)
* Add Ascend NPU support for SDXL fine-tuning and fix the model saving bug when using DeepSpeed.
* fix check code quality
* Decouple the NPU flash attention and make it an independent module.
* add doc and unit tests for npu flash attention.

Co-authored-by: mhh001 <mahonghao1@huawei.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
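For reference, a minimal usage sketch of the NPU flash-attention path this commit adds. The `enable_npu_flash_attention()` helper name and the `npu` device handling are assumptions based on the commit description and may differ in the released diffusers API; the `torch_npu` plugin is assumed to be installed:

```python
# Hedged sketch: run SDXL attention through the Ascend NPU flash-attention
# processor added here. enable_npu_flash_attention() is assumed from the
# commit message and may differ in your diffusers release.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.to("npu")  # Ascend NPU device, provided by the torch_npu plugin

# Swap the UNet's attention processors for the NPU flash-attention variant.
pipe.unet.enable_npu_flash_attention()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```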
- 25 Feb, 2024 1 commit
Steven Liu authored
* updates
* feedback
- 17 Feb, 2024 1 commit
Steven Liu authored
* first draft
* fix path
* fix path
* i2vgen-xl
* review
* modelscopet2v
* feedback
- 08 Feb, 2024 1 commit
Sayak Paul authored
change to 2024
- 06 Dec, 2023 1 commit
Sayak Paul authored
* debug
* from step
* print
* turn sigma a list
* make str
* init_noise_sigma
* comment
* remove prints
* feat: introduce fused projections
* change to a better name
* no grad
* device.
* device
* dtype
* okay
* print
* more print
* fix: unbind -> split
* fix: qkv >-> k
* enable disable
* apply attention processor within the method
* attn processors
* _enable_fused_qkv_projections
* remove print
* add fused projection to vae
* add todos.
* add: documentation and cleanups.
* add: test for qkv projection fusion.
* relax assertions.
* relax further
* fix: docs
* fix-copies
* correct error message.
* Empty-Commit
* better conditioning on disable_fused_qkv_projections
* check
* check processor
* bfloat16 computation.
* check latent dtype
* style
* remove copy temporarily
* cast latent to bfloat16
* fix: vae -> self.vae
* remove print.
* add _change_to_group_norm_32
* comment out stuff that didn't work
* Apply suggestions from code review
* reflect patrick's suggestions.
* fix imports
* fix: disable call.
* fix more
* fix device and dtype
* fix conditions.
* fix more
* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
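For context, a hedged sketch of the fused QKV projection feature introduced here: `fuse_qkv_projections()` merges each attention block's separate query/key/value linear layers into a single projection on both the UNet and the VAE, which pairs well with torch 2.0 scaled dot-product attention. Method names follow the commit message and may differ slightly across diffusers versions:

```python
# Hedged sketch of fused QKV projections on an SDXL pipeline.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

pipe.fuse_qkv_projections()    # fuse Q/K/V projections in the UNet and VAE attention blocks
image = pipe("a cinematic photo of a red panda").images[0]
pipe.unfuse_qkv_projections()  # restore the original, unfused projection layers
```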
- 13 Nov, 2023 1 commit
M. Tolga Cangöz authored
* Fix typos, update, add Copyright info, and trim trailing whitespaces
* Update docs/source/en/api/loaders.md
* Update docs/source/en/api/models/autoencoder_tiny.md
* Update docs/source/en/api/models/autoencoder_tiny.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
- 18 Sep, 2023 1 commit
Ruoxi authored
* Implement `CustomDiffusionAttnProcessor2_0`
* Doc-strings and type annotations for `CustomDiffusionAttnProcessor2_0`. (#1)
* Update attnprocessor.md
* Update attention_processor.py
* Interops for `CustomDiffusionAttnProcessor2_0`.
* Formatted `attention_processor.py`.
* Formatted doc-string in `attention_processor.py`
* Conditional CustomDiffusion2_0 for training example.
* Remove unnecessary reference impl in comments.
* Fix `save_attn_procs`.
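A hedged sketch of how the new processor is typically attached, modeled on the Custom Diffusion training example; the constructor arguments (`train_kv`, `train_q_out`, `hidden_size`, `cross_attention_dim`) mirror the pre-2.0 `CustomDiffusionAttnProcessor` and may differ in other diffusers versions:

```python
# Hedged sketch: install the SDPA-based Custom Diffusion processor on the
# UNet's cross-attention layers, plain AttnProcessor2_0 elsewhere.
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import (
    AttnProcessor2_0,
    CustomDiffusionAttnProcessor2_0,
)

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

procs = {}
for name in unet.attn_processors:
    if name.endswith("attn2.processor"):  # cross-attention layers
        # Derive this block's hidden size from the UNet config.
        if name.startswith("mid_block"):
            hidden_size = unet.config.block_out_channels[-1]
        elif name.startswith("up_blocks"):
            hidden_size = list(reversed(unet.config.block_out_channels))[int(name[len("up_blocks.")])]
        else:  # down_blocks
            hidden_size = unet.config.block_out_channels[int(name[len("down_blocks.")])]
        procs[name] = CustomDiffusionAttnProcessor2_0(
            train_kv=True,
            train_q_out=False,
            hidden_size=hidden_size,
            cross_attention_dim=unet.config.cross_attention_dim,
        )
    else:
        procs[name] = AttnProcessor2_0()  # self-attention stays on plain SDPA

unet.set_attn_processor(procs)
```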
- 26 Jul, 2023 1 commit
camenduru authored
* why mdx?
* why mdx?
* why mdx?
* no x for kandinsky either

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 06 Jun, 2023 1 commit
Sayak Paul authored
* feat: add lora attention processor for pt 2.0.
* explicit context manager for SDPA.
* switch to flash attention
* make shapes compatible to work optimally with SDPA.
* fix: circular import problem.
* explicitly specify the flash attention kernel in sdpa
* fall back to efficient attention context manager.
* remove explicit dispatch.
* fix: removed processor.
* fix: remove optional from type annotation.
* feat: make changes regarding LoRAAttnProcessor2_0.
* remove confusing warning.
* formatting.
* relax tolerance for PT 2.0
* fix: loading message.
* remove unnecessary logging.
* add: entry to the docs.
* add: network_alpha argument.
* relax tolerance.
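As background, a hedged illustration of the core mechanism the new processor relies on: torch 2.0's `F.scaled_dot_product_attention` (SDPA), plus the `torch.backends.cuda.sdp_kernel` context manager that the intermediate commits experiment with before removing the explicit dispatch. Tensor shapes here are illustrative only:

```python
# Hedged illustration: SDPA with default kernel dispatch vs. an explicit
# flash-attention-only context manager (torch 2.x API; requires CUDA + fp16/bf16).
import torch
import torch.nn.functional as F

q = torch.randn(2, 8, 1024, 64, device="cuda", dtype=torch.float16)  # (batch, heads, seq_len, head_dim)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Default: PyTorch picks the best available kernel (what the commit settles on).
out = F.scaled_dot_product_attention(q, k, v)

# Explicit dispatch, tried and then dropped in this PR: restrict SDPA to the flash kernel.
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    out_flash = F.scaled_dot_product_attention(q, k, v)
```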
- 26 May, 2023 1 commit
Steven Liu authored
* add attnprocessor to docs
* fix path to class
* create separate page for attnprocessors
* fix path
* fix path for real
* fill in docstrings
* apply feedback
* apply feedback