- 16 Sep, 2024 1 commit
suzukimain authored
* [docs] Replace runwayml/stable-diffusion-v1-5 with Lykon/dreamshaper-8. Updated the documentation because runwayml/stable-diffusion-v1-5 has been removed from the Hugging Face Hub.
* Update docs/source/en/using-diffusers/inpaint.md
* Replace with stable-diffusion-v1-5/stable-diffusion-v1-5
* Update inpaint.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

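The replacement checkpoint loads the same way as the removed one; a minimal sketch of the updated inpainting setup, assuming the `stable-diffusion-v1-5/stable-diffusion-v1-5` mirror referenced above:

```python
import torch
from diffusers import AutoPipelineForInpainting

# The runwayml checkpoint was removed from the Hub; the mirror under the
# stable-diffusion-v1-5 org is used as a drop-in replacement in the docs.
pipeline = AutoPipelineForInpainting.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)
```
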
- 08 Aug, 2024 1 commit
Steven Liu authored
* toctree
* fix

- 20 May, 2024 1 commit
Jacob Marks authored

- 06 May, 2024 1 commit
Steven Liu authored
* combine
* edits

- 25 Feb, 2024 1 commit
Steven Liu authored
* updates
* feedback

- 08 Feb, 2024 1 commit
Sayak Paul authored
change to 2024

- 09 Nov, 2023 1 commit
M. Tolga Cangöz authored
* Fix typos, update, trim trailing whitespace
* Trim trailing whitespaces
* Update docs/source/en/optimization/memory.md
* Update docs/source/en/optimization/memory.md
* Update _toctree.yml
* Update adapt_a_model.md
* Reverse
* Reverse
* Reverse
* Update dreambooth.md
* Update instructpix2pix.md
* Update lora.md
* Update overview.md
* Update t2i_adapters.md
* Update text2image.md
* Update text_inversion.md
* Update create_dataset.md
* Update create_dataset.md
* Update create_dataset.md
* Update create_dataset.md
* Update coreml.md
* Delete docs/source/en/training/create_dataset.md
* Original create_dataset.md
* Update create_dataset.md
* Delete docs/source/en/training/create_dataset.md
* Add original file
* Delete docs/source/en/training/create_dataset.md
* Add original one
* Delete docs/source/en/training/text2image.md
* Delete docs/source/en/training/instructpix2pix.md
* Delete docs/source/en/training/dreambooth.md
* Add original files
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

- 13 Sep, 2023 1 commit
Steven Liu authored
* refactor
* update general optim sections
* update more sections
* few more updates
* benchmark code

- 10 Aug, 2023 2 commits
Steven Liu authored
* add safetensors flag
* apply review

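The flag documented here is the `use_safetensors` argument on `from_pretrained`; a minimal sketch, with the checkpoint name chosen only for illustration:

```python
from diffusers import DiffusionPipeline

# Prefer the .safetensors weight files over pickled .bin files when loading.
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", use_safetensors=True
)
```
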
Steven Liu authored
* remove attention slicing
* apply feedback

- 26 Jul, 2023 1 commit
camenduru authored
* why mdx?
* why mdx?
* why mdx?
* no x for kandinsky either
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

- 26 May, 2023 1 commit
Steven Liu authored
* doc fixes
* fix latex
* parenthesis on inside

- 15 May, 2023 1 commit
Pedro Cuenca authored
* Fix style rendering.
* Fix typo

- 27 Apr, 2023 1 commit
Will Berman authored
* [docs] add notes for stateful model changes
* Update docs/source/en/optimization/fp16.mdx
* link to accelerate docs for discarding hooks
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

- 28 Mar, 2023 2 commits
dg845 authored
* Change the docs to use the parent DiffusionPipeline class when loading a checkpoint using from_pretrained() instead of a child class (e.g. StableDiffusionPipeline) where possible.
* Run make style to fix style issues.
* Change more docs to use DiffusionPipeline rather than a subclass.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

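The pattern the docs moved to, sketched briefly: `DiffusionPipeline.from_pretrained` reads the checkpoint's `model_index.json` and instantiates the right subclass automatically, so docs don't need to name a specific pipeline class (checkpoint name illustrative):

```python
from diffusers import DiffusionPipeline

# The parent class resolves the concrete pipeline from the checkpoint's
# model_index.json (here it returns a StableDiffusionPipeline).
pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
print(type(pipeline).__name__)
```
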
Sandeep authored
* Remove suggestion to use cuDNN benchmark in docs
* removing the wrong line

- 20 Mar, 2023 1 commit
M. Tolga Cangöz authored
Fix typos

- 02 Mar, 2023 1 commit
Ilmari Heikkinen authored
* Tiled VAE for high-res text2img and img2img
* vae tiling, fix formatting
* enable_vae_tiling API and tests
* tiled vae docs, disable tiling for images that would have only one tile
* tiled vae tests, use channels_last memory format
* tiled vae tests, use smaller test image
* tiled vae tests, remove tiling test from fast tests
* up
* up
* make style
* Apply suggestions from code review
* Apply suggestions from code review
* Apply suggestions from code review
* make style
* improve naming
* finish
* apply suggestions
* Apply suggestions from code review
* up
Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

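A minimal sketch of the `enable_vae_tiling` API this commit introduced, with an illustrative checkpoint and prompt:

```python
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Encode/decode latents tile by tile so high-resolution images fit in VRAM;
# per the commit, tiling is skipped for images that would have only one tile.
pipeline.enable_vae_tiling()
image = pipeline("a cityscape at dusk", height=1024, width=1024).images[0]
```
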
- 01 Mar, 2023 1 commit
Patrick von Platen authored

- 16 Feb, 2023 1 commit
Pedro Cuenca authored
* enable_model_offload PoC. It's surprisingly more involved than expected, see comments in the PR.
* Rename final_offload_hook
* Invoke the vae forward hook manually.
* Completely remove decoder.
* Style
* apply_forward_hook decorator
* Rename method.
* Style
* Copy enable_model_cpu_offload
* Fix copies.
* Remove comment.
* Fix copies
* Missing import
* Fix doc-builder style.
* Merge main and fix again.
* Add docs
* Fix docs.
* Add a couple of tests.
* style

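What this landed as, in sketch form: model-level CPU offload keeps whole submodels (UNet, VAE, text encoder) on CPU and moves each to the GPU only for its forward pass, trading a little speed for memory. Checkpoint name illustrative:

```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Offload whole submodels; don't call .to("cuda") on the pipeline afterwards,
# the accelerate hooks manage device placement.
pipeline.enable_model_cpu_offload()
image = pipeline("an astronaut riding a horse").images[0]
```
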
- 07 Feb, 2023 1 commit
Patrick von Platen authored
* before running make style
* remove left overs from flake8
* finish
* make fix-copies
* final fix
* more fixes

- 17 Jan, 2023 1 commit
Patrick von Platen authored
no more autocast

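The docs change reflects the recommended pattern: load the weights directly in half precision instead of wrapping inference in `torch.autocast`. A sketch, with an illustrative checkpoint:

```python
import torch
from diffusers import DiffusionPipeline

# Load fp16 weights directly; no `with torch.autocast("cuda"):` wrapper needed.
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipeline("a watercolor of a lighthouse").images[0]
```
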
- 12 Jan, 2023 1 commit
Patrick von Platen authored
* [CPU offload] correct cpu offload
* [CPU offload] correct cpu offload
* finish
* finish
* Update docs/source/en/optimization/fp16.mdx
* Update src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

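The offload being corrected here is the sequential variant, which streams individual weights on and off the GPU for the lowest memory footprint at a noticeable speed cost; a minimal sketch (checkpoint illustrative):

```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Offload parameters to CPU and move them to the GPU layer by layer:
# maximum memory savings, slowest inference.
pipeline.enable_sequential_cpu_offload()
image = pipeline("a photo of a red panda").images[0]
```
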
- 04 Jan, 2023 1 commit
Chanran Kim authored
* init for korean docs
* edit build yml file for multi language docs
* edit one more build yml file for multi language docs
* add title for get_frontmatter error

- 19 Dec, 2022 1 commit
Patrick von Platen authored

- 16 Dec, 2022 1 commit
Pedro Cuenca authored
* Fix links to flash attention.
* Add xformers installation instructions.
* Make link to xformers install more prominent.
* Link to xformers install from training docs.

- 29 Nov, 2022 1 commit
Ilmari Heikkinen authored
* StableDiffusion: Decode latents separately to run larger batches
* Move VAE sliced decode under enable_vae_sliced_decode and vae.enable_sliced_decode
* Rename sliced_decode to slicing
* fix whitespace
* fix quality check and repository consistency
* VAE slicing tests and documentation
* API doc hooks for VAE slicing
* reformat vae slicing tests
* Skip VAE slicing for one-image batches
* Documentation tweaks for VAE slicing
Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>

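As the rename bullets indicate, the API settled on `enable_vae_slicing`, which decodes a batch of latents one image at a time; a short sketch (checkpoint and prompts illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Decode the batch one image at a time so large batches don't OOM in the VAE;
# per the commit, single-image batches are decoded normally.
pipeline.enable_vae_slicing()
images = pipeline(["a forest path"] * 8).images
```
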
- 02 Nov, 2022 1 commit
MatthieuTPHR authored
* 2x speedup using memory efficient attention
* remove einops dependency
* Swap K, M in op instantiation
* Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter
* make xformers a soft dependency
* remove one-liner functions
* change one letter variable to appropriate names
* Remove Env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method
* Add memory efficient attention toggle to img2img and inpaint pipelines
* Clearer management of xformers' availability
* update optimizations markdown to add info about memory efficient attention
* add benchmarks for TITAN RTX
* More detailed explanation of how the mem eff benchmarks were run
* Removing autocast from optimization markdown
* import_utils: import torch only if it is available
Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>

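The method that survived the refactor is the documented toggle; a minimal sketch, assuming xformers is installed (`pip install xformers`) and a CUDA GPU is available:

```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Route attention through xformers' memory-efficient kernels
# (the ~2x speedup the commit title refers to, hardware permitting).
pipeline.enable_xformers_memory_efficient_attention()
image = pipeline("a macro photo of a dragonfly").images[0]
```
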
- 29 Oct, 2022 1 commit
Minwoo Byeon authored

- 27 Oct, 2022 1 commit
Pi Esposito authored
* document cpu offloading method
* address review comments
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

- 24 Oct, 2022 1 commit
apolinario authored
* Update README.md: additionally add FLAX so the model card can be slimmer and point to this page
* Find and replace all
* v-1-5 -> v1-5
* revert test changes
* Update README.md
* Update docs/source/quicktour.mdx
* Update README.md
* Update docs/source/quicktour.mdx
* Update README.md
* Revert certain references to v1-5
* Docs changes
* Apply suggestions from code review
Co-authored-by: apolinario <joaopaulo.passos+multimodal@gmail.com>
Co-authored-by: anton-l <anton@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>

- 05 Oct, 2022 2 commits
Patrick von Platen authored
up

Patrick von Platen authored
* up
* uP
* uP
* make style
* Apply suggestions from code review
* up
* finish

- 04 Oct, 2022 1 commit
Yuta Hayashibe authored
* Fix typos
* Update examples/dreambooth/train_dreambooth.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

- 30 Sep, 2022 2 commits
Nouamane Tazi authored

Nouamane Tazi authored
* initial commit
* make UNet stream capturable
* try to fix noise_pred value
* remove cuda graph and keep NB
* non blocking unet with PNDMScheduler
* make timesteps np arrays for pndm scheduler because lists don't get formatted to tensors in `self.set_format`
* make max async in pndm
* use channel last format in unet
* avoid moving timesteps device in each unet call
* avoid memcpy op in `get_timestep_embedding`
* add `channels_last` kwarg to `DiffusionPipeline.from_pretrained`
* update TODO
* replace `channels_last` kwarg with `memory_format` for more generality
* revert the channels_last changes to leave it for another PR
* remove non_blocking when moving input ids to device
* remove blocking from all .to() operations at beginning of pipeline
* fix merging
* fix merging
* model can run in other precisions without autocast
* attn refactoring
* Revert "attn refactoring" (reverts commit 0c70c0e)
* remove restriction to run conv_norm in fp32
* use `baddbmm` instead of `matmul` in attention for better perf
* removing all reshapes to test perf
* Revert "removing all reshapes to test perf" (reverts commit 006ccb8)
* add shapes comments
* hardcore whats needed for jitting
* Revert "hardcore whats needed for jitting" (reverts commit 2fa9c69)
* Revert "remove restriction to run conv_norm in fp32" (reverts commit cec5928)
* revert using baddbmm in attention's forward
* cleanup comment
* remove restriction to run conv_norm in fp32; no quality loss was noticed (reverts commit cc9bc13)
* add more optimization techniques to docs
* Revert "add shapes comments" (reverts commit 31c58ea)
* apply suggestions
* make quality
* apply suggestions
* styling
* `scheduler.timesteps` are now arrays so we don't need .to()
* remove useless .type()
* use mean instead of max in `test_stable_diffusion_inpaint_pipeline_k_lms`
* move scheduler timestamps to correct device if tensors
* add device to `set_timesteps` in LMSD scheduler
* `self.scheduler.set_timesteps` now uses device arg for schedulers that accept it
* quick fix
* styling
* remove kwargs from schedulers' `set_timesteps`
* revert to using max in K-LMS inpaint pipeline test
* Revert "`self.scheduler.set_timesteps` now uses device arg for schedulers that accept it" (reverts commit 00d5a51)
* move timesteps to correct device before loop in SD pipeline
* apply previous fix to other SD pipelines
* UNet now accepts tensor timesteps even on the wrong device, to avoid errors; it shouldn't affect performance if timesteps are already on the correct device, though it does slow things down if they're on the wrong device
* fix pipeline when timesteps are arrays with strides

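One idea explored above, channels_last layout for the UNet, was reverted from `from_pretrained` but remains available as a manual plain-PyTorch tweak; a hedged sketch with an illustrative checkpoint:

```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Convolution-heavy models can run faster in NHWC ("channels last") memory
# layout on some hardware; since the kwarg was reverted, apply it by hand.
pipeline.unet.to(memory_format=torch.channels_last)
image = pipeline("an isometric pixel-art castle").images[0]
```
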
- 08 Sep, 2022 1 commit
Pedro Cuenca authored
* Initial version of `fp16` page.
* Fix typo in README.
* Change titles of fp16 section in toctree.
* PR suggestion
* PR suggestion
* Clarify attention slicing is useful even for batches of 1. Explained by @patrickvonplaten after a suggestion by @keturn.
* Do not talk about `batches` in `enable_attention_slicing`.
* Use Tip (just for fun), add link to method.
* Comment about fp16 results looking the same as float32 in practice.
* Style: docstring line wrapping.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

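The method those bullets discuss is `enable_attention_slicing`; a minimal sketch (checkpoint illustrative) of why it helps even at batch size 1, since the attention computation itself is split into sequential steps:

```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Compute attention in slices instead of one large matmul; saves memory
# even for a single image, at a small speed cost.
pipeline.enable_attention_slicing()
image = pipeline("a charcoal sketch of an owl").images[0]
```
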
- 07 Sep, 2022 1 commit
Patrick von Platen authored

- 13 Jul, 2022 1 commit
Nathan Lambert authored
* first pass at docs structure
* minor reformatting, add github actions for docs
* populate docs (primarily from README, some writing)