- 13 Dec, 2022 1 commit
-
Patrick von Platen authored
* [SD] Make sure batched input works correctly * uP * uP * up * up * uP * up * fix mask stuff * up * uP * more up * up * uP * up * finish * Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
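For context, a minimal sketch of the batched call this change targets; the model id and prompts are placeholders, and behaviour may differ from the exact state of the code at this commit.

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint; any Stable Diffusion checkpoint works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A list of prompts is denoised as one batch; num_images_per_prompt expands it further.
prompts = ["a photo of an astronaut riding a horse", "a watercolor of a lighthouse"]
images = pipe(prompts, num_images_per_prompt=2).images  # 4 images in total
```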
-
- 12 Dec, 2022 1 commit
-
Kangfu Mei authored
* fix bug if we don't do_classifier_free_guidance * update for copied diffusers.pipelines.alt_diffusion.pipeline_alt_diffusion.AltDiffusionPipeline
-
- 05 Dec, 2022 2 commits
-
Pedro Cuenca authored
* Fix typo in pipeline_stable_diffusion.py: fixes a typo in a warning message * Fix copies. * Fix copies
Co-authored-by: Scott <scott@scottinallca.ps>
-
Suraj Patil authored
* make attn slice recursive * remove set_attention_slice from blocks * fix copies * make enable_attention_slicing base class method of DiffusionPipeline * fix set_attention_slice * fix set_attention_slice * fix copies * add tests * up * up * up * update * up * uP
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
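Since enable_attention_slicing is promoted here to a base DiffusionPipeline method, a hedged usage sketch (model id and prompt are placeholders):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

# Compute attention in slices rather than one large matmul to lower peak memory.
pipe.enable_attention_slicing()  # defaults to "auto" slice size
image = pipe("a small cabin in a snowy forest").images[0]

# Turn it back off when memory headroom is not a concern.
pipe.disable_attention_slicing()
```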
-
- 02 Dec, 2022 2 commits
-
Patrick von Platen authored
* up * up * finish * finish * up * up * finish
-
Benjamin Lefaudeux authored
* Moving the mem efficient attention activation to the top + recursive * black, too bad there's no pre-commit?
Co-authored-by: Benjamin Lefaudeux <benjamin@photoroom.com>
-
- 29 Nov, 2022 1 commit
-
Ilmari Heikkinen authored
* StableDiffusion: Decode latents separately to run larger batches * Move VAE sliced decode under enable_vae_sliced_decode and vae.enable_sliced_decode * Rename sliced_decode to slicing * fix whitespace * fix quality check and repository consistency * VAE slicing tests and documentation * API doc hooks for VAE slicing * reformat vae slicing tests * Skip VAE slicing for one-image batches * Documentation tweaks for VAE slicing
Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>
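The sliced VAE decode added here is exposed (in current diffusers) as enable_vae_slicing; a rough sketch, with placeholder model id and prompt:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

# Decode latents one image at a time so large batches fit in memory;
# per the commit above, single-image batches skip slicing entirely.
pipe.enable_vae_slicing()
images = pipe(["a red bicycle leaning on a brick wall"] * 8).images
```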
-
- 28 Nov, 2022 2 commits
-
Patrick von Platen authored
* Add heun * Finish first version of heun * remove bogus * finish * finish * improve * up * up * fix more * change progress bar * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py * finish * up * up * up
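The Heun sampler added here ships (in current diffusers) as HeunDiscreteScheduler; a hedged sketch of swapping it into a pipeline, with a placeholder model id:

```python
from diffusers import HeunDiscreteScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Reuse the existing scheduler config so the noise-schedule parameters carry over.
pipe.scheduler = HeunDiscreteScheduler.from_config(pipe.scheduler.config)
image = pipe("an isometric pixel-art castle", num_inference_steps=30).images[0]
```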
-
Patrick von Platen authored
Remove kwargs from call
-
- 25 Nov, 2022 2 commits
-
Patrick von Platen authored
-
Patrick von Platen authored
* up * uP
-
- 24 Nov, 2022 5 commits
-
Anton Lozhkov authored
* Support SD2 attention slicing * Support SD2 attention slicing * Add more copies * Use attn_num_head_channels in blocks * fix-copies * Update tests * fix imports
-
Patrick von Platen authored
* up * up * fix * uP * more fixes * up * uP * up * up * uP * fix final tests
-
Patrick von Platen authored
* Upscaling fixed * up * more fixes * fix * more fixes * finish again * up
-
Patrick von Platen authored
* Optional Components * uP * finish * finish * finish * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * up * Update src/diffusers/pipeline_utils.py * improve
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
Patrick von Platen authored
* fix * add test * fix test * uP * up * fix some tests
-
- 22 Nov, 2022 1 commit
-
regisss authored
-
- 17 Nov, 2022 1 commit
-
Patrick von Platen authored
-
- 15 Nov, 2022 1 commit
-
Patrick von Platen authored
* add conversion script for vae * up * up * some fixes * add text model * use the correct config * add docs * move model in its own file * move model in its own file * pass attention mask to text encoder * pass attn mask to uncond inputs * quality * fix image2image * add image2image in init * fix import * fix one more import * fix import, dummy objects * fix copied from * up * finish
Co-authored-by: patil-suraj <surajp815@gmail.com>
-
- 13 Nov, 2022 2 commits
-
Patrick von Platen authored
* finish * cleaner * more fixes * refactor * make fix copies * refactor cycle diffusion * finish * finish2 * Apply suggestions from code review
-
Patrick von Platen authored
* [Stable Diffusion] Fix padding / truncation * finish
-
- 09 Nov, 2022 3 commits
-
Patrick von Platen authored
* up * more fixes * fix * finalize * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py * upload models * up
-
Anton Lozhkov authored
* Fix cpu offloading * get offloaded devices locally for SD pipelines
-
Nathan Lambert authored
add licenses
-
- 08 Nov, 2022 1 commit
-
Pedro Cuenca authored
* Schedulers: don't use float64 on mps * Test set_timesteps() on device (float schedulers). * SD pipeline: use device in set_timesteps. * SD in-painting pipeline: use device in set_timesteps. * Tests: fix mps crashes. * Skip test_load_pipeline_from_git on mps. Not compatible with float16. * Use device.type instead of str in Euler schedulers.
-
- 07 Nov, 2022 1 commit
-
Duong A. Nguyen authored
fix typo
-
- 06 Nov, 2022 1 commit
-
Cheng Lu authored
* add dpmsolver discrete pytorch scheduler * fix some typos in dpm-solver pytorch * add dpm-solver pytorch in stable-diffusion pipeline * add jax/flax version dpm-solver * change code style * change code style * add docs * add `add_noise` method for dpmsolver * add pytorch unit test for dpmsolver * add dummy object for pytorch dpmsolver * Update src/diffusers/schedulers/scheduling_dpmsolver_discrete.py Co-authored-by: Suraj Patil <surajp815@gmail.com> * Update tests/test_config.py Co-authored-by: Suraj Patil <surajp815@gmail.com> * Update tests/test_config.py Co-authored-by: Suraj Patil <surajp815@gmail.com> * resolve the code comments * rename the file * change class name * fix code style * add auto docs for dpmsolver multistep * add more explanations for the stabilizing trick (for steps < 15) * delete the dummy file * change the API name of predict_epsilon, algorithm_type and solver_type * add compatible lists
Co-authored-by: Suraj Patil <surajp815@gmail.com>
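After the renames mentioned above, the scheduler introduced here is exposed as DPMSolverMultistepScheduler in current diffusers; a hedged sketch with a placeholder model id:

```python
from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# DPM-Solver++ needs far fewer steps than the default scheduler for comparable quality.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
image = pipe("a macro photo of a dew-covered leaf", num_inference_steps=20).images[0]
```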
-
- 03 Nov, 2022 1 commit
-
Pedro Cuenca authored
* remove batch size from repeat * repeat empty string if uncond_tokens is none * fix inpaint pipes * return back whitespace to pass code quality * Apply suggestions from code review * Fix typos.
Co-authored-by: Had <had-95@yandex.ru>
-
- 02 Nov, 2022 1 commit
-
MatthieuTPHR authored
* 2x speedup using memory efficient attention * remove einops dependency * Swap K, M in op instantiation * Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter * make xformers a soft dependency * remove one-liner functions * change one letter variable to appropriate names * Remove Env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method * Add memory efficient attention toggle to img2img and inpaint pipelines * Clearer management of xformers' availability * update optimizations markdown to add info about memory efficient attention * add benchmarks for TITAN RTX * More detailed explanation of how the mem eff benchmarks were run * Removing autocast from optimization markdown * import_utils: import torch only if it is available
Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
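A hedged sketch of the toggle this change converges on, enable_xformers_memory_efficient_attention; it assumes the optional xformers package is installed and uses a placeholder model id:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

# Route cross-attention through xformers' memory-efficient kernels;
# this raises an error if xformers is not installed.
pipe.enable_xformers_memory_efficient_attention()
image = pipe("a foggy mountain pass at dawn").images[0]
```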
-
- 31 Oct, 2022 3 commits
-
Patrick von Platen authored
* [Better scheduler docs] Improve usage examples of schedulers * finish * fix warnings and add test * finish * more replacements * adapt fast tests hf token * correct more * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * Integrate compatibility with euler
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
hlky authored
* k-diffusion-euler * make style make quality * make fix-copies * fix tests for euler a * Update src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com> * Update src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com> * Update src/diffusers/schedulers/scheduling_euler_discrete.py Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com> * Update src/diffusers/schedulers/scheduling_euler_discrete.py Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com> * remove unused arg and method * update doc * quality * make flake happy * use logger instead of warn * raise error instead of deprecation * don't require scipy * pass generator in step * fix tests * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * Update tests/test_scheduler.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * remove unused generator * pass generator as extra_step_kwargs * update tests * pass generator as kwarg * pass generator as kwarg * quality * fix test for lms * fix tests
Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
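The two samplers added here are available as EulerDiscreteScheduler and EulerAncestralDiscreteScheduler; a hedged sketch of swapping them in, with a placeholder model id:

```python
from diffusers import (
    EulerAncestralDiscreteScheduler,
    EulerDiscreteScheduler,
    StableDiffusionPipeline,
)

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Deterministic Euler sampling ("euler" in k-diffusion terms) ...
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

# ... or the ancestral variant ("euler a"), which injects fresh noise at each step
# and therefore accepts the generator the pipeline forwards via extra_step_kwargs.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
image = pipe("a stained-glass window of a fox", num_inference_steps=30).images[0]
```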
-
Pedro Cuenca authored
Allow None safety_checker when using CPU offload.
-
- 28 Oct, 2022 1 commit
-
Patrick von Platen authored
* up * up * up * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py * Apply suggestions from code review
-
- 27 Oct, 2022 2 commits
-
Pi Esposito authored
* document cpu offloading method * address review comments
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Patrick von Platen authored
* [Accelerate model loading] Fix meta device and super low memory usage * better naming
-
- 26 Oct, 2022 1 commit
-
Pi Esposito authored
* add method to enable cuda with minimal gpu usage to stable diffusion * add test to minimal cuda memory usage * ensure all models but unet are on torch.float32 * move to cpu_offload along with minor internal changes to make it work * make it test against accelerate master branch * coming back, it's official: I don't know how to make it test against the master branch from accelerate * make it install accelerate from master on tests * go back to accelerate>=0.11 * undo prettier formatting on yml files * undo prettier formatting on yml files again
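In current diffusers the offloading path this PR builds on is exposed as enable_sequential_cpu_offload (the PR itself refers to cpu_offload); a hedged sketch, assuming accelerate is installed and using a placeholder model id:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Keep submodules on CPU and move each to the GPU only while it runs;
# do not call pipe.to("cuda") first, offloading manages device placement itself.
pipe.enable_sequential_cpu_offload()
image = pipe("a tiny robot watering a plant").images[0]
```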
-
- 25 Oct, 2022 1 commit
-
Pedro Cuenca authored
* Docs: refer to pre-RC version of PyTorch 1.13.0. * Remove temporary workaround for unavailable op. * Update comment to make it less ambiguous. * Remove use of contiguous in mps. It appears to no longer be necessary. * Special case: use einsum for much better performance in mps * Update mps docs. * Minor doc update. * Accept suggestion
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
-
- 24 Oct, 2022 1 commit
-
apolinario authored
* Update README.md Additionally add FLAX so the model card can be slimmer and point to this page * Find and replace all * v-1-5 -> v1-5 * revert test changes * Update README.md Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Update docs/source/quicktour.mdx Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * Update README.md Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * Update docs/source/quicktour.mdx Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * Update README.md Co-authored-by: Suraj Patil <surajp815@gmail.com> * Revert certain references to v1-5 * Docs changes * Apply suggestions from code review
Co-authored-by: apolinario <joaopaulo.passos+multimodal@gmail.com>
Co-authored-by: anton-l <anton@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
- 13 Oct, 2022 2 commits
-
Patrick von Platen authored
* up * finish * add more tests * up * up * finish
-
Patrick von Platen authored
* Give more customizable options for safety checker * Apply suggestions from code review * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py * Finish * make style * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * up
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
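A hedged sketch of one of the options this change covers, disabling the checker by passing safety_checker=None (placeholder model id and prompt):

```python
from diffusers import StableDiffusionPipeline

# Passing safety_checker=None skips the checker; the pipeline logs a warning
# about the usage conditions instead of erroring out.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", safety_checker=None
)
image = pipe("a bowl of ripe peaches on a wooden table").images[0]
```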
-