"official/nlp/__init__.py" did not exist on "7cfb6bbdd4e4954a809d0f2e339a6c05b49c1e9c"
- 14 Feb, 2023 1 commit
-
-
Patrick von Platen authored
replace type hints
-
- 07 Feb, 2023 1 commit
-
-
Patrick von Platen authored
* before running make style
* remove leftovers from flake8
* finish
* make fix-copies
* final fix
* more fixes
-
- 30 Dec, 2022 1 commit
-
-
Patrick von Platen authored
* move files a bit
* more refactors
* fix more
* more fixes
* fix more onnx
* make style
* upload
* fix
* up
* fix more
* up again
* up
* small fix
* Update src/diffusers/__init__.py
* correct

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
- 22 Nov, 2022 1 commit
-
-
regisss authored
-
- 07 Nov, 2022 2 commits
-
-
Alex McKinney authored
* adds image-to-image inpainting with `PIL.Image.Image` inputs; the base implementation claims to support `torch.Tensor`, but it seems it would also fail in this case
* `make style` and `make quality`
* updates community examples readme

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Duong A. Nguyen authored
fix typo
-
- 04 Nov, 2022 1 commit
-
-
Pi Esposito authored
* add enable sequential cpu offloading to other stable diffusion pipelines
* trigger ci
* fix styling
* interpolate before converting to device to avoid breaking when cpu_offload is enabled with fp16
* style again, I need to stop forgetting this thing
* fix inpainting bug that could cause device misalignment
* Apply suggestions from code review

Co-authored-by: Pedro Gengo <pedro.gabriel.lourenco@hotmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
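For reference, a minimal sketch of how the sequential CPU offloading added here is used; the checkpoint id and prompt are placeholders, and the call assumes `accelerate` is installed and a CUDA device is available.

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint; any Stable Diffusion checkpoint works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Sub-models are moved to the GPU one at a time and offloaded back to CPU
# afterwards, trading some speed for a much smaller peak VRAM footprint.
# Note: do not call pipe.to("cuda") when sequential offloading is enabled.
pipe.enable_sequential_cpu_offload()

image = pipe("a photo of an astronaut riding a horse").images[0]
```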
-
- 03 Nov, 2022 1 commit
-
-
Pedro Cuenca authored
* remove batch size from repeat
* repeat empty string if uncond_tokens is None
* fix inpaint pipes
* return whitespace back to pass code quality
* Apply suggestions from code review
* Fix typos

Co-authored-by: Had <had-95@yandex.ru>
-
- 02 Nov, 2022 1 commit
-
-
MatthieuTPHR authored
* 2x speedup using memory efficient attention
* remove einops dependency
* Swap K, M in op instantiation
* Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter
* make xformers a soft dependency
* remove one-liner functions
* change one-letter variables to appropriate names
* Remove env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method
* Add memory efficient attention toggle to img2img and inpaint pipelines
* Clearer management of xformers' availability
* update optimizations markdown to add info about memory efficient attention
* add benchmarks for TITAN RTX
* More detailed explanation of how the memory-efficient attention benchmarks were run
* Remove autocast from optimization markdown
* import_utils: import torch only if it is available

Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
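As a usage sketch of the toggle introduced here (assumes `xformers` is installed and a CUDA device is available; checkpoint id and prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Route attention through xformers' memory-efficient kernels.
pipe.enable_xformers_memory_efficient_attention()
image = pipe("a fantasy landscape, concept art").images[0]

# The optimization can be switched off again if needed.
pipe.disable_xformers_memory_efficient_attention()
```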
-
- 31 Oct, 2022 3 commits
-
-
Patrick von Platen authored
-
Patrick von Platen authored
* [Better scheduler docs] Improve usage examples of schedulers
* finish
* fix warnings and add test
* finish
* more replacements
* adapt fast tests hf token
* correct more
* Apply suggestions from code review
* Integrate compatibility with Euler

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
hlky authored
* k-diffusion-euler
* make style, make quality
* make fix-copies
* fix tests for euler a
* Update src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py
* Update src/diffusers/schedulers/scheduling_euler_discrete.py
* remove unused arg and method
* update doc
* quality
* make flake happy
* use logger instead of warn
* raise error instead of deprecation
* don't require scipy
* pass generator in step
* fix tests
* Apply suggestions from code review
* Update tests/test_scheduler.py
* remove unused generator
* pass generator as extra_step_kwargs
* update tests
* pass generator as kwarg
* quality
* fix test for lms
* fix tests

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
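A minimal sketch of swapping one of the new Euler schedulers into an existing pipeline (checkpoint id and prompt are placeholders):

```python
from diffusers import EulerAncestralDiscreteScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Rebuild the scheduler from the pipeline's existing config so the beta
# schedule and number of training timesteps stay consistent.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a watercolor painting of a lighthouse", num_inference_steps=30
).images[0]
```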
-
- 26 Oct, 2022 1 commit
-
-
Hu Ye authored
-
- 24 Oct, 2022 1 commit
-
-
apolinario authored
* Update README.md: additionally add FLAX so the model card can be slimmer and point to this page
* Find and replace all
* v-1-5 -> v1-5
* revert test changes
* Update README.md and docs/source/quicktour.mdx (several rounds of review suggestions)
* Revert certain references to v1-5
* Docs changes
* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: apolinario <joaopaulo.passos+multimodal@gmail.com>
Co-authored-by: anton-l <anton@huggingface.co>
-
- 19 Oct, 2022 2 commits
-
-
Suraj Patil authored
* begin pipe
* add new pipeline
* add tests
* correct fast test
* up
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
* Update tests/test_pipelines.py
* up
* up
* make style
* add fp16 test
* doc, comments
* up

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
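A sketch of the new dedicated inpainting pipeline; the checkpoint id, image files, and prompt are placeholders.

```python
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"
).to("cuda")

# Placeholder inputs; both images are resized to the model resolution.
init_image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask_image = Image.open("mask.png").convert("RGB").resize((512, 512))

# White pixels in the mask are repainted, black pixels are preserved.
result = pipe(
    prompt="a yellow cat sitting on a park bench",
    image=init_image,
    mask_image=mask_image,
).images[0]
```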
-
Patrick von Platen authored
* finish
* up
* Apply suggestions from code review
* Update src/diffusers/pipeline_utils.py
* Finish

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
-
- 13 Oct, 2022 3 commits
-
-
Patrick von Platen authored
* up
* finish
* add more tests
* up
* up
* finish
-
Patrick von Platen authored
* Give more customizable options for safety checker
* Apply suggestions from code review
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
* Finish
* make style
* Apply suggestions from code review
* up

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
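One of the customization options this touches is opting out of the default checker at load time; a hedged sketch (the checkpoint id is a placeholder, and disabling the checker is the caller's responsibility):

```python
from diffusers import StableDiffusionPipeline

# Passing safety_checker=None disables the default checker; diffusers logs a
# warning reminding you of the model license and intended-use conditions.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    safety_checker=None,
)
```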
-
Anton Lozhkov authored
-
- 07 Oct, 2022 1 commit
-
-
Suraj Patil authored
* handle dtype in vae and image2image pipeline
* fix inpaint in fp16
* dtype should be handled in add_noise
* style
* address review comments
* add simple fast tests to check fp16
* fix test name
* put mask in fp16
-
- 06 Oct, 2022 1 commit
-
-
Suraj Patil authored
* compute text embeds per prompt
* don't repeat uncond prompts
* repeat separately
* update image2image
* fix repeat uncond embeds
* adapt inpaint pipeline
* fix uncond tokens in img2img
* add tests and fix uncond embeds in img2img and inpaint pipe
-
- 05 Oct, 2022 3 commits
-
-
Anton Lozhkov authored
* init
* improve add_noise
* [debug start] run slow test
* [debug end]
* quick revert
* Add docstrings and warnings + API tests
* Make the warning less spammy
-
Kashif Rasul authored
* pytorch timesteps
* style
* get rid of if-else
* fix test

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Yuta Hayashibe authored
* Avoid negative strides for tensors
* Changed not to make torch.tensor
* Removed a needless copy
-
- 04 Oct, 2022 1 commit
-
-
Yuta Hayashibe authored
* Add an argument "negative_prompt" * Fix argument order * Fix to use TypeError instead of ValueError * Removed needless batch_size multiplying * Fix to multiply by batch_size * Add truncation=True for long negative prompt * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_onnx.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_onnx.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Fix styles * Renamed ucond_tokens to uncond_tokens * Added description about "negative_prompt" Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
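A minimal sketch of the new `negative_prompt` argument (checkpoint id and prompts are placeholders):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

image = pipe(
    prompt="a portrait photo of a corgi wearing a crown",
    negative_prompt="blurry, low quality, deformed",  # concepts to steer away from
    guidance_scale=7.5,
).images[0]
```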
-
- 03 Oct, 2022 1 commit
-
-
Patrick von Platen authored
* [Utils] Add deprecate function
* many small iterations ("up", "fix")
* move to deprecation utils file
* fix more
-
- 02 Oct, 2022 1 commit
-
-
James R T authored
* Add callback parameters for Stable Diffusion pipelines
* Lint code with `black --preview`
* Refactor callback implementation for Stable Diffusion pipelines
* Fix missing imports
* Fix documentation format
* Add kwargs parameter to standardize with other pipelines
* Modify Stable Diffusion pipeline callback parameters
* Remove useless imports
* Change types for timestep and onnx latents
* Fix docstring style
* Return decode_latents and run_safety_checker back into __call__
* Remove unused imports
* Add intermediate state tests for Stable Diffusion pipelines
* Fix intermediate state tests for Stable Diffusion pipelines

Signed-off-by: James R T <jamestiotio@gmail.com>
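A sketch of the per-step callback these commits add: the callable receives the step index, the timestep, and the current latents, and `callback_steps` controls how often it fires (checkpoint id and prompt are placeholders).

```python
import torch
from diffusers import StableDiffusionPipeline

def log_progress(step: int, timestep: int, latents: torch.FloatTensor) -> None:
    # Inspect or persist intermediate latents here.
    print(f"step={step} timestep={timestep} latents_std={latents.std().item():.4f}")

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

image = pipe(
    "an isometric illustration of a tiny island",
    callback=log_progress,
    callback_steps=5,  # invoke the callback every 5 denoising steps
).images[0]
```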
-
- 30 Sep, 2022 1 commit
-
-
Nouamane Tazi authored
* initial commit
* make UNet stream capturable
* try to fix noise_pred value
* remove cuda graph and keep NB
* non-blocking unet with PNDMScheduler
* make timesteps np arrays for pndm scheduler because lists don't get formatted to tensors in `self.set_format`
* make max async in pndm
* use channels-last format in unet
* avoid moving timesteps device in each unet call
* avoid memcpy op in `get_timestep_embedding`
* add `channels_last` kwarg to `DiffusionPipeline.from_pretrained`
* update TODO
* replace `channels_last` kwarg with `memory_format` for more generality
* revert the channels_last changes to leave it for another PR
* remove non_blocking when moving input ids to device
* remove blocking from all .to() operations at beginning of pipeline
* fix merging
* fix merging
* model can run in other precisions without autocast
* attn refactoring
* Revert "attn refactoring" (reverts commit 0c70c0e189cd2c4d8768274c9fcf5b940ee310fb)
* remove restriction to run conv_norm in fp32
* use `baddbmm` instead of `matmul` in attention for better perf
* removing all reshapes to test perf
* Revert "removing all reshapes to test perf" (reverts commit 006ccb8a8c6bc7eb7e512392e692a29d9b1553cd)
* add shapes comments
* hardcore whats needed for jitting
* Revert "hardcore whats needed for jitting" (reverts commit 2fa9c698eae2890ac5f8e367ca80532ecf94df9a)
* Revert "remove restriction to run conv_norm in fp32" (reverts commit cec592890c32da3d1b78d38b49e4307aedf459b9)
* revert using baddbmm in attention's forward
* cleanup comment
* remove restriction to run conv_norm in fp32; no quality loss was noticed (reverts commit cc9bc1339c998ebe9e7d733f910c6d72d9792213)
* add more optimization techniques to docs
* Revert "add shapes comments" (reverts commit 31c58eadb8892f95478cdf05229adf678678c5f4)
* apply suggestions
* make quality
* apply suggestions
* styling
* `scheduler.timesteps` are now arrays so we don't need .to()
* remove useless .type()
* use mean instead of max in `test_stable_diffusion_inpaint_pipeline_k_lms`
* move scheduler timesteps to correct device if tensors
* add device to `set_timesteps` in LMSD scheduler
* `self.scheduler.set_timesteps` now uses device arg for schedulers that accept it
* quick fix
* styling
* remove kwargs from schedulers `set_timesteps`
* revert to using max in K-LMS inpaint pipeline test
* Revert "`self.scheduler.set_timesteps` now uses device arg for schedulers that accept it" (reverts commit 00d5a51e5c20d8d445c8664407ef29608106d899)
* move timesteps to correct device before loop in SD pipeline
* apply previous fix to other SD pipelines
* UNet now accepts tensor timesteps even on the wrong device, to avoid errors; it shouldn't affect performance if timesteps are already on the correct device, but it does slow things down if they're on the wrong device
* fix pipeline when timesteps are arrays with strides
-
- 27 Sep, 2022 2 commits
-
-
Kashif Rasul authored
* pytorch only schedulers
* fix style
* remove match_shape
* pytorch only ddpm
* remove SchedulerMixin
* remove numpy from karras_ve
* fix types
* remove numpy from lms_discrete
* remove numpy from pndm
* fix typo
* remove mixin and numpy from sde_vp and ve
* remove remaining tensor_format
* fix style
* sigmas has to be a torch tensor
* removed set_format in readme
* remove set_format from docs
* remove set_format from pipelines
* update tests
* fix typo
* continue to use mixin
* fix imports
* removed unused imports
* match shape instead of assuming image shapes
* remove import typo
* update call to add_noise
* use math instead of numpy
* fix t_index
* removed commented-out numpy tests
* timesteps need to be discrete
* cast timesteps to int in flax scheduler too
* fix device mismatch issue
* small fix
* Update src/diffusers/schedulers/scheduling_pndm.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Yuta Hayashibe authored
* Return encoded texts by DiffusionPipelines
* Updated README to show how to use encoded_text_input
* Reverted examples in README.md
* Reverted all
* Warning for long prompts
* Fix bugs
* Formatted
-
- 23 Sep, 2022 1 commit
-
-
Ryan Russell authored
* refactor: pipelines readability improvements
* docs: remove todo comment from flax pipeline

Signed-off-by: Ryan Russell <git@ryanrussell.org>
-
- 20 Sep, 2022 1 commit
-
-
Anton Lozhkov authored
* Add the K-LMS scheduler to the inpainting pipeline + tests
* Remove redundant casts
-
- 19 Sep, 2022 1 commit
-
-
Yuta Hayashibe authored
* Fix a setting bug
* Fix typos
* Reverted params to parms
-
- 17 Sep, 2022 1 commit
-
-
Jonatan Kłosko authored
* Unify offset configuration in DDIM and PNDM schedulers
* Format; add missing variables
* Fix pipeline test
* Update src/diffusers/schedulers/scheduling_ddim.py
* Default set_alpha_to_one to False
* Format
* Add tests
* Format
* add deprecation warning

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
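A hedged sketch of what the unified offset configuration looks like from the user side; the parameter values mirror the common Stable Diffusion scheduler config and are illustrative, not taken from this commit.

```python
from diffusers import DDIMScheduler

scheduler = DDIMScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    steps_offset=1,          # offset now lives in the scheduler config
    set_alpha_to_one=False,  # the default discussed in this change
)

# No per-call offset kwarg is needed any more.
scheduler.set_timesteps(50)
```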
-
- 16 Sep, 2022 2 commits
-
-
Suraj Patil authored
* accept tensors
* fix mask handling
* make device placement cleaner
* update doc for mask image
-
Yuta Hayashibe authored
* Fix typos
* Add a typo check action
* Fix a bug
* Changed to a manual typo check for now (ref: https://github.com/huggingface/diffusers/pull/483#pullrequestreview-1104468010)
* Removed a confusing message
* Renamed "nin_shortcut" to "in_shortcut"
* Add memo about NIN

Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
-
- 13 Sep, 2022 1 commit
-
-
Pedro Cuenca authored
Fix `disable_attention_slicing` in pipelines.
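For context, a minimal sketch of the attention-slicing toggle this fix concerns (checkpoint id and prompt are placeholders):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")

pipe.enable_attention_slicing()   # compute attention in slices to lower peak VRAM
image = pipe("a macro photo of a dew-covered leaf").images[0]

pipe.disable_attention_slicing()  # restore full attention for maximum speed
```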
-
- 08 Sep, 2022 3 commits
-
-
Patrick von Platen authored
-
Patrick von Platen authored
* [Outputs] Improve syntax
* improve more
* fix docstring return
* correct all
* up

Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
-
Pedro Cuenca authored
* Initial version of `fp16` page.
* Fix typo in README.
* Change titles of fp16 section in toctree.
* PR suggestions
* Clarify attention slicing is useful even for batches of 1 (explained by @patrickvonplaten after a suggestion by @keturn).
* Do not talk about `batches` in `enable_attention_slicing`.
* Use Tip (just for fun), add link to method.
* Comment about fp16 results looking the same as float32 in practice.
* Style: docstring line wrapping.

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
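A minimal sketch of the kind of half-precision setup the `fp16` page documents; the checkpoint id and prompt are placeholders, and a CUDA device is assumed.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # load weights in half precision
).to("cuda")

# As clarified above, attention slicing is worthwhile even at batch size 1.
pipe.enable_attention_slicing()

image = pipe("an oil painting of a ship in a storm").images[0]
```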
-