- 01 May, 2023 1 commit
Ilia Larchenko authored
The pipeline object stores its results in `images`, not in `sample`, so the current code blocks don't work.
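For reference, a minimal sketch of the corrected access pattern (the checkpoint id and prompt are illustrative):

```python
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint; any Stable Diffusion pipeline returns the same output object.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

result = pipe("an astronaut riding a horse")
image = result.images[0]    # correct: the pipeline output exposes `images`
# image = result.sample[0]  # broken: the output has no `sample` attribute
```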
- 28 Apr, 2023 1 commit
clarencechen authored
* Update Pix2PixZero Auto-correlation Loss
* Add Stable Diffusion DiffEdit pipeline
* Add draft documentation and import code
* Bugfixes and refactoring
* Add option to not decode latents in the inversion process
* Harmonize preprocessing
* Revert "Update Pix2PixZero Auto-correlation Loss" (reverts commit b218062fed08d6cc164206d6cb852b2b7b00847a)
* Update annotations
* Rename `compute_mask` to `generate_mask`
* Update documentation
* Update docs
* Update docs
* Fix copy
* Change shape of output latents to batch first
* Update docs
* Add first draft for tests
* Bugfix and update tests
* Add `cross_attention_kwargs` support for all pipeline methods
* Fix copies
* Add support for PIL image latents and mask broadcasting; update docs and tests; align `mask` argument to `mask_image`; remove height and width arguments
* Enable MPS tests
* Move example docstrings
* Fix test
* Fix test
* Fix pipeline inheritance
* Harmonize `prepare_image_latents` with StableDiffusionPix2PixZeroPipeline
* Register modules set to `None` in config for `test_save_load_optional_components`
* Move fixed logic to specific test class
* Clean changes to other pipelines
* Update new tests to coordinate with #2953
* Update slow tests for better results
* Add safety to avoid potential problems with torch.inference_mode
* Add reference in SD Pipeline Overview
* Fix tests again
* Enforce determinism in noise for generate_mask
* Fix copies
* Widen test tolerance for fp16 based on `test_stable_diffusion_upscale_pipeline_fp16`
* Add LoraLoaderMixin and update `prepare_image_latents`
* Clean up repeat and reg
* Bugfix
* Remove invalid args from docs; suppress spurious warning by repeating image before latent-to-mask gen
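A minimal usage sketch of the new DiffEdit pipeline, following the added documentation (the checkpoint id and image URL are illustrative):

```python
import torch
from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline
from diffusers.utils import load_image

pipe = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)

init_image = load_image("https://example.com/fruit_bowl.png").resize((768, 768))  # illustrative URL

# 1. Contrast source and target prompts to produce an editing mask.
mask = pipe.generate_mask(image=init_image, source_prompt="a bowl of fruits", target_prompt="a bowl of pears")
# 2. Invert the image into latents conditioned on the source prompt.
image_latents = pipe.invert(image=init_image, prompt="a bowl of fruits").latents
# 3. Denoise only the masked region toward the target prompt.
image = pipe(prompt="a bowl of pears", mask_image=mask, image_latents=image_latents).images[0]
```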
- 26 Apr, 2023 2 commits
Patrick von Platen authored
* Add all files * update * Make sure vae is memory efficient for PT 1 * make style
Patrick von Platen authored
* Add all files * update
- 25 Apr, 2023 2 commits
Patrick von Platen authored
* add
* clean
* up
* clean up more
* fix more tests
* Improve docs further
* improve
* more fixes docs
* Improve docs more
* Update src/diffusers/models/unet_2d_condition.py
* fix
* up
* update doc links
* make fix-copies
* add safety checker and watermarker to stage 3 doc page code snippets
* speed optimizations docs
* memory optimization docs
* make style
* add watermarking snippets to doc string examples
* make style
* use pt_to_pil helper functions in doc strings
* skip mps tests
* Improve safety
* make style
* new logic
* fix
* fix bad onnx design
* make new stable diffusion upscale pipeline model arguments optional
* define has_nsfw_concept when non-pil output type
* lowercase linked to notebook name

Co-authored-by: William Berman <WLBberman@gmail.com>
Isaac authored
Add `enable_vae_tiling` and `disable_vae_tiling` functions
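A minimal usage sketch of the new toggles (checkpoint id, prompt, and sizes are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Tiled VAE decoding keeps memory bounded when generating large images.
pipe.enable_vae_tiling()
image = pipe("a panorama of a mountain lake", height=1024, width=2048).images[0]

# Switch back to one-shot decoding at default resolutions.
pipe.disable_vae_tiling()
```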
- 21 Apr, 2023 1 commit
Patrick von Platen authored
* Add model offload to x4 upscaler * fix
- 20 Apr, 2023 1 commit
clarencechen authored
* Update Pix2PixZero Auto-correlation Loss
* Add fast inversion tests
* Clarify purpose and mark as deprecated; fix inversion prompt broadcasting
* Register modules set to `None` in config for `test_save_load_optional_components`
* Update new tests to coordinate with #2953
- 19 Apr, 2023 3 commits
hwuebben authored
* Update pipeline_stable_diffusion_inpaint_legacy.py
* fix preprocessing of PIL images with adequate batch size
* revert map
* add tests
* reformat
* Update test_stable_diffusion_inpaint_legacy.py
* Update test_stable_diffusion_inpaint_legacy.py
* Update test_stable_diffusion_inpaint_legacy.py
* Update test_stable_diffusion_inpaint_legacy.py
* next try to fix the style
* wth is this
* Update testing_utils.py
* Update testing_utils.py
* Update test_stable_diffusion_inpaint_legacy.py
* Update test_stable_diffusion_inpaint_legacy.py
* Update test_stable_diffusion_inpaint_legacy.py
* Update test_stable_diffusion_inpaint_legacy.py
* Update test_stable_diffusion_inpaint_legacy.py
* Update test_stable_diffusion_inpaint_legacy.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
1lint authored
* add mixin class for pipeline from original sd ckpt
* Improve
* make style
* merge main into
* Improve more
* fix more
* up
* Apply suggestions from code review
* finish docs
* rename
* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
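A minimal sketch of how the mixin is meant to be used, assuming the `from_ckpt` entry point it adds (the local checkpoint path and prompt are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a pipeline directly from an original Stable Diffusion checkpoint file
# instead of a diffusers-format repository.
pipe = StableDiffusionPipeline.from_ckpt(
    "./models/v1-5-pruned-emaonly.safetensors",  # illustrative local path
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a fantasy landscape, trending on artstation").images[0]
```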
cmdr2 authored
* [ckpt loader] Allow loading the Inpaint and Img2Img pipelines, while loading a ckpt model
* Address review comment from PR
* PyLint formatting
* Some more pylint fixes, unrelated to our change
* Another pylint fix
* Styling fix
- 17 Apr, 2023 2 commits
Patrick von Platen authored
Make sure correct timesteps are chosen for img2img
Patrick von Platen authored
Fix img2img processor with safety checker
- 16 Apr, 2023 1 commit
Tommaso De Rossi authored
fix breaking change
- 14 Apr, 2023 2 commits
YiYi Xu authored
* fix default
Takuma Mori authored
* add guess mode (WIP)
* fix uncond/cond order
* support guidance_scale=1.0 and batch != 1
* remove magic coeff
* add docstring
* add integration test
* add document to controlnet.mdx
* made the comments a bit more explanatory
* fix table
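A minimal usage sketch of guess mode (checkpoint ids and the conditioning-image URL are illustrative):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

canny_image = load_image("https://example.com/canny_edges.png")  # illustrative URL

# Guess mode lets the ControlNet recognize the content of the conditioning image
# without a prompt; the docs pair it with an empty prompt and a lower guidance scale.
image = pipe("", image=canny_image, guess_mode=True, guidance_scale=3.0).images[0]
```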
- 13 Apr, 2023 2 commits
Joseph Coffland authored
Allow stable diffusion attend and excite pipeline to work with any size output image. Re: #2476, #2603
YiYi Xu authored
- 12 Apr, 2023 4 commits
Ernie Chu authored
* Fix a bug in the panorama pipeline when not doing CFG * enhance code quality * apply formatting

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Sean Sube authored
* add support for prompt embeds to SD ONNX pipeline * fix up the pipeline copies * add prompt embeds param to other ONNX pipelines * fix up prompt embeds param for SD upscaling ONNX pipeline * add missing type annotations to ONNX pipes
Susung Hong authored
* Update index.mdx * Edit docs & add HF space link * Only change equation numbers in comments
Sayak Paul authored
* add: first draft for a better LoRA enabler.
* make fix-copies.
* feat: backward compatibility.
* add: entry to the docs.
* add: tests.
* fix: docs.
* fix: norm group test for UNet3D.
* feat: add support for flat dicts.
* add deprecation message instead of warning.
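A minimal usage sketch, assuming the new loader exposes `load_lora_weights` as in later diffusers documentation (the LoRA repo id and prompt are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load LoRA weights for the UNet (and the text encoder, when present).
pipe.load_lora_weights("sayakpaul/sd-model-finetuned-lora-t4")  # illustrative repo id

# The LoRA contribution can be scaled at inference time via cross_attention_kwargs.
image = pipe("a pokemon with blue eyes", cross_attention_kwargs={"scale": 0.7}).images[0]
```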
- 11 Apr, 2023 4 commits
Will Berman authored
Add group norm type to attention processor cross attention norm.

This lets the cross attention norm use both a group norm block and a layer norm block. The group norm operates along the channels dimension and requires input shape (batch size, channels, *), whereas the layer norm with a single `normalized_shape` dimension only operates over the least significant dimension, i.e. (*, channels).

The channels we want to normalize are the hidden dimension of the encoder hidden states. By convention, the encoder hidden states are always passed as (batch size, sequence length, hidden states). This means the layer norm can operate on the tensor without modification, but the group norm requires flipping the last two dimensions to operate on (batch size, hidden states, sequence length).

All existing attention processors will have the same logic, and we can consolidate it in a helper function `prepare_encoder_hidden_states`.

prepare_encoder_hidden_states -> norm_encoder_hidden_states (re: @patrickvonplaten)

Move the norm_cross-defined check outside norm_encoder_hidden_states; add missing attn.norm_cross check.
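A sketch of the helper's logic as described above; the names follow the commit message, and the actual implementation in the attention processor module may differ:

```python
import torch
import torch.nn as nn


def norm_encoder_hidden_states(attn, encoder_hidden_states: torch.Tensor) -> torch.Tensor:
    # encoder_hidden_states arrives as (batch_size, sequence_length, hidden_dim).
    if attn.norm_cross is None:
        return encoder_hidden_states
    if isinstance(attn.norm_cross, nn.LayerNorm):
        # LayerNorm normalizes the last dimension, so the tensor can be used as-is.
        return attn.norm_cross(encoder_hidden_states)
    if isinstance(attn.norm_cross, nn.GroupNorm):
        # GroupNorm normalizes dim 1 (channels), so flip the last two dims around the call.
        encoder_hidden_states = encoder_hidden_states.transpose(1, 2)
        encoder_hidden_states = attn.norm_cross(encoder_hidden_states)
        return encoder_hidden_states.transpose(1, 2)
    raise ValueError("Unexpected cross attention norm type")
```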
Pedro Cuenca authored
When doing generation manually and using guidance_scale as a static argument.
Will Berman authored
Patrick von Platen authored
* [Config] Fix config prints and save, load
* Only use potential nn.Modules for dtype and device
* Correct vae image processor
* make sure in_channels is not accessed directly
* make sure in channels is only accessed via config
* Make sure schedulers only access config attributes
* Make sure to access config in SAG
* Fix vae processor and make style
* add tests
* uP
* make style
* Fix more naming issues
* Final fix with vae config
* change more
- 06 Apr, 2023 1 commit
cmdr2 authored
Update the K-Diffusion SD pipeline to allow calling it with only prompt_embeds (instead of always requiring a prompt) (#2962)
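A minimal sketch of the resulting usage, assuming prompt embeddings built manually from the pipeline's own tokenizer and text encoder (requires the `k-diffusion` package; the checkpoint id and prompt are illustrative):

```python
import torch
from diffusers import StableDiffusionKDiffusionPipeline

pipe = StableDiffusionKDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.set_scheduler("sample_euler")

# Encode the prompt once with the pipeline's tokenizer and text encoder.
text_inputs = pipe.tokenizer(
    "a watercolor painting of a lighthouse",
    padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    prompt_embeds = pipe.text_encoder(text_inputs.input_ids.to("cuda"))[0]

# After this change, `prompt` can be omitted entirely.
image = pipe(prompt_embeds=prompt_embeds, num_inference_steps=25).images[0]
```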
- 04 Apr, 2023 1 commit
YiYi Xu authored
Co-authored-by: yiyixuxu <yixu310@gmail.com>
- 31 Mar, 2023 4 commits
Patrick von Platen authored
Nipun Jindal authored
* [2884]: Fix cross_attention_kwargs in StableDiffusionImg2ImgPipeline * [Build Fix] * [Build Fix]

Co-authored-by: njindal <njindal@adobe.com>
Sandeep authored
* Remove suggestion to use cuDNN benchmark in docs * removing the wrong line * add support for embeds * fix line length
Takuma Mori authored
* add use_karras_sigmas option (thanks @Stax124)
* fix sigma_min/max from scheduler.sigmas
* add docstring
* revert to use k_diffusion_model.sigma, to(device)
* add integration test
* make style
- 30 Mar, 2023 1 commit
Pi Esposito authored
* add load textual inversion embeddings draft
* fix quality
* fix typo
* make fix copies
* move to textual inversion mixin
* make it accept from sd-concept library
* accept list of paths to embeddings
* fix styling of stable diffusion pipeline
* add dummy TextualInversionMixin
* add docstring to TextualInversionMixin
* add load textual inversion embeddings draft
* fix quality
* fix typo
* make fix copies
* move to textual inversion mixin
* make it accept from sd-concept library
* accept list of paths to embeddings
* fix styling of stable diffusion pipeline
* add dummy TextualInversionMixin
* add docstring to TextualInversionMixin
* add case for parsing embedding from Auto1111 UI format (Co-authored-by: Evan Jones <evan.a.jones3@gmail.com>, Ana Tamais <aninhamoraestamais@gmail.com>)
* fix style after rebase
* move textual inversion mixin to loaders
* move mixin inheritance to DiffusionPipeline from StableDiffusionPipeline
* update dummy class name
* addressed all comments
* fix old dangling import
* fix style
* proposal
* remove bogus
* Apply suggestions from code review (Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>, Will Berman <wlbberman@gmail.com>)
* finish
* make style
* up
* fix code quality
* fix code quality - again
* fix code quality - 3
* fix alt diffusion code quality
* fix model editing pipeline
* Apply suggestions from code review (Co-authored-by: Pedro Cuenca <pedro@huggingface.co>)
* Finish

Co-authored-by: Evan Jones <evan.a.jones3@gmail.com>
Co-authored-by: Ana Tamais <aninhamoraestamais@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Will Berman <wlbberman@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
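A minimal usage sketch of the resulting loader (the concept repo id, placeholder token, and prompt are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a learned embedding, e.g. from the sd-concepts-library, then trigger it
# with its placeholder token inside the prompt.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
image = pipe("a photo of a <cat-toy> on a beach").images[0]
```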
- 28 Mar, 2023 5 commits
dg845 authored
Add warning in __init__ if user loads a checkpoint with pipeline.unet.config.in_channels other than 9.
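A sketch of the added guard, assuming a module-level logger; the exact wording of the warning in the pipeline differs:

```python
import logging

logger = logging.getLogger(__name__)


def warn_if_unexpected_in_channels(unet) -> None:
    # Inpainting checkpoints use 9 UNet input channels:
    # 4 noise latents + 4 masked-image latents + 1 mask channel.
    if unet.config.in_channels != 9:
        logger.warning(
            "You have loaded a UNet with %d input channels, but the inpainting pipeline "
            "expects 9. Inpainting results may be poor with this checkpoint.",
            unet.config.in_channels,
        )
```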
cmdr2 authored
Update the legacy inpainting SD pipeline to allow calling it with only prompt_embeds (instead of always requiring a prompt) (#2842). Fixes the error 'required positional argument: prompt' when the legacy inpaint pipeline is called with only prompt_embeds.
Li-Huai (Allan) Lin authored
* Remove duplicate sentence * format
Stax124 authored
* Allow user to disable SafetyChecker and enable dtypes if loading models from .ckpt or .safetensors
* Fix Import sorting (Ruff error)
* Get rid of the dtype convert method as it was implemented all along
* Fix the docstring
* Fix ruff formatting

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Sayak Paul authored
* add: better warning messages when handling multiple conditioning. * fix: handling of controlnet_conditioning_scale
- 27 Mar, 2023 2 commits
Eugene Lyapustin authored
Pedro Cuenca authored
* Apply same ruff settings as in transformers (see https://github.com/huggingface/transformers/blob/main/pyproject.toml)
* Apply new style rules
* Style
* style
* remove list, ruff wouldn't auto fix.

Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>