- 28 Apr, 2023 1 commit
-
-
clarencechen authored
* Update Pix2PixZero Auto-correlation Loss
* Add Stable Diffusion DiffEdit pipeline
* Add draft documentation and import code
* Bugfixes and refactoring
* Add option to not decode latents in the inversion process
* Harmonize preprocessing
* Revert "Update Pix2PixZero Auto-correlation Loss" (reverts commit b218062fed08d6cc164206d6cb852b2b7b00847a)
* Update annotations
* Rename `compute_mask` to `generate_mask`
* Update documentation and docs
* Fix copies
* Change shape of output latents to batch first
* Add first draft for tests
* Bugfix and update tests
* Add `cross_attention_kwargs` support for all pipeline methods
* Add support for PIL image latents and mask broadcasting; align `mask` argument to `mask_image`; remove height and width arguments; update docs and tests
* Enable MPS tests
* Move example docstrings
* Fix tests
* Fix pipeline inheritance
* Harmonize `prepare_image_latents` with StableDiffusionPix2PixZeroPipeline
* Register modules set to `None` in config for `test_save_load_optional_components`
* Move fixed logic to specific test class
* Clean changes to other pipelines
* Update new tests to coordinate with #2953
* Update slow tests for better results
* Add safety to avoid potential problems with torch.inference_mode
* Add reference in SD Pipeline Overview
* Enforce determinism in noise for `generate_mask`
* Widen test tolerance for fp16 based on `test_stable_diffusion_upscale_pipeline_fp16`
* Add LoraLoaderMixin and update `prepare_image_latents`
* Clean up repeat and reg
* Bugfix
* Remove invalid args from docs
* Suppress spurious warning by repeating image before latent-to-mask generation
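As context for the new pipeline, here is a minimal sketch of how the DiffEdit pieces referenced above (`generate_mask`, inversion latents, `mask_image`) fit together; the checkpoint, image URL, and prompts are placeholder assumptions, not taken from this commit:

```python
import torch
from diffusers import StableDiffusionDiffEditPipeline, DDIMScheduler, DDIMInverseScheduler
from diffusers.utils import load_image

# Load the DiffEdit pipeline with a DDIM scheduler and its inverse for the inversion step.
pipe = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)

image = load_image("https://example.com/fruit_bowl.png").resize((768, 768))  # hypothetical input image

# 1. Generate an editing mask from the difference between source and target prompts.
mask_image = pipe.generate_mask(
    image=image, source_prompt="a bowl of fruits", target_prompt="a bowl of pears"
)

# 2. Invert the input image to latents (the step that gained the "skip decode" option above).
inv_latents = pipe.invert(prompt="a bowl of fruits", image=image).latents

# 3. Denoise with the target prompt, restricted to the masked region.
edited = pipe(
    prompt="a bowl of pears", mask_image=mask_image, image_latents=inv_latents
).images[0]
```

The `generate_mask` name reflects the rename from `compute_mask` noted above.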
-
- 27 Apr, 2023 4 commits
-
-
Robert Dargavel Smith authored
* config fixes * deprecate get_input_dims
-
Xie Zejian authored
-
apolinário authored
Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
-
Isaac authored
* removed unnecessary parameters from get_up_block and get_down_block functions * adding resnet_skip_time_act, resnet_out_scale_factor and cross_attention_norm to get_up_block and get_down_block functions --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 26 Apr, 2023 3 commits
-
-
Patrick von Platen authored
* Post release * fix more
-
Patrick von Platen authored
* Add all files * update * Make sure vae is memory efficient for PT 1 * make style
-
Patrick von Platen authored
* Add all files * update
-
- 25 Apr, 2023 2 commits
-
-
Patrick von Platen authored
* Add, clean up, and fix more tests
* Improve docs further; more doc fixes
* Update src/diffusers/models/unet_2d_condition.py
* Update doc links
* make fix-copies
* Add safety checker and watermarker to stage 3 doc page code snippets
* Speed optimization docs
* Memory optimization docs
* Add watermarking snippets to doc string examples
* Use pt_to_pil helper functions in doc strings
* Skip MPS tests
* Improve safety
* make style; new logic; fix
* Fix bad ONNX design
* Make new stable diffusion upscale pipeline model arguments optional
* Define has_nsfw_concept when non-PIL output type
* Lowercase linked-to notebook name
---------
Co-authored-by: William Berman <WLBberman@gmail.com>
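The `pt_to_pil` helper mentioned above converts a batch of PyTorch image tensors into PIL images between pipeline stages. A minimal sketch of what it does (the tensor here is a stand-in for a pipeline output produced with `output_type="pt"`, not a real model output):

```python
import torch
from diffusers.utils import pt_to_pil

# `pt_to_pil` takes a (batch, channels, height, width) tensor in roughly [-1, 1]
# and returns a list of PIL images; docstring examples use it between pipeline stages.
images_pt = torch.rand(1, 3, 64, 64) * 2 - 1  # stand-in for a pipeline output
images_pil = pt_to_pil(images_pt)
images_pil[0].save("stage_output.png")
```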
-
Isaac authored
Adding enable_vae_tiling and disable_vae_tiling functions
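A short usage sketch of the new toggles; the checkpoint, prompt, and resolution are placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Decode the latents tile-by-tile so very large images fit in VRAM.
pipe.enable_vae_tiling()
image = pipe("a wide panoramic landscape", width=2048, height=512).images[0]

# Restore the default single-pass VAE decode.
pipe.disable_vae_tiling()
```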
-
- 21 Apr, 2023 2 commits
-
-
Sanchit Gandhi authored
-
Patrick von Platen authored
* Add model offload to x4 upscaler * fix
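A minimal sketch of turning on model offload for the x4 upscaler; the input image URL and prompt are placeholders:

```python
import torch
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
)
# Keep only the sub-model that is currently running on the GPU; the rest stays on CPU.
pipe.enable_model_cpu_offload()

low_res = load_image("https://example.com/low_res.png")  # hypothetical low-resolution input
upscaled = pipe(prompt="a photo of a cat", image=low_res).images[0]
```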
-
- 20 Apr, 2023 1 commit
-
-
clarencechen authored
* Update Pix2PixZero Auto-correlation Loss * Add fast inversion tests * Clarify purpose and mark as deprecated * Fix inversion prompt broadcasting * Register modules set to `None` in config for `test_save_load_optional_components` * Update new tests to coordinate with #2953
-
- 19 Apr, 2023 4 commits
-
-
superhero-7 authored
* Modified altdiffusion pipeline to support altdiffusion-m18 --------- Co-authored-by: root <fulong_ye@163.com>
-
hwuebben authored
* Update pipeline_stable_diffusion_inpaint_legacy.py
* Fix preprocessing of PIL images with adequate batch size
* Revert map
* Add tests
* Reformat
* Repeated iterations on test_stable_diffusion_inpaint_legacy.py and testing_utils.py to fix style and tests
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
1lint authored
* add mixin class for pipeline from original sd ckpt * Improve * make style * merge main into * Improve more * fix more * up * Apply suggestions from code review * finish docs * rename * make style --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
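A rough usage sketch of the new mixin, assuming the entry point is exposed as `from_ckpt` on the pipeline class (the checkpoint path is a placeholder):

```python
from diffusers import StableDiffusionPipeline

# Load a pipeline directly from an original Stable Diffusion .ckpt/.safetensors file
# instead of a diffusers-format repo (method name as of this release; path is hypothetical).
pipe = StableDiffusionPipeline.from_ckpt("path/to/model.ckpt")
pipe.to("cuda")
```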
-
cmdr2 authored
* [ckpt loader] Allow loading the Inpaint and Img2Img pipelines, while loading a ckpt model * Address review comment from PR * PyLint formatting * Some more pylint fixes, unrelated to our change * Another pylint fix * Styling fix
-
- 18 Apr, 2023 2 commits
-
-
Will Berman authored
This mimics the dtype cast for the standard time embeddings
-
Will Berman authored
Adding act fn config to the unet timestep class embedding and conv activation. The custom activation defaults to silu, which is the default activation function for both the conv act and the timestep class embeddings, so default behavior is not changed. The only unet that uses the custom activation is the stable diffusion latent upscaler https://huggingface.co/stabilityai/sd-x2-latent-upscaler/blob/main/unet/config.json (I ran a script against the hub to confirm). The latent upscaler does not use the conv activation or the timestep class embeddings, so we don't change its behavior.
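A sketch of the idea, assuming the relevant config field is the UNet's `act_fn`; the small block/channel settings below are illustrative, not a real checkpoint config:

```python
from diffusers import UNet2DConditionModel

# The activation used for the conv stack (and, per this change, the timestep class
# embedding) comes from the model config rather than being hard-coded. "silu" is the
# existing default, so configs that omit the field behave exactly as before.
unet = UNet2DConditionModel(
    sample_size=32,
    block_out_channels=(32, 64),
    down_block_types=("CrossAttnDownBlock2D", "DownBlock2D"),
    up_block_types=("UpBlock2D", "CrossAttnUpBlock2D"),
    cross_attention_dim=32,
    act_fn="silu",
)
```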
-
- 17 Apr, 2023 4 commits
-
-
Patrick von Platen authored
* Better deprecation message * Better doc string * Improve __getattr__ * Many more fixes and improvements * Apply suggestions from code review * make style * Fix all rest, add tests, and remove old deprecation fns --------- Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
Patrick von Platen authored
-
Patrick von Platen authored
Make sure correct timesteps are chosen for img2img
-
Patrick von Platen authored
Fix img2img processor with safety checker
-
- 16 Apr, 2023 1 commit
-
-
Tommaso De Rossi authored
fix breaking change
-
- 14 Apr, 2023 2 commits
-
-
YiYi Xu authored
* fix default
-
Takuma Mori authored
* add guess mode (WIP) * fix uncond/cond order * support guidance_scale=1.0 and batch != 1 * remove magic coeff * add docstring * add integration test * add document to controlnet.mdx * made the comments a bit more explanatory * fix table
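A minimal sketch of guess mode in use, combined with `guidance_scale=1.0` as supported by this change; the checkpoints and conditioning image URL are placeholders:

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

canny_image = load_image("https://example.com/canny_edges.png")  # hypothetical conditioning image

# Guess mode lets the ControlNet infer the content of the conditioning image
# even with an empty prompt and no classifier-free guidance.
image = pipe(
    prompt="", image=canny_image, guess_mode=True, guidance_scale=1.0
).images[0]
```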
-
- 13 Apr, 2023 3 commits
-
-
Joseph Coffland authored
Allow stable diffusion attend and excite pipeline to work with any size output image. Re: #2476, #2603
-
Patrick von Platen authored
Throw deprecation warning
-
YiYi Xu authored
-
- 12 Apr, 2023 9 commits
-
-
Patrick von Platen authored
-
Andranik Movsisyan authored
* fix progress bar issue in pipeline_text_to_video_zero.py. Copy scheduler after first backward * fix tensor loading in test_text_to_video_zero.py * make style && make quality
-
Ernie Chu authored
* Fix a bug in the panorama pipeline when not doing CFG * enhance code quality * apply formatting --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Sayak Paul authored
* fix: norm group test for UNet3D. * refactor text-to-video zero docs.
-
Sean Sube authored
* add support for prompt embeds to SD ONNX pipeline * fix up the pipeline copies * add prompt embeds param to other ONNX pipelines * fix up prompt embeds param for SD upscaling ONNX pipeline * add missing type annotations to ONNX pipes
-
Will Berman authored
* fix pipeline __setattr__ * add test --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Pedro Cuenca authored
* add use_memory_efficient params placeholder * add memory efficient attention jax * newline * forgot dot * Rename use_memory_efficient * Keep dtype last * Actually use key_chunk_size * Rename symbol * Apply style * Pass `use_memory_efficient_attention` in `from_pretrained` * Move JAX memory efficient attention to attention_flax * Simple test * style --------- Co-authored-by: muhammad_hanif <muhammad_hanif@sofcograha.co.id> Co-authored-by: MuhHanif <48muhhanif@gmail.com>
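A minimal sketch of passing the new flag through `from_pretrained` for the Flax pipeline; the checkpoint and revision below are assumptions:

```python
import jax.numpy as jnp
from diffusers import FlaxStableDiffusionPipeline

# `use_memory_efficient_attention=True` switches the Flax attention blocks to the
# chunked (memory-efficient) implementation added to attention_flax.
pipe, params = FlaxStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="bf16",
    dtype=jnp.bfloat16,
    use_memory_efficient_attention=True,
)
```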
-
Susung Hong authored
* Update index.mdx * Edit docs & add HF space link * Only change equation numbers in comments
-
Sayak Paul authored
* add: first draft for a better LoRA enabler. * make fix-copies. * feat: backward compatibility. * add: entry to the docs. * add: tests. * fix: docs. * fix: norm group test for UNet3D. * feat: add support for flat dicts. * add deprecation message instead of warning.
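A short sketch of what the consolidated LoRA loader looks like from the user side, assuming the entry point is `load_lora_weights` on the pipeline; the LoRA path, prompt, and scale are placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load LoRA weights for the UNet (and text encoder, if present) in one call.
# The path below stands in for any LoRA checkpoint in a supported format.
pipe.load_lora_weights("path/to/lora_checkpoint")

# The LoRA contribution can be scaled at inference time via cross_attention_kwargs.
image = pipe(
    "a pokemon with blue eyes", cross_attention_kwargs={"scale": 0.7}
).images[0]
```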
-
- 11 Apr, 2023 2 commits
-
-
Will Berman authored
add AttnAddedKVProcessor2_0 block
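A rough sketch of opting into the new processor, assuming an UnCLIP-style pipeline whose decoder UNet uses added-KV cross attention; the checkpoint is a placeholder:

```python
import torch
from diffusers import UnCLIPPipeline
from diffusers.models.attention_processor import AttnAddedKVProcessor2_0

# The added-KV processors are used by UnCLIP-style UNets; the 2_0 variant routes
# attention through torch 2.0's scaled_dot_product_attention.
pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
pipe.decoder.set_attn_processor(AttnAddedKVProcessor2_0())
```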
-
Will Berman authored
add group norm type to attention processor cross attention norm

This lets the cross attention norm use either a group norm block or a layer norm block. The group norm operates along the channels dimension and requires input shape (batch size, channels, *), whereas the layer norm with a single `normalized_shape` dimension only operates over the least significant dimension, i.e. (*, channels).

The channels we want to normalize are the hidden dimension of the encoder hidden states. By convention, the encoder hidden states are always passed as (batch size, sequence length, hidden states). This means the layer norm can operate on the tensor without modification, but the group norm requires flipping the last two dimensions to operate on (batch size, hidden states, sequence length).

All existing attention processors share the same logic, so we can consolidate it in a helper function (`prepare_encoder_hidden_states`, renamed to `norm_encoder_hidden_states`). Re: @patrickvonplaten, the `norm_cross`-defined check is moved outside `norm_encoder_hidden_states`, and the missing `attn.norm_cross` check is added.
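A rough sketch of the consolidated helper as described (shapes only; the real implementation lives on the attention class and may differ):

```python
import torch

def norm_encoder_hidden_states(encoder_hidden_states: torch.Tensor, norm_cross: torch.nn.Module) -> torch.Tensor:
    """Sketch of the helper described above, not the exact library code.

    `encoder_hidden_states` arrives as (batch, sequence, hidden). LayerNorm already
    normalizes the last dimension, but GroupNorm normalizes the channel dimension,
    so that dimension must be moved next to the batch dimension and back again.
    """
    if isinstance(norm_cross, torch.nn.LayerNorm):
        return norm_cross(encoder_hidden_states)
    elif isinstance(norm_cross, torch.nn.GroupNorm):
        # (batch, seq, hidden) -> (batch, hidden, seq) -> normalize -> transpose back
        encoder_hidden_states = encoder_hidden_states.transpose(1, 2)
        encoder_hidden_states = norm_cross(encoder_hidden_states)
        return encoder_hidden_states.transpose(1, 2)
    else:
        raise ValueError("norm_cross must be a LayerNorm or GroupNorm module")

# Quick shape check with hypothetical sizes.
hidden = torch.randn(2, 77, 768)
print(norm_encoder_hidden_states(hidden, torch.nn.LayerNorm(768)).shape)      # torch.Size([2, 77, 768])
print(norm_encoder_hidden_states(hidden, torch.nn.GroupNorm(32, 768)).shape)  # torch.Size([2, 77, 768])
```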
-