1. 28 Apr, 2023 1 commit
    • Diffedit Zero-Shot Inpainting Pipeline (#2837) · be0bfcec
      clarencechen authored
      * Update Pix2PixZero Auto-correlation Loss
      
      * Add Stable Diffusion DiffEdit pipeline
      
      * Add draft documentation and import code
      
      * Bugfixes and refactoring
      
      * Add option to not decode latents in the inversion process
      
      * Harmonize preprocessing
      
      * Revert "Update Pix2PixZero Auto-correlation Loss"
      
      This reverts commit b218062fed08d6cc164206d6cb852b2b7b00847a.
      
      * Update annotations
      
      * rename `compute_mask` to `generate_mask`
      
      * Update documentation
      
      * Update docs
      
      * Update Docs
      
      * Fix copy
      
      * Change shape of output latents to batch first
      
      * Update docs
      
      * Add first draft for tests
      
      * Bugfix and update tests
      
      * Add `cross_attention_kwargs` support for all pipeline methods
      
      * Fix Copies
      
      * Add support for PIL image latents
      
      Add support for mask broadcasting
      
      Update docs and tests
      
      Align `mask` argument to `mask_image`
      
      Remove height and width arguments
      
      * Enable MPS Tests
      
      * Move example docstrings
      
      * Fix test
      
      * Fix test
      
      * fix pipeline inheritance
      
      * Harmonize `prepare_image_latents` with StableDiffusionPix2PixZeroPipeline
      
      * Register modules set to `None` in config for `test_save_load_optional_components`
      
      * Move fixed logic to specific test class
      
      * Clean changes to other pipelines
      
      * Update new tests to coordinate with #2953
      
      * Update slow tests for better results
      
      * Safety to avoid potential problems with torch.inference_mode
      
      * Add reference in SD Pipeline Overview
      
      * Fix tests again
      
      * Enforce determinism in noise for generate_mask
      
      * Fix copies
      
      * Widen test tolerance for fp16 based on `test_stable_diffusion_upscale_pipeline_fp16`
      
      * Add LoraLoaderMixin and update `prepare_image_latents`
      
      * clean up repeat and reg
      
      * bugfix
      
      * Remove invalid args from docs
      
      Suppress spurious warning by repeating image before latent to mask gen
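
      The methods named in these commits combine into the three-step DiffEdit
      workflow: `generate_mask`, latent inversion via `invert`, and a final
      denoising call that takes `mask_image` and `image_latents`. A minimal
      usage sketch; the model ID, image URL, and prompts are illustrative:

      ```python
      import torch
      from diffusers import StableDiffusionDiffEditPipeline, DDIMScheduler, DDIMInverseScheduler
      from diffusers.utils import load_image

      pipe = StableDiffusionDiffEditPipeline.from_pretrained(
          "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
      ).to("cuda")
      pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
      pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)

      # hypothetical input image
      raw_image = load_image("https://example.com/fruit-bowl.png").resize((768, 768))

      # 1. generate a mask from the difference between source and target prompts
      mask_image = pipe.generate_mask(
          image=raw_image, source_prompt="a bowl of fruits", target_prompt="a bowl of pears"
      )
      # 2. invert the image into latents (stored batch-first per these commits)
      image_latents = pipe.invert(prompt="a bowl of fruits", image=raw_image).latents
      # 3. denoise with the target prompt, editing only the masked region
      image = pipe(
          prompt="a bowl of pears", mask_image=mask_image, image_latents=image_latents
      ).images[0]
      ```
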
  2. 27 Apr, 2023 4 commits
  3. 26 Apr, 2023 3 commits
  4. 25 Apr, 2023 2 commits
    • add model (#3230) · e51f19ae
      Patrick von Platen authored
      
      
      * add
      
      * clean
      
      * up
      
      * clean up more
      
      * fix more tests
      
      * Improve docs further
      
      * improve
      
      * more fixes docs
      
      * Improve docs more
      
      * Update src/diffusers/models/unet_2d_condition.py
      
      * fix
      
      * up
      
      * update doc links
      
      * make fix-copies
      
      * add safety checker and watermarker to stage 3 doc page code snippets
      
      * speed optimizations docs
      
      * memory optimization docs
      
      * make style
      
      * add watermarking snippets to doc string examples
      
      * make style
      
      * use pt_to_pil helper functions in doc strings
      
      * skip mps tests
      
      * Improve safety
      
      * make style
      
      * new logic
      
      * fix
      
      * fix bad onnx design
      
      * make new stable diffusion upscale pipeline model arguments optional
      
      * define has_nsfw_concept when non-pil output type
      
      * lowercase linked to notebook name
      
      ---------
      Co-authored-by: William Berman <WLBberman@gmail.com>
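
      Several commits above reference the `pt_to_pil` helper used in the doc
      strings. A small sketch of what it does, assuming a batch of images as
      torch tensors in [-1, 1] (the `output_type="pt"` convention):

      ```python
      import torch
      from diffusers.utils import pt_to_pil

      # fake (batch, channels, height, width) output in [-1, 1]
      images = torch.rand(1, 3, 64, 64) * 2 - 1
      pil_images = pt_to_pil(images)  # -> list of PIL.Image.Image
      pil_images[0].save("stage_output.png")
      ```
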
    • adding enable_vae_tiling and disable_vae_tiling functions (#3225) · e9edbfc2
      Isaac authored
      adding enable_vae_tiling and disable_vae_tiling functions
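
      A minimal sketch of the new toggles added here; the model ID and the
      output resolution are illustrative:

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      # decode the VAE in tiles so large images fit in memory
      pipe.enable_vae_tiling()
      image = pipe("a wide mountain panorama", height=1024, width=2048).images[0]

      # switch back to single-pass VAE processing
      pipe.disable_vae_tiling()
      ```
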
  5. 21 Apr, 2023 2 commits
  6. 20 Apr, 2023 1 commit
  7. 19 Apr, 2023 4 commits
    • Modified altdiffusion pipeline to support altdiffusion-m18 (#2993) · a4c91be7
      superhero-7 authored
      
      
      * Modified altdiffusion pipeline to support altdiffusion-m18
      
      * Modified altdiffusion pipeline to support altdiffusion-m18
      
      * Modified altdiffusion pipeline to support altdiffusion-m18
      
      * Modified altdiffusion pipeline to support altdiffusion-m18
      
      * Modified altdiffusion pipeline to support altdiffusion-m18
      
      * Modified altdiffusion pipeline to support altdiffusion-m18
      
      * Modified altdiffusion pipeline to support altdiffusion-m18
      
      ---------
      Co-authored-by: root <fulong_ye@163.com>
    • Update pipeline_stable_diffusion_inpaint_legacy.py (#2903) · 3becd368
      hwuebben authored
      
      
      * Update pipeline_stable_diffusion_inpaint_legacy.py
      
      * fix preprocessing of Pil images with adequate batch size
      
      * revert map
      
      * add tests
      
      * reformat
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * next try to fix the style
      
      * wth is this
      
      * Update testing_utils.py
      
      * Update testing_utils.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
    • add from_ckpt method as Mixin (#2318) · 86ecd4b7
      1lint authored
      
      
      * add mixin class for pipeline from original sd ckpt
      
      * Improve
      
      * make style
      
      * merge main into
      
      * Improve more
      
      * fix more
      
      * up
      
      * Apply suggestions from code review
      
      * finish docs
      
      * rename
      
      * make style
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
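
      With the mixin in place, a pipeline can be constructed straight from an
      original checkpoint file. A minimal sketch; the checkpoint path is
      hypothetical:

      ```python
      from diffusers import StableDiffusionPipeline

      # load from an original .ckpt/.safetensors checkpoint (hypothetical
      # local path; a link to a checkpoint file on the Hub also works)
      pipe = StableDiffusionPipeline.from_ckpt("./v1-5-pruned-emaonly.ckpt")
      ```
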
    • [ckpt loader] Allow loading the Inpaint and Img2Img pipelines, while loading a ckpt model (#2705) · bdeff4d6
      cmdr2 authored
      * [ckpt loader] Allow loading the Inpaint and Img2Img pipelines, while loading a ckpt model
      
      * Address review comment from PR
      
      * PyLint formatting
      
      * Some more pylint fixes, unrelated to our change
      
      * Another pylint fix
      
      * Styling fix
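
      Building on the mixin above, this change lets the ckpt loader produce the
      Inpaint and Img2Img variants as well. A sketch assuming the mixin is
      exposed on those classes; the checkpoint paths are hypothetical:

      ```python
      from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionInpaintPipeline

      img2img = StableDiffusionImg2ImgPipeline.from_ckpt("./v1-5-pruned-emaonly.ckpt")
      inpaint = StableDiffusionInpaintPipeline.from_ckpt("./sd-v1-5-inpainting.ckpt")
      ```
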
  8. 18 Apr, 2023 2 commits
  9. 17 Apr, 2023 4 commits
  10. 16 Apr, 2023 1 commit
  11. 14 Apr, 2023 2 commits
  12. 13 Apr, 2023 3 commits
  13. 12 Apr, 2023 9 commits
  14. 11 Apr, 2023 2 commits
    • Attn added kv processor torch 2.0 block (#3023) · ea39cd7e
      Will Berman authored
      add AttnAddedKVProcessor2_0 block
    • Attention processor cross attention norm group norm (#3021) · 98c5e5da
      Will Berman authored
      add group norm type to attention processor cross attention norm
      
      This lets the cross attention norm use either a group norm block or a
      layer norm block.
      
      The group norm operates along the channels dimension
      and requires input shape (batch size, channels, *), whereas the layer norm with a single
      `normalized_shape` dimension only operates over the least significant
      dimension, i.e. (*, channels).
      
      The channels we want to normalize are the hidden dimension of the encoder hidden states.
      
      By convention, the encoder hidden states are always passed as (batch size, sequence
      length, hidden states).
      
      This means the layer norm can operate on the tensor without modification, but the group
      norm requires flipping the last two dimensions to operate on (batch size, hidden states, sequence length).
      
      All existing attention processors will have the same logic and we can
      consolidate it in a helper function `prepare_encoder_hidden_states`
      
      prepare_encoder_hidden_states -> norm_encoder_hidden_states re: @patrickvonplaten
      
      move norm_cross defined check to outside norm_encoder_hidden_states
      
      add missing attn.norm_cross check
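
      A minimal sketch of the dimension handling described above; the hidden
      size, sequence length, and `num_groups` are illustrative:

      ```python
      import torch
      import torch.nn as nn

      batch, seq_len, hidden = 2, 77, 320
      encoder_hidden_states = torch.randn(batch, seq_len, hidden)

      # layer norm normalizes the last dimension, so (batch, seq, hidden) works as-is
      ln = nn.LayerNorm(hidden)
      ln_out = ln(encoder_hidden_states)

      # group norm expects channels in dim 1: flip the last two dims,
      # normalize (batch, hidden, seq), then flip back
      gn = nn.GroupNorm(num_groups=32, num_channels=hidden)
      gn_out = gn(encoder_hidden_states.transpose(1, 2)).transpose(1, 2)

      assert ln_out.shape == gn_out.shape == (batch, seq_len, hidden)
      ```
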