1. 01 May, 2023 1 commit
      Torch compile graph fix (#3286) · 0e82fb19
      Patrick von Platen authored
      * fix more
      
      * Fix more
      
      * fix more
      
      * Apply suggestions from code review
      
      * fix
      
      * make style
      
      * make fix-copies
      
      * fix
      
      * make sure torch compile
      
      * Clean
      
      * fix test
  2. 28 Apr, 2023 2 commits
      temp disable spectogram diffusion tests (#3278) · 384c83aa
      Will Berman authored
The note-seq package throws an error on import because the default installed version of IPython
is not compatible with Python 3.8, which we run in the CI.
      https://github.com/huggingface/diffusers/actions/runs/4830121056/jobs/8605954838#step:7:9
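The import failure described above is the usual motivation for an availability guard. A minimal sketch of such a guard, assuming a helper name like `is_note_seq_available` (illustrative, not the repo's actual utility):

```python
import importlib


def is_note_seq_available() -> bool:
    """Return True only if note-seq can actually be imported.

    Catching Exception (not just ImportError) matters here: the failure
    described above is raised *inside* note-seq's import, by an
    incompatible IPython, rather than by a missing package.
    """
    try:
        importlib.import_module("note_seq")
        return True
    except Exception:
        return False
```

Tests depending on the package can then be skipped when the helper returns False, instead of failing at collection time.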
      Diffedit Zero-Shot Inpainting Pipeline (#2837) · be0bfcec
      clarencechen authored
      * Update Pix2PixZero Auto-correlation Loss
      
      * Add Stable Diffusion DiffEdit pipeline
      
      * Add draft documentation and import code
      
      * Bugfixes and refactoring
      
      * Add option to not decode latents in the inversion process
      
      * Harmonize preprocessing
      
      * Revert "Update Pix2PixZero Auto-correlation Loss"
      
      This reverts commit b218062fed08d6cc164206d6cb852b2b7b00847a.
      
      * Update annotations
      
      * rename `compute_mask` to `generate_mask`
      
      * Update documentation
      
      * Update docs
      
      * Update Docs
      
      * Fix copy
      
      * Change shape of output latents to batch first
      
      * Update docs
      
      * Add first draft for tests
      
      * Bugfix and update tests
      
      * Add `cross_attention_kwargs` support for all pipeline methods
      
      * Fix Copies
      
      * Add support for PIL image latents
      
      Add support for mask broadcasting
      
      Update docs and tests
      
      Align `mask` argument to `mask_image`
      
      Remove height and width arguments
      
      * Enable MPS Tests
      
      * Move example docstrings
      
      * Fix test
      
      * Fix test
      
      * fix pipeline inheritance
      
      * Harmonize `prepare_image_latents` with StableDiffusionPix2PixZeroPipeline
      
      * Register modules set to `None` in config for `test_save_load_optional_components`
      
      * Move fixed logic to specific test class
      
      * Clean changes to other pipelines
      
      * Update new tests to coordinate with #2953
      
      * Update slow tests for better results
      
      * Safety to avoid potential problems with torch.inference_mode
      
      * Add reference in SD Pipeline Overview
      
      * Fix tests again
      
      * Enforce determinism in noise for generate_mask
      
      * Fix copies
      
      * Widen test tolerance for fp16 based on `test_stable_diffusion_upscale_pipeline_fp16`
      
      * Add LoraLoaderMixin and update `prepare_image_latents`
      
      * clean up repeat and reg
      
      * bugfix
      
      * Remove invalid args from docs
      
      Suppress spurious warning by repeating image before latent to mask gen
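The mask-broadcasting support mentioned above can be sketched as follows; the shapes and the `broadcast_mask` name are illustrative stand-ins, not the pipeline's actual code:

```python
import numpy as np


def broadcast_mask(mask: np.ndarray, latents: np.ndarray) -> np.ndarray:
    """Broadcast a single mask (1, H, W) over a batch of latents (B, C, H, W)."""
    b, c, h, w = latents.shape
    if mask.ndim == 3:
        # (1, H, W) -> (1, 1, H, W) so batch and channel dims can broadcast
        mask = mask[:, None]
    # broadcast_to creates a view with the target shape without copying data
    return np.broadcast_to(mask, (b, c, h, w))


mask = np.ones((1, 64, 64), dtype=np.float32)
latents = np.zeros((2, 4, 64, 64), dtype=np.float32)
assert broadcast_mask(mask, latents).shape == (2, 4, 64, 64)
```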
  3. 27 Apr, 2023 3 commits
  4. 26 Apr, 2023 1 commit
  5. 25 Apr, 2023 2 commits
      add model (#3230) · e51f19ae
      Patrick von Platen authored
      
      
      * add
      
      * clean
      
      * up
      
      * clean up more
      
      * fix more tests
      
      * Improve docs further
      
      * improve
      
      * more fixes docs
      
      * Improve docs more
      
      * Update src/diffusers/models/unet_2d_condition.py
      
      * fix
      
      * up
      
      * update doc links
      
      * make fix-copies
      
      * add safety checker and watermarker to stage 3 doc page code snippets
      
      * speed optimizations docs
      
      * memory optimization docs
      
      * make style
      
      * add watermarking snippets to doc string examples
      
      * make style
      
      * use pt_to_pil helper functions in doc strings
      
      * skip mps tests
      
      * Improve safety
      
      * make style
      
      * new logic
      
      * fix
      
      * fix bad onnx design
      
      * make new stable diffusion upscale pipeline model arguments optional
      
      * define has_nsfw_concept when non-pil output type
      
      * lowercase linked to notebook name
      
      ---------
Co-authored-by: William Berman <WLBberman@gmail.com>
      Fix issue in maybe_convert_prompt (#3188) · 0d196f9f
      pdoane authored
When the token used for textual inversion does not contain any special symbols (e.g. it is not surrounded by `<` and `>`), the tokenizer does not split the replacement tokens properly. Adding a space for the padding tokens fixes this.
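The splitting problem above can be sketched as a minimal prompt expansion; the function shape and token naming are illustrative, not diffusers' actual `maybe_convert_prompt` implementation:

```python
def maybe_convert_prompt(prompt: str, multi_vector_tokens: dict) -> str:
    """Expand a multi-vector textual-inversion token inside a prompt.

    Joining the sub-tokens with spaces is the key: a plain token like
    "cat" (no <> delimiters) would otherwise fuse with its padding
    tokens and the tokenizer could not split them apart.
    """
    for token, n_vectors in multi_vector_tokens.items():
        if token in prompt:
            replacement = " ".join(
                f"{token}_{i}" if i else token for i in range(n_vectors)
            )
            prompt = prompt.replace(token, replacement)
    return prompt


print(maybe_convert_prompt("a photo of cat", {"cat": 3}))
# a photo of cat cat_1 cat_2
```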
  6. 22 Apr, 2023 1 commit
  7. 21 Apr, 2023 1 commit
  8. 20 Apr, 2023 3 commits
  9. 19 Apr, 2023 2 commits
      Update pipeline_stable_diffusion_inpaint_legacy.py (#2903) · 3becd368
      hwuebben authored
      
      
      * Update pipeline_stable_diffusion_inpaint_legacy.py
      
      * fix preprocessing of Pil images with adequate batch size
      
      * revert map
      
      * add tests
      
      * reformat
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * next try to fix the style
      
      * wth is this
      
      * Update testing_utils.py
      
      * Update testing_utils.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      * Update test_stable_diffusion_inpaint_legacy.py
      
      ---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      add from_ckpt method as Mixin (#2318) · 86ecd4b7
      1lint authored
      
      
      * add mixin class for pipeline from original sd ckpt
      
      * Improve
      
      * make style
      
      * merge main into
      
      * Improve more
      
      * fix more
      
      * up
      
      * Apply suggestions from code review
      
      * finish docs
      
      * rename
      
      * make style
      
      ---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
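The mixin pattern named in the commit above can be sketched as a classmethod that constructs a pipeline from a single checkpoint path; the loading logic here is a placeholder, not diffusers' actual converter:

```python
class FromCkptMixin:
    """Minimal sketch of a `from_ckpt`-style mixin (illustrative only)."""

    @classmethod
    def from_ckpt(cls, checkpoint_path: str, **kwargs):
        # A real implementation would parse the original SD checkpoint
        # and map its weights to pipeline components; here we just
        # record the path as a stand-in.
        state = {"path": checkpoint_path}
        return cls(state, **kwargs)


class Pipeline(FromCkptMixin):
    def __init__(self, state):
        self.state = state


pipe = Pipeline.from_ckpt("model.ckpt")
assert pipe.state["path"] == "model.ckpt"
```

The mixin keeps checkpoint conversion out of each pipeline class: any pipeline that inherits it gains `from_ckpt` for free.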
  10. 18 Apr, 2023 1 commit
  11. 17 Apr, 2023 4 commits
  12. 14 Apr, 2023 2 commits
  13. 13 Apr, 2023 1 commit
  14. 12 Apr, 2023 10 commits
  15. 11 Apr, 2023 6 commits
      Attn added kv processor torch 2.0 block (#3023) · ea39cd7e
      Will Berman authored
      add AttnAddedKVProcessor2_0 block
      Fix typo and format BasicTransformerBlock attributes (#2953) · 52c4d32d
      Chanchana Sornsoontorn authored
* chore(train_controlnet): fix typo in logger message
      
* chore(models): refactor module order; make it match the calling order
      
When printing the BasicTransformerBlock to stdout, I think it's important that the attributes appear in the order they are called. Also, the "3. Feed Forward" comment previously made no sense: it should sit next to self.ff, but it was next to self.norm3 instead.
      
      * correct many tests
      
      * remove bogus file
      
      * make style
      
      * correct more tests
      
      * finish tests
      
      * fix one more
      
      * make style
      
      * make unclip deterministic
      
* chore(models/attention): reorganize comments in BasicTransformerBlock class
      
      ---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
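The ordering rationale in the commit above can be sketched without any framework: Python reports instance attributes in assignment order, so defining the modules in call order makes the printed block read like its forward pass. The class below is a stand-in mirroring BasicTransformerBlock's attribute names, not diffusers code:

```python
class Block:
    def __init__(self):
        # 1. Self-attention
        self.norm1, self.attn1 = "LayerNorm", "SelfAttention"
        # 2. Cross-attention
        self.norm2, self.attn2 = "LayerNorm", "CrossAttention"
        # 3. Feed-forward (norm3 defined next to ff, matching its comment)
        self.norm3, self.ff = "LayerNorm", "FeedForward"


# Attribute order follows assignment order (CPython 3.7+ dicts are ordered)
print(list(vars(Block())))
# ['norm1', 'attn1', 'norm2', 'attn2', 'norm3', 'ff']
```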
      add only cross attention to simple attention blocks (#3011) · c6180a31
      Will Berman authored
      * add only cross attention to simple attention blocks
      
      * add test for only_cross_attention re: @patrickvonplaten
      
      * mid_block_only_cross_attention better default
      
      allow mid_block_only_cross_attention to default to
      `only_cross_attention` when `only_cross_attention` is given
      as a single boolean
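The defaulting rule described above can be sketched as follows; the parameter names follow the commit message, but the resolver function itself is illustrative:

```python
def resolve_mid_block_flag(only_cross_attention, mid_block_only_cross_attention=None):
    """Pick a default for the mid block's only-cross-attention flag.

    When `only_cross_attention` is a single bool it applies to every
    block, so the mid block inherits it; a per-block list gives no
    obvious mid-block value, so we fall back to False.
    """
    if mid_block_only_cross_attention is None:
        if isinstance(only_cross_attention, bool):
            mid_block_only_cross_attention = only_cross_attention
        else:
            mid_block_only_cross_attention = False
    return mid_block_only_cross_attention


assert resolve_mid_block_flag(True) is True
assert resolve_mid_block_flag([True, False]) is False
assert resolve_mid_block_flag(True, False) is False  # explicit value wins
```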
      Fix invocation of some slow Flax tests (#3058) · e3095c5f
      Pedro Cuenca authored
      * Fix invocation of some slow tests.
      
      We use __call__ rather than pmapping the generation function ourselves
      because the number of static arguments is different now.
      
      * style
      config fixes (#3060) · 80bc0c0c
      Will Berman authored
      Fix config prints and save, load of pipelines (#2849) · 8b451eb6
      Patrick von Platen authored
      * [Config] Fix config prints and save, load
      
      * Only use potential nn.Modules for dtype and device
      
      * Correct vae image processor
      
      * make sure in_channels is not accessed directly
      
      * make sure in channels is only accessed via config
      
      * Make sure schedulers only access config attributes
      
      * Make sure to access config in SAG
      
      * Fix vae processor and make style
      
      * add tests
      
      * uP
      
      * make style
      
      * Fix more naming issues
      
      * Final fix with vae config
      
      * change more
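The access pattern these bullets converge on can be sketched minimally: hyperparameters like `in_channels` live on a `config` namespace rather than as direct attributes, so `model.config.in_channels` is the supported path. This is a bare stand-in, not diffusers' actual ConfigMixin:

```python
from types import SimpleNamespace


class Model:
    def __init__(self, in_channels=4, sample_size=64):
        # Register hyperparameters on the config object only, so callers
        # cannot accidentally rely on a direct `model.in_channels`.
        self.config = SimpleNamespace(
            in_channels=in_channels,
            sample_size=sample_size,
        )


model = Model()
assert model.config.in_channels == 4
assert not hasattr(model, "in_channels")  # no direct attribute access
```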