1. 06 Jul, 2023 2 commits
    • Improve SD XL (#3968) · 187ea539
      Patrick von Platen authored
      * improve sd xl
      
      * correct more
      
      * finish
      
      * make style
      
      * fix more
      187ea539
    • [SD-XL] Add new pipelines (#3859) · bc9a8cef
      Patrick von Platen authored
      
      
      * Add new text encoder
      
      * add transformers depth
      
      * More
      
      * Correct conversion script
      
      * Fix more
      
      * Fix more
      
      * Correct more
      
      * correct text encoder
      
      * Finish all
      
      * proof that it works in run local xl
      
      * clean up
      
      * Get refiner to work
      
      * Add red castle
      
      * Fix batch size
      
      * Improve pipelines more
      
      * Finish text2image tests
      
      * Add img2img test
      
      * Fix more
      
      * fix import
      
      * Fix embeddings for classic models (#3888)
      
      Fix embeddings for classic SD models.
      
      * Allow multiple prompts to be passed to the refiner (#3895)
      
      * finish more
      
      * Apply suggestions from code review
      
      * add watermarker
      
      * Model offload (#3889)
      
      * Model offload.
      
      * Model offload for refiner / img2img
      
      * Hardcode encoder offload on img2img vae encode
      
      Saves some GPU RAM in img2img / refiner tasks so it remains below 8 GB.
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * correct
      
      * fix
      
      * clean print
      
      * Update install warning for `invisible-watermark`
      
      * add: missing docstrings.
      
      * fix and simplify the usage example in img2img.
      
      * fix setup for watermarking.
      
      * Revert "fix setup for watermarking."
      
      This reverts commit 491bc9f5a640bbf46a97a8e52d6eff7e70eb8e4b.
      
      * fix: watermarking setup.
      
      * fix: op.
      
      * run make fix-copies.
      
      * make sure tests pass
      
      * improve convert
      
      * make tests pass
      
      * make tests pass
      
      * better error message
      
      * finish
      
      * finish
      
      * Fix final test
      
      ---------
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      bc9a8cef
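
A minimal sketch of how the base and refiner pipelines added above might be chained, with model offloading enabled as described in the commit. The checkpoint ids, prompt, and file name are illustrative assumptions, not part of the commit:

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Base text-to-image pass.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
base.enable_model_cpu_offload()  # model offload keeps peak GPU RAM low (the commit targets < 8 GB)

# The refiner runs as an img2img pass on the base output.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
)
refiner.enable_model_cpu_offload()

prompt = "a red castle on a hill, highly detailed"  # illustrative prompt
image = base(prompt=prompt).images[0]
image = refiner(prompt=prompt, image=image).images[0]
image.save("sdxl_red_castle.png")
```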
  2. 20 Jun, 2023 1 commit
  3. 19 Jun, 2023 1 commit
  4. 16 Jun, 2023 1 commit
  5. 15 Jun, 2023 1 commit
  6. 07 Jun, 2023 1 commit
  7. 05 Jun, 2023 1 commit
  8. 16 May, 2023 1 commit
    • Refactor controlnet and add img2img and inpaint (#3386) · 886575ee
      Patrick von Platen authored
      * refactor controlnet and add img2img and inpaint
      
      * First draft to get pipelines to work
      
      * make style
      
      * Fix more
      
      * Fix more
      
      * More tests
      
      * Fix more
      
      * Make inpainting work
      
      * make style and more tests
      
      * Apply suggestions from code review
      
      * up
      
      * make style
      
      * Fix imports
      
      * Fix more
      
      * Fix more
      
      * Improve examples
      
      * add test
      
      * Make sure import is correctly deprecated
      
      * Make sure everything works in compile mode
      
      * make sure authorship is correctly attributed
      886575ee
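
A rough sketch of the new ControlNet img2img variant introduced above, assuming a canny-conditioned ControlNet checkpoint and local image files (all ids, paths, and values here are illustrative):

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

init_image = load_image("input.png")         # image to transform (placeholder path)
control_image = load_image("canny_map.png")  # conditioning edge map (placeholder path)

image = pipe(
    prompt="a futuristic city at night",
    image=init_image,
    control_image=control_image,
    strength=0.8,  # how far to move away from the init image
).images[0]
image.save("controlnet_img2img.png")
```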
  9. 28 Apr, 2023 1 commit
    • Diffedit Zero-Shot Inpainting Pipeline (#2837) · be0bfcec
      clarencechen authored
      * Update Pix2PixZero Auto-correlation Loss
      
      * Add Stable Diffusion DiffEdit pipeline
      
      * Add draft documentation and import code
      
      * Bugfixes and refactoring
      
      * Add option to not decode latents in the inversion process
      
      * Harmonize preprocessing
      
      * Revert "Update Pix2PixZero Auto-correlation Loss"
      
      This reverts commit b218062fed08d6cc164206d6cb852b2b7b00847a.
      
      * Update annotations
      
      * rename `compute_mask` to `generate_mask`
      
      * Update documentation
      
      * Update docs
      
      * Update Docs
      
      * Fix copy
      
      * Change shape of output latents to batch first
      
      * Update docs
      
      * Add first draft for tests
      
      * Bugfix and update tests
      
      * Add `cross_attention_kwargs` support for all pipeline methods
      
      * Fix Copies
      
      * Add support for PIL image latents
      
      Add support for mask broadcasting
      
      Update docs and tests
      
      Align `mask` argument to `mask_image`
      
      Remove height and width arguments
      
      * Enable MPS Tests
      
      * Move example docstrings
      
      * Fix test
      
      * Fix test
      
      * fix pipeline inheritance
      
      * Harmonize `prepare_image_latents` with StableDiffusionPix2PixZeroPipeline
      
      * Register modules set to `None` in config for `test_save_load_optional_components`
      
      * Move fixed logic to specific test class
      
      * Clean changes to other pipelines
      
      * Update new tests to coordinate with #2953
      
      * Update slow tests for better results
      
      * Safety to avoid potential problems with torch.inference_mode
      
      * Add reference in SD Pipeline Overview
      
      * Fix tests again
      
      * Enforce determinism in noise for generate_mask
      
      * Fix copies
      
      * Widen test tolerance for fp16 based on `test_stable_diffusion_upscale_pipeline_fp16`
      
      * Add LoraLoaderMixin and update `prepare_image_latents`
      
      * clean up repeat and reg
      
      * bugfix
      
      * Remove invalid args from docs
      
      Suppress spurious warning by repeating image before latent to mask gen
      be0bfcec
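
A hedged sketch of the three-stage DiffEdit flow added above (mask generation, inversion, masked denoising). The checkpoint id, prompts, and input image are illustrative assumptions:

```python
import torch
from diffusers import (
    DDIMInverseScheduler,
    DDIMScheduler,
    StableDiffusionDiffEditPipeline,
)
from diffusers.utils import load_image

pipe = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)

image = load_image("fruit_bowl.png").resize((768, 768))  # placeholder input image

# 1) generate_mask highlights where the source and target prompts disagree.
mask = pipe.generate_mask(
    image=image, source_prompt="a bowl of fruits", target_prompt="a bowl of pears"
)
# 2) Invert the image into latents under the source prompt.
inv_latents = pipe.invert(prompt="a bowl of fruits", image=image).latents
# 3) Denoise with the target prompt, editing only inside the mask.
edited = pipe(
    prompt="a bowl of pears", mask_image=mask, image_latents=inv_latents
).images[0]
edited.save("pears.png")
```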
  10. 25 Apr, 2023 1 commit
  11. 19 Apr, 2023 1 commit
  12. 14 Apr, 2023 1 commit
  13. 12 Apr, 2023 1 commit
  14. 31 Mar, 2023 2 commits
  15. 24 Mar, 2023 1 commit
    • Add ModelEditing pipeline (#2721) · 37a44bb2
      Bahjat Kawar authored
      
      
      * TIME first commit
      
      * styling.
      
      * styling 2.
      
      * fixes; tests
      
      * apply styling and doc fix.
      
      * remove sups.
      
      * fixes
      
      * remove temp file
      
      * move augmentations to const
      
      * added doc entry
      
      * code quality
      
      * customize augmentations
      
      * quality
      
      * quality
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      37a44bb2
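
A minimal sketch of the TIME model-editing pipeline added above, assuming an `edit_model(source_prompt, destination_prompt)` entry point and an illustrative checkpoint id and prompts:

```python
import torch
from diffusers import StableDiffusionModelEditingPipeline

pipe = StableDiffusionModelEditingPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Rewire the model's notion of "roses" so it produces blue roses by default.
pipe.edit_model("A pack of roses", "A pack of blue roses")

image = pipe("A field of roses").images[0]
image.save("edited_roses.png")
```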
  16. 23 Mar, 2023 2 commits
  17. 09 Mar, 2023 1 commit
  18. 03 Mar, 2023 2 commits
  19. 02 Mar, 2023 2 commits
    • 8k Stable Diffusion with tiled VAE (#1441) · 80148484
      Ilmari Heikkinen authored
      
      
      * Tiled VAE for high-res text2img and img2img
      
      * vae tiling, fix formatting
      
      * enable_vae_tiling API and tests
      
      * tiled vae docs, disable tiling for images that would have only one tile
      
      * tiled vae tests, use channels_last memory format
      
      * tiled vae tests, use smaller test image
      
      * tiled vae tests, remove tiling test from fast tests
      
      * up
      
      * up
      
      * make style
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * make style
      
      * improve naming
      
      * finish
      
      * apply suggestions
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * up
      
      ---------
      Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      80148484
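
A short sketch of the `enable_vae_tiling` API added above, which decodes the latents tile by tile so very large images fit in GPU memory; per the commit, tiling is disabled automatically when the image would consist of a single tile. The checkpoint id, prompt, and resolution are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Decode latents in tiles so high-resolution outputs stay within GPU memory.
pipe.enable_vae_tiling()

image = pipe(
    "a panoramic mountain landscape at dawn", width=3072, height=1024
).images[0]
image.save("panorama.png")
```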
    • Add a ControlNet model & pipeline (#2407) · 8dfff7c0
      Takuma Mori authored
      
      
      * add scaffold
      - copied convert_controlnet_to_diffusers.py from
      convert_original_stable_diffusion_to_diffusers.py
      
      * Add support to load ControlNet (WIP)
      - this makes a Missing Key error on ControlNetModel
      
      * Update to convert ControlNet without error msg
      - init impl for StableDiffusionControlNetPipeline
      - init impl for ControlNetModel
      
      * cleanup of commented out
      
      * split create_controlnet_diffusers_config()
      from create_unet_diffusers_config()
      
      - add config: hint_channels
      
      * Add input_hint_block, input_zero_conv and
      middle_block_out
      - this makes missing key error on loading model
      
      * add unet_2d_blocks_controlnet.py
      - copied from unet_2d_blocks.py as impl CrossAttnDownBlock2D,DownBlock2D
      - this makes missing key error on loading model
      
      * Add loading for input_hint_block, zero_convs
      and middle_block_out
      
      - this makes no error message on model loading
      
      * Copy from UNet2DConditionalModel except __init__
      
      * Add ultra primitive test for ControlNetModel
      inference
      
      * Support ControlNetModel inference
      - without exceptions
      
      * copy forward() from UNet2DConditionModel
      
      * Impl ControlledUNet2DConditionModel inference
      - test_controlled_unet_inference passed
      
      * Frozen weight & biases for training
      
      * Minimized version of ControlNet/ControlledUnet
      - test_modules_controllnet.py passed
      
      * make style
      
      * Add support model loading for minimized ver
      
      * Remove all previous version files
      
      * from_pretrained and inference test passed
      
      * copied from pipeline_stable_diffusion.py
      except `__init__()`
      
      * Impl pipeline, pixel match test (almost) passed.
      
      * make style
      
      * make fix-copies
      
      * Fix to add import ControlNet blocks
      for `make fix-copies`
      
      * Remove einops dependency
      
      * Support np.ndarray, PIL.Image for controlnet_hint
      
      * set default config file as lllyasviel's
      
      * Add support grayscale (hw) numpy array
      
      * Add and update docstrings
      
      * add control_net.mdx
      
      * add control_net.mdx to toctree
      
      * Update copyright year
      
      * Fix to add PIL.Image RGB->BGR conversion
      - thanks @Mystfit
      
      * make fix-copies
      
      * add basic fast test for controlnet
      
      * add slow test for controlnet/unet
      
      * Ignore down/up_block len check on ControlNet
      
      * add a copy from test_stable_diffusion.py
      
      * Accept controlnet_hint is None
      
      * merge pipeline_stable_diffusion.py diff
      
      * Update class name to SDControlNetPipeline
      
      * make style
      
      * Baseline fast test almost passed (w long desc)
      
      * still needs investigation.
      
      The following didn't pass, as described in the TODO comment:
      - test_stable_diffusion_long_prompt
      - test_stable_diffusion_no_safety_checker
      
      The following didn't pass, same as in stable_diffusion_pipeline:
      - test_attention_slicing_forward_pass
      - test_inference_batch_single_identical
      - test_xformers_attention_forwardGenerator_pass
      These seem to come from calculation accuracy.
      
      * Add note comment related vae_scale_factor
      
      * add test_stable_diffusion_controlnet_ddim
      
      * add assertion for vae_scale_factor != 8
      
      * slow test of pipeline almost passed
      Failed: test_stable_diffusion_pipeline_with_model_offloading
      - ImportError: `enable_model_offload` requires `accelerate v0.17.0` or higher
      
      but currently latest version == 0.16.0
      
      * test_stable_diffusion_long_prompt passed
      
      * test_stable_diffusion_no_safety_checker passed
      
      - due to its model size, move to slow test
      
      * remove PoC test files
      
      * fix num_of_image, prompt length issue and add test
      
      * add support List[PIL.Image] for controlnet_hint
      
      * wip
      
      * all slow test passed
      
      * make style
      
      * update for slow test
      
      * RGB(PIL)->BGR(ctrlnet) conversion
      
      * fixes
      
      * remove manual num_images_per_prompt test
      
      * add document
      
      * add `image` argument docstring
      
      * make style
      
      * Add line to correct conversion
      
      * add controlnet_conditioning_scale (aka control_scales
      strength)
      
      * rgb channel ordering by default
      
      * image batching logic
      
      * Add control image descriptions for each checkpoint
      
      * Only save controlnet model in conversion script
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
      
      typo
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * add generated image example
      
      * a depth mask -> a depth map
      
      * rename control_net.mdx to controlnet.mdx
      
      * fix toc title
      
      * add ControlNet abstract and link
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
      Co-authored-by: dqueue <dbyqin@gmail.com>
      
      * remove controlnet constructor arguments re: @patrickvonplaten
      
      * [integration tests] test canny
      
      * test_canny fixes
      
      * [integration tests] test_depth
      
      * [integration tests] test_hed
      
      * [integration tests] test_mlsd
      
      * add channel order config to controlnet
      
      * [integration tests] test normal
      
      * [integration tests] test_openpose test_scribble
      
      * change height and width to default to conditioning image
      
      * [integration tests] test seg
      
      * style
      
      * test_depth fix
      
      * [integration tests] size fixes
      
      * [integration tests] cpu offloading
      
      * style
      
      * generalize controlnet embedding
      
      * fix conversion script
      
      * Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Style adapted to the documentation of pix2pix
      
      * merge main by hand
      
      * style
      
      * [docs] controlling generation doc nits
      
      * correct some things
      
      * add: controlnetmodel to autodoc.
      
      * finish docs
      
      * finish
      
      * finish 2
      
      * correct images
      
      * finish controlnet
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * uP
      
      * upload model
      
      * up
      
      * up
      
      ---------
      Co-authored-by: William Berman <WLBberman@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      Co-authored-by: dqueue <dbyqin@gmail.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      8dfff7c0
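
A hedged sketch of the `ControlNetModel` + `StableDiffusionControlNetPipeline` usage introduced above, conditioning generation on a canny edge map. The checkpoint ids, input image, and OpenCV-based edge extraction are illustrative assumptions:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Build a canny edge map to use as the conditioning image.
source = load_image("reference.png")  # placeholder path
edges = cv2.Canny(np.array(source), 100, 200)
canny = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # CPU offloading, as exercised in the integration tests

image = pipe(
    "a bird perched on a branch",
    image=canny,
    controlnet_conditioning_scale=1.0,  # strength of the control signal
).images[0]
image.save("controlnet_canny.png")
```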
  20. 01 Mar, 2023 1 commit
  21. 28 Feb, 2023 1 commit
  22. 21 Feb, 2023 1 commit
  23. 20 Feb, 2023 3 commits
  24. 17 Feb, 2023 4 commits
  25. 16 Feb, 2023 3 commits
    • Attend and excite 2 (#2369) · 2e7a2865
      YiYi Xu authored
      
      
      * attend and excite pipeline
      
      * update
      
      update docstring example
      
      remove visualization
      
      remove the base class attention control
      
      remove dependency on stable diffusion pipeline
      
      always apply gaussian filter with default setting
      
      remove run_standard_sd argument
      
      hardcode attention_res and scale_range (related to step size)
      
      Update docs/source/en/api/pipelines/stable_diffusion/attend_and_excite.mdx
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      Update tests/pipelines/stable_diffusion_2/test_stable_diffusion_attend_and_excite.py
      Co-authored-by: Will Berman <wlbberman@gmail.com>
      
      Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
      Co-authored-by: Will Berman <wlbberman@gmail.com>
      
      Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
      Co-authored-by: Will Berman <wlbberman@gmail.com>
      
      revert test_float16_inference
      
      revert change to the batch related tests
      
      fix test_float16_inference
      
      handle batch
      
      remove the deprecation message
      
      remove None check, step_size
      
      remove debugging logging
      
      add slow test
      
      indices_to_alter -> indices
      
      add check_input
      
      * skip mps
      
      * style
      
      * Apply suggestions from code review
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * indices -> token_indices
      ---------
      Co-authored-by: evin <evinpinarornek@gmail.com>
      Co-authored-by: yiyixuxu <yixu310@gmail,com>
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      2e7a2865
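
A hedged sketch of the Attend-and-Excite pipeline added above, which maximizes cross-attention on user-chosen subject tokens via `token_indices`. The checkpoint id, prompt, indices, and step count are illustrative:

```python
import torch
from diffusers import StableDiffusionAttendAndExcitePipeline

pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "a cat and a frog"
print(pipe.get_indices(prompt))  # inspect the tokenization to pick the subject tokens

# token_indices point at the subject tokens whose attention should be "excited".
image = pipe(
    prompt,
    token_indices=[2, 5],   # illustrative indices for "cat" and "frog"
    max_iter_to_alter=25,   # how many denoising steps apply the latent update
).images[0]
image.save("cat_and_frog.png")
```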
    • Add Self-Attention-Guided (SAG) Stable Diffusion pipeline (#2193) · fa35750d
      Susung Hong authored
      
      
      * Add Stable Diffusion w/ Self-Attention Guidance
      
      * Modify __init__.py
      
      * Register attention storing processor
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Editing default value
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update dummy_torch_and_transformers_objects.py
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Create test_stable_diffusion_sag.py
      
      * Create self_attention_guidance.py
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update test_stable_diffusion_sag.py
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Rename self_attention_guidance.py to self_attention_guidance.mdx
      
      * Update self_attention_guidance.mdx
      
      * Update self_attention_guidance.mdx
      
      * Update _toctree.yml
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Fixing order
      
      * Update pipeline_stable_diffusion_sag.py
      
      * fixing import order
      
      * fix order
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Naming change
      
      * Noting pred_x0
      
      * Adding some fast tests
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update test_stable_diffusion_sag.py
      
      * Update test_stable_diffusion_sag.py
      
      * Update test_stable_diffusion_sag.py
      
      * Update docs/source/en/api/pipelines/stable_diffusion/self_attention_guidance.mdx
      
      * implement gaussian_blur
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update pipeline_stable_diffusion_sag.py
      
      * fix tests
      
      * Update pipeline_stable_diffusion_sag.py
      
      * Update pipeline_stable_diffusion_sag.py
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Will Berman <wlbberman@gmail.com>
      fa35750d
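
A short sketch of the Self-Attention Guidance pipeline added above; `sag_scale` controls the extra guidance term and is applied alongside the usual classifier-free `guidance_scale`. The checkpoint id, prompt, and values are illustrative:

```python
import torch
from diffusers import StableDiffusionSAGPipeline

pipe = StableDiffusionSAGPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a photo of an astronaut riding a horse on mars",
    guidance_scale=7.5,  # classifier-free guidance
    sag_scale=0.75,      # self-attention guidance; 0.0 disables it
).images[0]
image.save("sag_astronaut.png")
```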
    • [Pipelines] Adds pix2pix zero (#2334) · fd3d5502
      Sayak Paul authored
      * add: support for BLIP generation.
      
      * add: support for editing synthetic images.
      
      * remove unnecessary comments.
      
      * add inits and run make fix-copies.
      
      * version change of diffusers.
      
      * fix: condition for loading the captioner.
      
      * default conditions_input_image to False.
      
      * guidance_amount -> cross_attention_guidance_amount
      
      * fix inputs to check_inputs()
      
      * fix: attribute.
      
      * fix: prepare_attention_mask() call.
      
      * debugging.
      
      * better placement of references.
      
      * remove torch.no_grad() decorations.
      
      * put torch.no_grad() context before the first denoising loop.
      
      * detach() latents before decoding them.
      
      * put decoding in a torch.no_grad() context.
      
      * add reconstructed image for debugging.
      
      * no_grad()
      
      * apply formatting.
      
      * address one-off suggestions from the draft PR.
      
      * back to torch.no_grad() and add more elaborate comments.
      
      * refactor prepare_unet() per Patrick's suggestions.
      
      * more elaborate description for .
      
      * formatting.
      
      * add docstrings to the methods specific to pix2pix zero.
      
      * suspecting a redundant noise prediction.
      
      * needed for gradient computation chain.
      
      * less hacks.
      
      * fix: attention mask handling within the processor.
      
      * remove attention reference map computation.
      
      * fix: cross attn args.
      
      * fix: processor.
      
      * store attention maps.
      
      * fix: attention processor.
      
      * update docs and better treatment to xa args.
      
      * update the final noise computation call.
      
      * change xa args call.
      
      * remove xa args option from the pipeline.
      
      * add: docs.
      
      * first test.
      
      * fix: url call.
      
      * fix: argument call.
      
      * remove image conditioning for now.
      
      * 🚨 add: fast tests.
      
      * explicit placement of the xa attn weights.
      
      * add: slow tests 🐢
      
      * fix: tests.
      
      * edited direction embedding should be on the same device as prompt_embeds.
      
      * debugging message.
      
      * debugging.
      
      * add pix2pix zero pipeline for a non-deterministic test.
      
      * debugging.
      
      * remove debugging message.
      
      * make caption generation _
      
      * address comments (part I).
      
      * address PR comments (part II)
      
      * fix: DDPM test assertion.
      
      * refactor doc.
      
      * address PR comments (part III).
      
      * fix: type annotation for the scheduler.
      
      * apply styling.
      
      * skip_mps and add note on embeddings in the docs.
      fd3d5502
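
A hedged sketch of the Pix2Pix Zero pipeline added above, which steers an edit from a source concept toward a target concept using precomputed caption embeddings and cross-attention guidance. The checkpoint id, embedding files, prompt, and values are illustrative placeholders:

```python
import torch
from diffusers import StableDiffusionPix2PixZeroPipeline

pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Direction embeddings for the source and target concepts (placeholder files),
# e.g. averaged text embeddings of many "cat" / "dog" captions.
source_embeds = torch.load("cat_embeds.pt")
target_embeds = torch.load("dog_embeds.pt")

image = pipe(
    prompt="a high resolution painting of a cat in the style of van gogh",
    source_embeds=source_embeds,
    target_embeds=target_embeds,
    cross_attention_guidance_amount=0.15,  # renamed from guidance_amount in this PR
).images[0]
image.save("cat_to_dog.png")
```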
  26. 14 Feb, 2023 1 commit
  27. 07 Feb, 2023 1 commit
    • Stable Diffusion Latent Upscaler (#2059) · 1051ca81
      YiYi Xu authored
      
      
      * Modify UNet2DConditionModel
      
      - allow skipping mid_block
      
      - adding a norm_group_size argument so that we can set the `num_groups` for group norm using `num_channels//norm_group_size`
      
      - allow user to set dimension for the timestep embedding (`time_embed_dim`)
      
      - the kernel_size for `conv_in` and `conv_out` is now configurable
      
      - add random fourier feature layer (`GaussianFourierProjection`) for `time_proj`
      
      - allow the user to add the time and class embeddings before passing them through the projection layer together - `time_embedding(t_emb + class_label)`
      
      - added 2 arguments `attn1_types` and `attn2_types`
      
        * currently we have the argument `only_cross_attention`: when it's set to `True`, the
      `BasicTransformerBlock` gets 2 cross-attention layers, otherwise we
      get a self-attention followed by a cross-attention; in the k-upscaler, we need blocks that include just one cross-attention, or self-attention -> cross-attention;
      so I added `attn1_types` and `attn2_types` to the unet's argument list to let the user specify the attention types for the 2 positions in each block; note that I still kept
      the `only_cross_attention` argument for the unet for easy configuration, but it is converted to `attn1_type` and `attn2_type` when passed down to the down blocks
      
      - the position of downsample layer and upsample layer is now configurable
      
      - in the k-upscaler unet, there is only one skip connection per up/down block (instead of per layer as in the stable diffusion unet), added `skip_freq = "block"` to support
      this use case
      
      - if the user passes attention_mask to the unet, it will prepare the mask and pass a flag to the cross attention processor to skip the `prepare_attention_mask` step
      inside the cross attention block
      
      add up/down blocks for k-upscaler
      
      modify CrossAttention class
      
      - make the `dropout` layer in `to_out` optional
      
      - `use_conv_proj` - use conv instead of linear for all projection layers (i.e. `to_q`, `to_k`, `to_v`, `to_out`) whenever possible. note that when it's used to do cross
      attention, to_k and to_v have to be linear because the `encoder_hidden_states` is not 2d
      
      - `cross_attention_norm` - add an optional layernorm on encoder_hidden_states
      
      - `attention_dropout`: add an optional dropout on attention score
      
      adapt BasicTransformerBlock
      
      - add an ada groupnorm layer  to conditioning attention input with timestep embedding
      
      - allow skipping the FeedForward layer in between the attentions
      
      - replaced the only_cross_attention argument with attn1_type and attn2_type for more flexible configuration
      
      update timestep embedding: add new act_fn  gelu and an optional act_2
      
      modified ResnetBlock2D
      
      - refactored with AdaGroupNorm class (the timestep scale shift normalization)
      
      - add `mid_channel` argument - allow the first conv to have a different output dimension from the second conv
      
      - add option to use AdaGroupNorm on the input instead of groupnorm
      
      - add options to add a dropout layer after each conv
      
      - allow user to set the bias in conv_shortcut (needed for k-upscaler)
      
      - add gelu
      
      adding conversion script for k-upscaler unet
      
      add pipeline
      
      * fix attention mask
      
      * fix a typo
      
      * fix a bug
      
      * make sure model can be used with GPU
      
      * make pipeline work with fp16
      
      * fix an error in BasicTransformerBlock
      
      * make style
      
      * fix typo
      
      * some more fixes
      
      * uP
      
      * up
      
      * correct more
      
      * some clean-up
      
      * clean time proj
      
      * up
      
      * uP
      
      * more changes
      
      * remove the upcast_attention=True from unet config
      
      * remove attn1_types, attn2_types etc
      
      * fix
      
      * revert incorrect changes up/down samplers
      
      * make style
      
      * remove outdated files
      
      * Apply suggestions from code review
      
      * attention refactor
      
      * refactor cross attention
      
      * Apply suggestions from code review
      
      * update
      
      * up
      
      * update
      
      * Apply suggestions from code review
      
      * finish
      
      * Update src/diffusers/models/cross_attention.py
      
      * more fixes
      
      * up
      
      * up
      
      * up
      
      * finish
      
      * more corrections of conversion state
      
      * act_2 -> act_2_fn
      
      * remove dropout_after_conv from ResnetBlock2D
      
      * make style
      
      * simplify KAttentionBlock
      
      * add fast test for latent upscaler pipeline
      
      * add slow test
      
      * slow test fp16
      
      * make style
      
      * add doc string for pipeline_stable_diffusion_latent_upscale
      
      * add api doc page for latent upscaler pipeline
      
      * deprecate attention mask
      
      * clean up embeddings
      
      * simplify resnet
      
      * up
      
      * clean up resnet
      
      * up
      
      * correct more
      
      * up
      
      * up
      
      * improve a bit more
      
      * correct more
      
      * more clean-ups
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * add docstrings for new unet config
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * # Copied from
      
      * encode the image if not latent
      
      * remove force casting vae to fp32
      
      * fix
      
      * add comments about preconditioning parameters from k-diffusion paper
      
      * attn1_type, attn2_type -> add_self_attention
      
      * clean up get_down_block and get_up_block
      
      * fix
      
      * fixed a typo(?) in ada group norm
      
      * update slice attention processor for cross attention
      
      * update slice
      
      * fix fast test
      
      * update the checkpoint
      
      * finish tests
      
      * fix-copies
      
      * fix-copy for modeling_text_unet.py
      
      * make style
      
      * make style
      
      * fix f-string
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * fix import
      
      * correct changes
      
      * fix resnet
      
      * make fix-copies
      
      * correct euler scheduler
      
      * add missing #copied from for preprocess
      
      * revert
      
      * fix
      
      * fix copies
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/models/cross_attention.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * clean up conversion script
      
      * KDownsample2d,KUpsample2d -> KDownsample2D,KUpsample2D
      
      * more
      
      * Update src/diffusers/models/unet_2d_condition.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * remove prepare_extra_step_kwargs
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * fix a typo in timestep embedding
      
      * remove num_image_per_prompt
      
      * fix fasttest
      
      * make style + fix-copies
      
      * fix
      
      * fix xformer test
      
      * fix style
      
      * doc string
      
      * make style
      
      * fix-copies
      
      * docstring for time_embedding_norm
      
      * make style
      
      * final finishes
      
      * make fix-copies
      
      * fix tests
      
      ---------
      Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      1051ca81
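
A hedged sketch of the latent upscaler pipeline added above, which consumes the base pipeline's latents directly. The checkpoint ids, prompt, and step count are illustrative assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionLatentUpscalePipeline

base = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of a lighthouse at sunset"
# Keep the base output in latent space and hand it straight to the upscaler.
low_res_latents = base(prompt, output_type="latent").images
image = upscaler(
    prompt=prompt, image=low_res_latents, num_inference_steps=20
).images[0]
image.save("lighthouse_2x.png")
```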
  28. 25 Jan, 2023 1 commit