1. 27 Mar, 2023 1 commit
  2. 21 Mar, 2023 1 commit
  3. 15 Mar, 2023 2 commits
  4. 13 Mar, 2023 1 commit
    • Add support for Multi-ControlNet to StableDiffusionControlNetPipeline (#2627) · d9b8adc4
      Takuma Mori authored
      
      
      * support for List[ControlNetModel] on init()
      
      * Add support for multiple ControlNetCondition
      
      * rename conditioning_scale to scale
      
      * scaling bugfix
      
      * Manually merge `MultiControlNet` #2621
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * cleanups
      - don't expose ControlNetCondition
      - move scaling to ControlNetModel
      
      * correct `make style` error
      
      * remove ControlNetCondition to reduce code diff
      
      * refactoring image/cond_scale
      
      * add explanation for `images`
      
      * Add docstrings
      
      * all fast tests passed
      
      * Add a slow test
      
      * nit
      
      * Apply suggestions from code review
      
      * small precision fix
      
      * nits
      
      MultiControlNet -> MultiControlNetModel - matches the existing naming a bit
      more closely
      
      MultiControlNetModel inherits from the model utils class - no need to
      re-write the fp16 test
      
      Skip tests that save the multi-controlnet pipeline - clearer than changing
      the test body
      
      Don't auto-batch the number of input images to the number of controlnets.
      We generally prefer to require the user to pass the expected number of
      inputs. This simplifies the processing code a bit more.
      
      Use the existing image pre-processing code a bit more. We can rely on the
      existing image pre-processing code and keep the inference loop a bit
      simpler.
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: William Berman <WLBberman@gmail.com>
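      For context, a minimal usage sketch of the multi-ControlNet interface this commit adds (the checkpoint names, conditioning images, and scales below are illustrative assumptions, not taken from the commit):
      
      ```python
      import torch
      from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
      from diffusers.utils import load_image
      
      # Hypothetical conditioning images, one per ControlNet.
      canny_image = load_image("https://example.com/canny.png")
      pose_image = load_image("https://example.com/pose.png")
      
      # Passing a list of ControlNetModels wraps them in a MultiControlNetModel.
      controlnets = [
          ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
          ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16),
      ]
      pipe = StableDiffusionControlNetPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
      ).to("cuda")
      
      # As the notes above say, images are not auto-batched: pass exactly one
      # conditioning image per controlnet, and optionally one scale per controlnet.
      image = pipe(
          "a photo of a sunlit living room",
          image=[canny_image, pose_image],
          controlnet_conditioning_scale=[1.0, 0.8],
      ).images[0]
      ```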
  5. 02 Mar, 2023 1 commit
    • Add a ControlNet model & pipeline (#2407) · 8dfff7c0
      Takuma Mori authored
      
      
      * add scaffold
      - copied convert_controlnet_to_diffusers.py from
      convert_original_stable_diffusion_to_diffusers.py
      
      * Add support to load ControlNet (WIP)
      - this raises a Missing Key error on ControlNetModel
      
      * Update to convert ControlNet without error msg
      - init impl for StableDiffusionControlNetPipeline
      - init impl for ControlNetModel
      
      * clean up commented-out code
      
      * split create_controlnet_diffusers_config()
      from create_unet_diffusers_config()
      
      - add config: hint_channels
      
      * Add input_hint_block, input_zero_conv and
      middle_block_out
      - this raises a missing-key error on model loading
      
      * add unet_2d_blocks_controlnet.py
      - copied from unet_2d_blocks.py to implement CrossAttnDownBlock2D, DownBlock2D
      - this raises a missing-key error on model loading
      
      * Add loading for input_hint_block, zero_convs
      and middle_block_out
      
      - the model now loads without error messages
      
      * Copy from UNet2DConditionModel except __init__
      
      * Add ultra primitive test for ControlNetModel
      inference
      
      * Support ControlNetModel inference
      - without exceptions
      
      * copy forward() from UNet2DConditionModel
      
      * Impl ControlledUNet2DConditionModel inference
      - test_controlled_unet_inference passed
      
      * Freeze weights & biases for training
      
      * Minimized version of ControlNet/ControlledUnet
      - test_modules_controllnet.py passed
      
      * make style
      
      * Add support model loading for minimized ver
      
      * Remove all previous version files
      
      * from_pretrained and inference test passed
      
      * copied from pipeline_stable_diffusion.py
      except `__init__()`
      
      * Impl pipeline, pixel match test (almost) passed.
      
      * make style
      
      * make fix-copies
      
      * Fix to add import ControlNet blocks
      for `make fix-copies`
      
      * Remove einops dependency
      
      * Support np.ndarray, PIL.Image for controlnet_hint
      
      * set default config file as lllyasviel's
      
      * Add support for grayscale (h, w) numpy arrays
      
      * Add and update docstrings
      
      * add control_net.mdx
      
      * add control_net.mdx to toctree
      
      * Update copyright year
      
      * Fix to add PIL.Image RGB->BGR conversion
      - thanks @Mystfit
      
      * make fix-copies
      
      * add basic fast test for controlnet
      
      * add slow test for controlnet/unet
      
      * Ignore down/up_block len check on ControlNet
      
      * add a copy from test_stable_diffusion.py
      
      * Accept controlnet_hint=None
      
      * merge pipeline_stable_diffusion.py diff
      
      * Update class name to SDControlNetPipeline
      
      * make style
      
      * Baseline fast test almost passed (with a long description)
      
      * still needs investigation.
      
      The following didn't pass, as described in the TODO comment:
      - test_stable_diffusion_long_prompt
      - test_stable_diffusion_no_safety_checker
      
      The following didn't pass, same as for stable_diffusion_pipeline:
      - test_attention_slicing_forward_pass
      - test_inference_batch_single_identical
      - test_xformers_attention_forwardGenerator_pass
      these seem to come from calculation accuracy.
      
      * Add note comment related to vae_scale_factor
      
      * add test_stable_diffusion_controlnet_ddim
      
      * add assertion for vae_scale_factor != 8
      
      * slow test of pipeline almost passed
      Failed: test_stable_diffusion_pipeline_with_model_offloading
      - ImportError: `enable_model_offload` requires `accelerate v0.17.0` or higher,
      but the latest version is currently 0.16.0
      
      * test_stable_diffusion_long_prompt passed
      
      * test_stable_diffusion_no_safety_checker passed
      
      - due to its model size, moved to the slow tests
      
      * remove PoC test files
      
      * fix num_of_image / prompt length issue and add test
      
      * add support List[PIL.Image] for controlnet_hint
      
      * wip
      
      * all slow test passed
      
      * make style
      
      * update for slow test
      
      * RGB(PIL)->BGR(ctrlnet) conversion
      
      * fixes
      
      * remove manual num_images_per_prompt test
      
      * add document
      
      * add `image` argument docstring
      
      * make style
      
      * Add line to correct conversion
      
      * add controlnet_conditioning_scale (aka control_scales
      strength)
      
      * rgb channel ordering by default
      
      * image batching logic
      
      * Add control image descriptions for each checkpoint
      
      * Only save controlnet model in conversion script
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
      
      typo
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * add generated image example
      
      * a depth mask -> a depth map
      
      * rename control_net.mdx to controlnet.mdx
      
      * fix toc title
      
      * add ControlNet abstract and link
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
      Co-authored-by: dqueue <dbyqin@gmail.com>
      
      * remove controlnet constructor arguments re: @patrickvonplaten
      
      * [integration tests] test canny
      
      * test_canny fixes
      
      * [integration tests] test_depth
      
      * [integration tests] test_hed
      
      * [integration tests] test_mlsd
      
      * add channel order config to controlnet
      
      * [integration tests] test normal
      
      * [integration tests] test_openpose test_scribble
      
      * change height and width to default to conditioning image
      
      * [integration tests] test seg
      
      * style
      
      * test_depth fix
      
      * [integration tests] size fixes
      
      * [integration tests] cpu offloading
      
      * style
      
      * generalize controlnet embedding
      
      * fix conversion script
      
      * Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Style adapted to the documentation of pix2pix
      
      * merge main by hand
      
      * style
      
      * [docs] controlling generation doc nits
      
      * correct some things
      
      * add: controlnetmodel to autodoc.
      
      * finish docs
      
      * finish
      
      * finish 2
      
      * correct images
      
      * finish controlnet
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * uP
      
      * upload model
      
      * up
      
      * up
      
      ---------
      Co-authored-by: William Berman <WLBberman@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      Co-authored-by: dqueue <dbyqin@gmail.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
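      As a usage sketch of the new model and pipeline (the canny checkpoint, base model id, and input edge map are assumptions for illustration):
      
      ```python
      import torch
      from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
      from diffusers.utils import load_image
      
      canny_image = load_image("https://example.com/canny_edges.png")  # hypothetical edge map
      
      controlnet = ControlNetModel.from_pretrained(
          "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
      )
      pipe = StableDiffusionControlNetPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
      ).to("cuda")
      
      # Height and width default to the size of the conditioning image (see the
      # "change height and width to default to conditioning image" entry above).
      image = pipe("bird", image=canny_image).images[0]
      image.save("bird_canny.png")
      ```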
  6. 01 Mar, 2023 1 commit
  7. 14 Feb, 2023 2 commits
  8. 07 Feb, 2023 1 commit
    • Stable Diffusion Latent Upscaler (#2059) · 1051ca81
      YiYi Xu authored
      
      
      * Modify UNet2DConditionModel
      
      - allow skipping mid_block
      
      - adding a norm_group_size argument so that we can set the `num_groups` for group norm using `num_channels//norm_group_size`
      
      - allow user to set dimension for the timestep embedding (`time_embed_dim`)
      
      - the kernel_size for `conv_in` and `conv_out` is now configurable
      
      - add random fourier feature layer (`GaussianFourierProjection`) for `time_proj`
      
      - allow user to add the time and class embeddings before passing through the
      projection layer together - `time_embedding(t_emb + class_label)`
      
      - added 2 arguments `attn1_types` and `attn2_types`
      
        * currently we have the argument `only_cross_attention`: when it's set to
      `True`, the `BasicTransformerBlock` gets 2 cross-attention layers; otherwise we
      get a self-attention followed by a cross-attention. In the k-upscaler we need
      blocks that include just one cross-attention, or a self-attention ->
      cross-attention, so I added `attn1_types` and `attn2_types` to the unet's
      argument list to let the user specify the attention types for the 2 positions
      in each block. Note that I still kept the `only_cross_attention` argument for
      the unet for easy configuration, but it is converted to `attn1_type` and
      `attn2_type` when passed down to the down blocks.
      
      - the position of downsample layer and upsample layer is now configurable
      
      - in the k-upscaler unet, there is only one skip connection per up/down block
      (instead of one per layer as in the stable diffusion unet); added
      `skip_freq = "block"` to support this use case
      
      - if the user passes an attention_mask to the unet, it will prepare the mask
      and pass a flag to the cross attention processor to skip the
      `prepare_attention_mask` step inside the cross attention block
      
      add up/down blocks for k-upscaler
      
      modify CrossAttention class
      
      - make the `dropout` layer in `to_out` optional
      
      - `use_conv_proj` - use conv instead of linear for all projection layers
      (i.e. `to_q`, `to_k`, `to_v`, `to_out`) whenever possible. Note that when it's
      used for cross attention, `to_k` and `to_v` have to be linear because
      `encoder_hidden_states` is not 2d
      
      - `cross_attention_norm` - add an optional layernorm on encoder_hidden_states
      
      - `attention_dropout`: add an optional dropout on attention score
      
      adapt BasicTransformerBlock
      
      - add an ada groupnorm layer to condition the attention input with the
      timestep embedding
      
      - allow skipping the FeedForward layer in between the attentions
      
      - replaced the only_cross_attention argument with attn1_type and attn2_type for more flexible configuration
      
      update timestep embedding: add new act_fn gelu and an optional act_2
      
      modified ResnetBlock2D
      
      - refactored with AdaGroupNorm class (the timestep scale shift normalization)
      
      - add `mid_channel` argument - allow the first conv to have a different output dimension from the second conv
      
      - add option to use input AdaGroupNorm on the input instead of groupnorm
      
      - add options to add a dropout layer after each conv
      
      - allow user to set the bias in conv_shortcut (needed for k-upscaler)
      
      - add gelu
      
      adding conversion script for k-upscaler unet
      
      add pipeline
      
      * fix attention mask
      
      * fix a typo
      
      * fix a bug
      
      * make sure model can be used with GPU
      
      * make pipeline work with fp16
      
      * fix an error in BasicTransformerBlock
      
      * make style
      
      * fix typo
      
      * some more fixes
      
      * uP
      
      * up
      
      * correct more
      
      * some clean-up
      
      * clean time proj
      
      * up
      
      * uP
      
      * more changes
      
      * remove the upcast_attention=True from unet config
      
      * remove attn1_types, attn2_types etc
      
      * fix
      
      * revert incorrect changes up/down samplers
      
      * make style
      
      * remove outdated files
      
      * Apply suggestions from code review
      
      * attention refactor
      
      * refactor cross attention
      
      * Apply suggestions from code review
      
      * update
      
      * up
      
      * update
      
      * Apply suggestions from code review
      
      * finish
      
      * Update src/diffusers/models/cross_attention.py
      
      * more fixes
      
      * up
      
      * up
      
      * up
      
      * finish
      
      * more corrections of conversion state
      
      * act_2 -> act_2_fn
      
      * remove dropout_after_conv from ResnetBlock2D
      
      * make style
      
      * simplify KAttentionBlock
      
      * add fast test for latent upscaler pipeline
      
      * add slow test
      
      * slow test fp16
      
      * make style
      
      * add doc string for pipeline_stable_diffusion_latent_upscale
      
      * add api doc page for latent upscaler pipeline
      
      * deprecate attention mask
      
      * clean up embeddings
      
      * simplify resnet
      
      * up
      
      * clean up resnet
      
      * up
      
      * correct more
      
      * up
      
      * up
      
      * improve a bit more
      
      * correct more
      
      * more clean-ups
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * add docstrings for new unet config
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * # Copied from
      
      * encode the image if not latent
      
      * remove force casting vae to fp32
      
      * fix
      
      * add comments about preconditioning parameters from k-diffusion paper
      
      * attn1_type, attn2_type -> add_self_attention
      
      * clean up get_down_block and get_up_block
      
      * fix
      
      * fixed a typo(?) in ada group norm
      
      * update slice attention processor for cross attention
      
      * update slice
      
      * fix fast test
      
      * update the checkpoint
      
      * finish tests
      
      * fix-copies
      
      * fix-copy for modeling_text_unet.py
      
      * make style
      
      * make style
      
      * fix f-string
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * fix import
      
      * correct changes
      
      * fix resnet
      
      * make fix-copies
      
      * correct euler scheduler
      
      * add missing #copied from for preprocess
      
      * revert
      
      * fix
      
      * fix copies
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/models/cross_attention.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * clean up conversion script
      
      * KDownsample2d,KUpsample2d -> KDownsample2D,KUpsample2D
      
      * more
      
      * Update src/diffusers/models/unet_2d_condition.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * remove prepare_extra_step_kwargs
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * fix a typo in timestep embedding
      
      * remove num_image_per_prompt
      
      * fix fasttest
      
      * make style + fix-copies
      
      * fix
      
      * fix xformer test
      
      * fix style
      
      * doc string
      
      * make style
      
      * fix-copies
      
      * docstring for time_embedding_norm
      
      * make style
      
      * final finishes
      
      * make fix-copies
      
      * fix tests
      
      ---------
      Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
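      A short sketch of how the new latent upscaler chains onto a base pipeline (the model ids follow the checkpoints mentioned during the PR and are assumptions here):
      
      ```python
      import torch
      from diffusers import StableDiffusionPipeline, StableDiffusionLatentUpscalePipeline
      
      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")
      upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
          "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
      ).to("cuda")
      
      prompt = "a photo of an astronaut riding a horse"
      # Keep the base pipeline's output in latent space...
      low_res_latents = pipe(prompt, output_type="latent").images
      # ...and hand the latents to the upscaler; it also accepts regular images
      # and encodes them first ("encode the image if not latent" above).
      image = upscaler(
          prompt=prompt, image=low_res_latents, num_inference_steps=20, guidance_scale=0
      ).images[0]
      ```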
  9. 27 Jan, 2023 1 commit
  10. 26 Jan, 2023 1 commit
    • Allow `UNet2DModel` to use arbitrary class embeddings (#2080) · 915a5636
      Pedro Cuenca authored
      * Allow `UNet2DModel` to use arbitrary class embeddings.
      
      We can currently use class conditioning in `UNet2DConditionModel`, but
      not in `UNet2DModel`. However, `UNet2DConditionModel` requires text
      conditioning too, which is unrelated to other types of conditioning.
      This commit makes it possible for `UNet2DModel` to be conditioned on
      entities other than timesteps. This is useful for training /
      research purposes. We can currently train models to perform
      unconditional image generation or text-to-image generation, but it's not
      straightforward to train a model to perform class-conditioned image
      generation, if text conditioning is not required.
      
      We could potentially use `UNet2DConditionModel` for class-conditioning
      without text embeddings by using down/up blocks without
      cross-conditioning. However:
      - The mid block currently requires cross attention.
      - We are required to provide `encoder_hidden_states` to `forward`.
      
      * Style
      
      * Align class conditioning, add docstring for `num_class_embeds`.
      
      * Copy docstring to versatile_diffusion UNetFlatConditionModel
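      A minimal sketch of the class-conditioning this enables (the sizes and number of classes are arbitrary assumptions):
      
      ```python
      import torch
      from diffusers import UNet2DModel
      
      model = UNet2DModel(
          sample_size=32,
          in_channels=3,
          out_channels=3,
          num_class_embeds=10,  # new: learn an embedding table for 10 classes
      )
      
      sample = torch.randn(4, 3, 32, 32)
      timesteps = torch.tensor([10, 10, 10, 10])
      class_labels = torch.tensor([0, 1, 2, 3])  # conditioning on entities other than timesteps
      noise_pred = model(sample, timesteps, class_labels=class_labels).sample
      ```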
  11. 18 Jan, 2023 1 commit
  12. 30 Dec, 2022 1 commit
  13. 20 Dec, 2022 1 commit
  14. 18 Dec, 2022 1 commit
    • kakaobrain unCLIP (#1428) · 2dcf64b7
      Will Berman authored
      
      
      * [wip] attention block updates
      
      * [wip] unCLIP unet decoder and super res
      
      * [wip] unCLIP prior transformer
      
      * [wip] scheduler changes
      
      * [wip] text proj utility class
      
      * [wip] UnCLIPPipeline
      
      * [wip] kakaobrain unCLIP convert script
      
      * [unCLIP pipeline] fixes re: @patrickvonplaten
      
      remove callbacks
      
      move denoising loops into call function
      
      * UNCLIPScheduler re: @patrickvonplaten
      
      Revert changes to DDPMScheduler. Make UNCLIPScheduler, a modified
      DDPM scheduler with changes to support karlo
      
      * mask -> attention_mask re: @patrickvonplaten
      
      * [DDPMScheduler] remove leftover change
      
      * [docs] PriorTransformer
      
      * [docs] UNet2DConditionModel and UNet2DModel
      
      * [nit] UNCLIPScheduler -> UnCLIPScheduler
      
      matches existing unclip naming better
      
      * [docs] SchedulingUnCLIP
      
      * [docs] UnCLIPTextProjModel
      
      * refactor
      
      * finish licenses
      
      * rename all to attention_mask and prep in models
      
      * more renaming
      
      * don't expose unused configs
      
      * final renaming fixes
      
      * remove x attn mask when not necessary
      
      * configure kakao script to use new class embedding config
      
      * fix copies
      
      * [tests] UnCLIPScheduler
      
      * finish x attn
      
      * finish
      
      * remove more
      
      * rename condition blocks
      
      * clean more
      
      * Apply suggestions from code review
      
      * up
      
      * fix
      
      * [tests] UnCLIPPipelineFastTests
      
      * remove unused imports
      
      * [tests] UnCLIPPipelineIntegrationTests
      
      * correct
      
      * make style
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
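      A usage sketch of the resulting pipeline (the converted karlo repo id is an assumption):
      
      ```python
      import torch
      from diffusers import UnCLIPPipeline
      
      pipe = UnCLIPPipeline.from_pretrained(
          "kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16
      ).to("cuda")
      
      # Prior -> decoder -> super resolution all run inside a single __call__,
      # since the denoising loops were moved into the call function (see above).
      image = pipe("a high-resolution photograph of a red panda").images[0]
      image.save("red_panda.png")
      ```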
  15. 07 Dec, 2022 2 commits
  16. 05 Dec, 2022 2 commits
  17. 02 Dec, 2022 3 commits
  18. 24 Nov, 2022 2 commits
    • Support SD2 attention slicing (#1397) · d50e3217
      Anton Lozhkov authored
      * Support SD2 attention slicing
      
      * Support SD2 attention slicing
      
      * Add more copies
      
      * Use attn_num_head_channels in blocks
      
      * fix-copies
      
      * Update tests
      
      * fix imports
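      A sketch of the toggle on an SD2 pipeline (the repo id is an assumption):
      
      ```python
      import torch
      from diffusers import StableDiffusionPipeline
      
      pipe = StableDiffusionPipeline.from_pretrained(
          "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
      ).to("cuda")
      # Attention is computed in slices over the heads, trading a little speed
      # for a much lower peak memory footprint.
      pipe.enable_attention_slicing()
      image = pipe("a serene mountain lake at dawn").images[0]
      ```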
    • Adapt UNet2D for super-resolution (#1385) · cecdd8bd
      Suraj Patil authored
      * allow disabling self attention
      
      * add class_embedding
      
      * fix copies
      
      * fix condition
      
      * fix copies
      
      * do_self_attention -> only_cross_attention
      
      * fix copies
      
      * num_classes -> num_class_embeds
      
      * fix default value
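      A hedged sketch of the two new flags, written against the current UNet2DConditionModel API; the values are arbitrary and every other argument keeps its default:
      
      ```python
      from diffusers import UNet2DConditionModel
      
      unet = UNet2DConditionModel(
          sample_size=64,
          only_cross_attention=True,  # disable self-attention in the transformer blocks
          num_class_embeds=1000,      # `num_classes` was renamed to this
      )
      ```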
  19. 23 Nov, 2022 3 commits
    • update unet2d (#1376) · f07a16e0
      Suraj Patil authored
      * boom boom
      
      * remove duplicate arg
      
      * add use_linear_proj arg
      
      * fix copies
      
      * style
      
      * add fast tests
      
      * use_linear_proj -> use_linear_projection
    • [Versatile Diffusion] Add versatile diffusion model (#1283) · 2625fb59
      Patrick von Platen authored
      
      
      * up
      
      * convert dual unet
      
      * revert dual attn
      
      * adapt for vd-official
      
      * test the full pipeline
      
      * mixed inference
      
      * mixed inference for text2img
      
      * add image prompting
      
      * fix clip norm
      
      * split text2img and img2img
      
      * fix format
      
      * refactor text2img
      
      * mega pipeline
      
      * add optimus
      
      * refactor image var
      
      * wip text_unet
      
      * text unet end to end
      
      * update tests
      
      * reshape
      
      * fix image to text
      
      * add some first docs
      
      * dual guided pipeline
      
      * fix token ratio
      
      * propose change
      
      * dual transformer as a native module
      
      * DualTransformer(nn.Module)
      
      * DualTransformer(nn.Module)
      
      * correct unconditional image
      
      * save-load with mega pipeline
      
      * remove image to text
      
      * up
      
      * uP
      
      * fix
      
      * up
      
      * final fix
      
      * remove_unused_weights
      
      * test updates
      
      * save progress
      
      * uP
      
      * fix dual prompts
      
      * some fixes
      
      * finish
      
      * style
      
      * finish renaming
      
      * up
      
      * fix
      
      * fix
      
      * fix
      
      * finish
      Co-authored-by: anton-l <anton@huggingface.co>
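      A sketch of the three modes the mega pipeline exposes (the shi-labs repo id and the input image are assumptions):
      
      ```python
      import torch
      from diffusers import VersatileDiffusionPipeline
      from diffusers.utils import load_image
      
      pipe = VersatileDiffusionPipeline.from_pretrained(
          "shi-labs/versatile-diffusion", torch_dtype=torch.float16
      ).to("cuda")
      prompt_image = load_image("https://example.com/input.jpg")  # hypothetical input
      
      text2img = pipe.text_to_image("a red car").images[0]
      variation = pipe.image_variation(prompt_image).images[0]
      dual = pipe.dual_guided(
          prompt="a red car", image=prompt_image, text_to_image_strength=0.75
      ).images[0]
      ```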
    • Fix using non-square images with UNet2DModel and DDIM/DDPM pipelines (#1289) · 8fd3a743
      Penn authored
      
      
      * fix non square images with UNet2DModel and DDIM/DDPM pipelines
      
      * fix unet_2d `sample_size` docstring
      
      * update pipeline tests for unet uncond
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
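      A sketch of what the fix allows: `sample_size` may be a (height, width) tuple and the unconditional pipelines respect it (the sizes are arbitrary, and the randomly initialized weights will of course produce noise):
      
      ```python
      from diffusers import DDPMPipeline, DDPMScheduler, UNet2DModel
      
      # sample_size may now be non-square
      unet = UNet2DModel(sample_size=(96, 64), in_channels=3, out_channels=3)
      pipe = DDPMPipeline(unet=unet, scheduler=DDPMScheduler())
      image = pipe(num_inference_steps=10).images[0]  # 96 (H) x 64 (W) output
      ```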
  20. 16 Nov, 2022 1 commit
  21. 14 Nov, 2022 1 commit
  22. 02 Nov, 2022 1 commit
    • Up to 2x speedup on GPUs using memory efficient attention (#532) · 98c42134
      MatthieuTPHR authored
      
      
      * 2x speedup using memory efficient attention
      
      * remove einops dependency
      
      * Swap K, M in op instantiation
      
      * Simplify code, remove unnecessary maybe_init call and function, remove unused self.scale parameter
      
      * make xformers a soft dependency
      
      * remove one-liner functions
      
      * change one letter variable to appropriate names
      
      * Remove Env variable dependency, remove MemoryEfficientCrossAttention class and use enable_xformers_memory_efficient_attention method
      
      * Add memory efficient attention toggle to img2img and inpaint pipelines
      
      * Clearer management of xformers' availability
      
      * update optimizations markdown to add info about memory efficient attention
      
      * add benchmarks for TITAN RTX
      
      * More detailed explanation of how the mem eff benchmarks were run
      
      * Removing autocast from optimization markdown
      
      * import_utils: import torch only if is available
      Co-authored-by: Nouamane Tazi <nouamane98@gmail.com>
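      A sketch of the toggle-based API this PR lands (instead of an env variable); it requires the optional xformers package and a supported GPU, and the model id is an assumption:
      
      ```python
      import torch
      from diffusers import StableDiffusionPipeline
      
      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")
      pipe.enable_xformers_memory_efficient_attention()
      image = pipe("an astronaut riding a horse").images[0]
      # pipe.disable_xformers_memory_efficient_attention() restores the default attention.
      ```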
  23. 25 Oct, 2022 1 commit
  24. 12 Oct, 2022 1 commit
  25. 10 Oct, 2022 1 commit
  26. 05 Oct, 2022 1 commit
  27. 04 Oct, 2022 1 commit
  28. 30 Sep, 2022 2 commits
    • Allow resolutions that are not multiples of 64 (#505) · a784be2e
      Josh Achiam authored
      
      
      * Allow resolutions that are not multiples of 64
      
      * ran black
      
      * fix bug
      
      * add test
      
      * more explanation
      
      * more comments
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
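      After this change, height and width only need to be divisible by 8 (the VAE scale factor) rather than by 64. A sketch (model id and sizes are arbitrary assumptions):
      
      ```python
      import torch
      from diffusers import StableDiffusionPipeline
      
      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")
      # 488 and 680 are multiples of 8 but not of 64
      image = pipe("a watercolor landscape", height=488, width=680).images[0]
      ```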
    • Optimize Stable Diffusion (#371) · 9ebaea54
      Nouamane Tazi authored
      * initial commit
      
      * make UNet stream capturable
      
      * try to fix noise_pred value
      
      * remove cuda graph and keep NB
      
      * non blocking unet with PNDMScheduler
      
      * make timesteps np arrays for pndm scheduler
      because lists don't get formatted to tensors in `self.set_format`
      
      * make max async in pndm
      
      * use channel last format in unet
      
      * avoid moving timesteps device in each unet call
      
      * avoid memcpy op in `get_timestep_embedding`
      
      * add `channels_last` kwarg to `DiffusionPipeline.from_pretrained`
      
      * update TODO
      
      * replace `channels_last` kwarg with `memory_format` for more generality
      
      * revert the channels_last changes to leave it for another PR
      
      * remove non_blocking when moving input ids to device
      
      * remove blocking from all .to() operations at beginning of pipeline
      
      * fix merging
      
      * fix merging
      
      * model can run in other precisions without autocast
      
      * attn refactoring
      
      * Revert "attn refactoring"
      
      This reverts commit 0c70c0e189cd2c4d8768274c9fcf5b940ee310fb.
      
      * remove restriction to run conv_norm in fp32
      
      * use `baddbmm` instead of `matmul` in attention for better perf
      
      * removing all reshapes to test perf
      
      * Revert "removing all reshapes to test perf"
      
      This reverts commit 006ccb8a8c6bc7eb7e512392e692a29d9b1553cd.
      
      * add shapes comments
      
      * hardcode what's needed for jitting
      
      * Revert "hardcore whats needed for jitting"
      
      This reverts commit 2fa9c698eae2890ac5f8e367ca80532ecf94df9a.
      
      * Revert "remove restriction to run conv_norm in fp32"
      
      This reverts commit cec592890c32da3d1b78d38b49e4307aedf459b9.
      
      * revert using baddmm in attention's forward
      
      * cleanup comment
      
      * remove restriction to run conv_norm in fp32. no quality loss was noticed
      
      This reverts commit cc9bc1339c998ebe9e7d733f910c6d72d9792213.
      
      * add more optimizations techniques to docs
      
      * Revert "add shapes comments"
      
      This reverts commit 31c58eadb8892f95478cdf05229adf678678c5f4.
      
      * apply suggestions
      
      * make quality
      
      * apply suggestions
      
      * styling
      
      * `scheduler.timesteps` are now arrays so we don't need .to()
      
      * remove useless .type()
      
      * use mean instead of max in `test_stable_diffusion_inpaint_pipeline_k_lms`
      
      * move scheduler timesteps to correct device if tensors
      
      * add device to `set_timesteps` in LMSD scheduler
      
      * `self.scheduler.set_timesteps` now uses device arg for schedulers that accept it
      
      * quick fix
      
      * styling
      
      * remove kwargs from schedulers `set_timesteps`
      
      * revert to using max in K-LMS inpaint pipeline test
      
      * Revert "`self.scheduler.set_timesteps` now uses device arg for schedulers that accept it"
      
      This reverts commit 00d5a51e5c20d8d445c8664407ef29608106d899.
      
      * move timesteps to correct device before loop in SD pipeline
      
      * apply previous fix to other SD pipelines
      
      * UNet now accepts tensor timesteps even on the wrong device, to avoid errors
      - it shouldn't affect performance if timesteps are already on the correct device
      - it does slow down performance if they're on the wrong device
      
      * fix pipeline when timesteps are arrays with strides
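      A sketch of the end state of this PR (the model id is an assumption): half-precision weights run directly with no `torch.autocast()` context, and the LMS scheduler's `set_timesteps` accepts a device so its timestep tensors live on the GPU:
      
      ```python
      import torch
      from diffusers import LMSDiscreteScheduler, StableDiffusionPipeline
      
      pipe = StableDiffusionPipeline.from_pretrained(
          "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
      ).to("cuda")
      
      pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
      pipe.scheduler.set_timesteps(50, device="cuda")  # also done internally by the pipeline
      
      image = pipe("a futuristic cityscape", num_inference_steps=50).images[0]
      ```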
  29. 22 Sep, 2022 1 commit
    • [UNet2DConditionModel] add gradient checkpointing (#461) · e7120bae
      Suraj Patil authored
      * add grad ckpt to downsample blocks
      
      * make it work
      
      * don't pass gradient_checkpointing to upsample block
      
      * add tests for UNet2DConditionModel
      
      * add test_gradient_checkpointing
      
      * add gradient_checkpointing for up and down blocks
      
      * add functions to enable and disable grad ckpt
      
      * remove the forward argument
      
      * better naming
      
      * make supports_gradient_checkpointing private
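      A sketch of the new enable/disable functions (the checkpoint id and shapes are assumptions):
      
      ```python
      import torch
      from diffusers import UNet2DConditionModel
      
      unet = UNet2DConditionModel.from_pretrained(
          "CompVis/stable-diffusion-v1-4", subfolder="unet"
      )
      unet.enable_gradient_checkpointing()  # re-compute activations in backward to save memory
      unet.train()
      
      sample = torch.randn(1, 4, 64, 64)
      encoder_hidden_states = torch.randn(1, 77, 768)
      loss = unet(sample, timestep=10, encoder_hidden_states=encoder_hidden_states).sample.mean()
      loss.backward()
      
      unet.disable_gradient_checkpointing()
      ```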
  30. 15 Sep, 2022 1 commit
    • UNet Flax with FlaxModelMixin (#502) · d8b0e4f4
      Pedro Cuenca authored
      
      
      * First UNet Flax modeling blocks.
      
      Mimic the structure of the PyTorch files.
      The model classes themselves need work, depending on what we do about
      configuration and initialization.
      
      * Remove FlaxUNet2DConfig class.
      
      * ignore_for_config non-config args.
      
      * Implement `FlaxModelMixin`
      
      * Use new mixins for Flax UNet.
      
      For some reason the configuration is not correctly applied; the
      signature of the `__init__` method does not contain all the parameters
      by the time it's inspected in `extract_init_dict`.
      
      * Import `FlaxUNet2DConditionModel` if flax is available.
      
      * Rm unused method `framework`
      
      * Update src/diffusers/modeling_flax_utils.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * Indicate types in flax.struct.dataclass as pointed out by @mishig25
      Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
      
      * Fix typo in transformer block.
      
      * make style
      
      * some more changes
      
      * make style
      
      * Add comment
      
      * Update src/diffusers/modeling_flax_utils.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Rm unneeded comment
      
      * Update docstrings
      
      * correct ignore kwargs
      
      * make style
      
      * Update docstring examples
      
      * Make style
      
      * Style: remove empty line.
      
      * Apply style (after upgrading black from pinned version)
      
      * Remove some commented code and unused imports.
      
      * Add init_weights (not yet in use until #513).
      
      * Trickle down deterministic to blocks.
      
      * Rename q, k, v according to the latest PyTorch version.
      
      Note that weights were exported with the old names, so we need to be
      careful.
      
      * Flax UNet docstrings, default props as in PyTorch.
      
      * Fix minor typos in PyTorch docstrings.
      
      * Use FlaxUNet2DConditionOutput as output from UNet.
      
      * make style
      Co-authored-by: Mishig Davaadorj <dmishig@gmail.com>
      Co-authored-by: Mishig Davaadorj <mishig.davaadorj@coloradocollege.edu>
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
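      A sketch of loading the Flax UNet through the new `FlaxModelMixin` (the repo id and shapes are assumptions; Flax `from_pretrained` returns a `(model, params)` tuple):
      
      ```python
      import jax.numpy as jnp
      from diffusers import FlaxUNet2DConditionModel
      
      model, params = FlaxUNet2DConditionModel.from_pretrained(
          "CompVis/stable-diffusion-v1-4", subfolder="unet"
      )
      
      sample = jnp.zeros((1, 4, 64, 64))           # NCHW, transposed internally
      timesteps = jnp.ones((1,), dtype=jnp.int32)
      encoder_hidden_states = jnp.zeros((1, 77, 768))
      noise_pred = model.apply(
          {"params": params}, sample, timesteps, encoder_hidden_states
      ).sample
      ```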