1. 14 Mar, 2024 1 commit
    • [`Tests`] Update a deprecated parameter in test files and fix several typos (#7277) · 5d848ec0
      M. Tolga Cangöz authored
      * Add properties and `IPAdapterTesterMixin` tests for `StableDiffusionPanoramaPipeline`
      
      * Fix variable name typo and update comments
      
      * Update deprecated `output_type="numpy"` to "np" in test files
      
      * Discard changes to src/diffusers/pipelines/stable_diffusion_panorama/pipeline_stable_diffusion_panorama.py
      
      * Update test_stable_diffusion_panorama.py
      
      * Update numbers in README.md
      
      * Update get_guidance_scale_embedding method to use timesteps instead of w
      
      * Update number of checkpoints in README.md
      
      * Add type hints and fix var name
      
      * Fix PyTorch's convention for inplace functions
      
      * Fix a typo
      
      * Revert "Fix PyTorch's convention for inplace functions"
      
      This reverts commit 74350cf65b2c9aa77f08bec7937d7a8b13edb509.
      
      * Fix typos
      
      * Indent
      
      * Refactor get_guidance_scale_embedding method in LEditsPPPipelineStableDiffusionXL class
      5d848ec0
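For context, a minimal sketch of the `output_type` change above, assuming a stock diffusers pipeline (the checkpoint name is illustrative):

```python
# The deprecated string "numpy" was replaced by "np" in the test files.
from diffusers import StableDiffusionPanoramaPipeline

pipe = StableDiffusionPanoramaPipeline.from_pretrained("stabilityai/stable-diffusion-2-base")
# Before (deprecated): pipe(..., output_type="numpy")
images = pipe("a photo of the dolomites", output_type="np").images  # NumPy array, shape (1, H, W, 3)
```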
  2. 13 Mar, 2024 2 commits
    • [LoRA] use the PyTorch classes wherever needed and start deprecation cycles (#7204) · 531e7191
      Sayak Paul authored
      * fix PyTorch classes and start deprecation cycles.
      
      * remove args crafting for accommodating scale.
      
      * remove scale check in feedforward.
      
      * assert against nn.Linear and not CompatibleLinear.
      
      * remove conv_cls and linear_cls.
      
      * remove scale
      
      * 👋 scale.
      
      * fix: unet2dcondition
      
      * fix attention.py
      
      * fix: attention.py again
      
      * fix: unet_2d_blocks.
      
      * fix-copies.
      
      * more fixes.
      
      * fix: resnet.py
      
      * more fixes
      
      * fix i2vgenxl unet.
      
      * deprecate scale gently.
      
      * fix-copies
      
      * Apply suggestions from code review
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * quality
      
      * throw warning when scale is passed to the BasicTransformerBlock class.
      
      * remove scale from signature.
      
      * cross_attention_kwargs, very nice catch by Yiyi
      
      * fix: logger.warn
      
      * make deprecation message clearer.
      
      * address final comments.
      
      * maintain same deprecation message and also add it to activations.
      
      * address yiyi
      
      * fix copies
      
      * Apply suggestions from code review
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * more deprecation
      
      * fix-copies
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      531e7191
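A hedged sketch of the user-facing pattern this refactor preserves: the LoRA scale travels through `cross_attention_kwargs` rather than a `scale` argument threaded through every block's `forward()`. The LoRA repo name is hypothetical:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_lora_weights("some-user/some-lora")  # hypothetical LoRA repo
# The scale reaches the attention processors via cross_attention_kwargs:
image = pipe("a pixel-art castle", cross_attention_kwargs={"scale": 0.5}).images[0]
```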
    • [Chore] switch to `logger.warning` (#7289) · 4fbd310f
      Sayak Paul authored
      switch to logger.warning
      4fbd310f
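A minimal sketch of the change: `Logger.warn` is a deprecated alias in Python's standard logging module, so call sites were switched to `Logger.warning`:

```python
from diffusers.utils import logging

logger = logging.get_logger(__name__)  # diffusers' logging helper
logger.warning("`scale` is being ignored here.")  # preferred
# logger.warn(...)  # deprecated alias of warning(); avoid
```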
  3. 22 Feb, 2024 1 commit
  4. 08 Feb, 2024 1 commit
  5. 29 Jan, 2024 1 commit
  6. 23 Jan, 2024 1 commit
    • [Big refactor] move unets to `unets` module 🦋 (#6630) · 1f0705ad
      Sayak Paul authored
      * move unets to `unets` module 🦋
      
      * parameterize unet-level import.
      
      * fix flax unet2dcondition model import
      
      * models __init__
      
      * mildly deprecating models.unet_2d_blocks in favor of models.unets.unet_2d_blocks.
      
      * noqa
      
      * correct deprecation behaviour
      
      * inherit from the actual classes.
      
      * Empty-Commit
      
      * backwards compatibility for unet_2d.py
      
      * backward compatibility for unet_2d_condition
      
      * bc for unet_1d
      
      * bc for unet_1d_blocks
      1f0705ad
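A sketch of the resulting backwards-compatibility surface, assuming a diffusers version from around this release: the old import path keeps working (with a mild deprecation) while the `unets` path is canonical:

```python
# Old (deprecated, still importable for backwards compatibility):
from diffusers.models.unet_2d_condition import UNet2DConditionModel  # noqa: F401
# New canonical location after the move:
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel  # noqa: F811
```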
  7. 19 Jan, 2024 1 commit
    • refactor: extract init/forward function in UNet2DConditionModel (#6478) · c5441965
      elucida authored
      * - extract function for stage in UNet2DConditionModel init & forward
      - Add new function get_mid_block() to unet_2d_blocks.py
      
      * add type hint to get_mid_block aligned with get_up_block and get_down_block; rename _set_xxx function
      
      * add type hints and use keyword arguments
      
      * remove `copy from` in versatile diffusion
      c5441965
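A hedged sketch of the factory pattern this refactor introduces: a `get_mid_block()` helper mirroring `get_down_block`/`get_up_block`, dispatching on the configured block type. The signature below is illustrative, not the exact one merged (import path as of this commit, before the `unets` move):

```python
from diffusers.models.unet_2d_blocks import UNetMidBlock2D, UNetMidBlock2DCrossAttn


def get_mid_block(mid_block_type: str, **kwargs):
    # Dispatch on the configured type, as get_down_block/get_up_block already do.
    if mid_block_type == "UNetMidBlock2DCrossAttn":
        return UNetMidBlock2DCrossAttn(**kwargs)
    if mid_block_type == "UNetMidBlock2D":
        return UNetMidBlock2D(**kwargs)
    raise ValueError(f"unknown mid_block_type: {mid_block_type}")
```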
  8. 10 Jan, 2024 1 commit
  9. 25 Oct, 2023 1 commit
    • Improve typehints and docs in `diffusers/models` (#5391) · 0c9f174d
      Aryan V S authored
      * improvement: add typehints and docs to src/diffusers/models/attention_processor.py
      
      * improvement: add typehints and docs to src/diffusers/models/vae.py
      
      * improvement: add missing docs in src/diffusers/models/vq_model.py
      
      * improvement: add typehints and docs to src/diffusers/models/transformer_temporal.py
      
      * improvement: add typehints and docs to src/diffusers/models/t5_film_transformer.py
      
      * improvement: add type hints to src/diffusers/models/unet_1d_blocks.py
      
      * improvement: add missing type hints to src/diffusers/models/unet_2d_blocks.py
      
      * fix: CI error (make fix-copies required)
      
      * fix: CI error (make fix-copies required again)
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      0c9f174d
  10. 24 Oct, 2023 1 commit
  11. 20 Oct, 2023 1 commit
    • Added support to create asymmetrical U-Net structures (#5400) · 8dba1808
      Vishnu V Jaddipal authored
      * Added args, kwargs to `UNet2DConditionModel`
      
      * Add UNetMidBlock2D as a supported mid block type
      
      * Fix extra init input for UNetMidBlock2D, change allowed types for Mid-block init
      
      * Update unet_2d_condition.py (×8)
      
      * Update unet_2d_blocks.py (×3)
      
      * Update unet_2d_condition.py
      
      * Update unet_2d_blocks.py
      
      * Updated docstring, increased check strictness
      
      Updated the docstring for `UNet2DConditionModel` to include `reverse_transformer_layers_per_block` and updated the check for a nested-list `transformer_layers_per_block`
      
      * Add basic shape-check test for asymmetrical unets
      
      * Update src/diffusers/models/unet_2d_blocks.py
      
      Removed blank line
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update unet_2d_condition.py
      
      Remove blank space
      
      * Update unet_2d_condition.py
      
      Changed docstring for `mid_block_type`
      
      * Fixed docstring and wrong default value
      
      * Reformat with black
      
      * Reformat with necessary commands
      
      * Add UNetMidBlockFlat to versatile_diffusion/modeling_text_unet.py to ensure consistency
      
      * Removed args, kwargs, use on mid-block type
      
      * Make fix-copies
      
      * Update src/diffusers/models/unet_2d_condition.py
      
      Wrap into single line
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * make fix-copies
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      8dba1808
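A hedged sketch of the asymmetry this PR enables, with illustrative (not recommended) values; exact validation rules are per the PR. The up path no longer has to mirror the down path, and `UNetMidBlock2D` is now an allowed mid-block type:

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel(
    block_out_channels=(32, 64),
    down_block_types=("CrossAttnDownBlock2D", "DownBlock2D"),
    up_block_types=("UpBlock2D", "CrossAttnUpBlock2D"),
    mid_block_type="UNetMidBlock2D",               # newly allowed, attention-free mid block
    transformer_layers_per_block=(2, 1),           # per down block
    reverse_transformer_layers_per_block=(1, 2),   # up blocks decoupled from the down path
    cross_attention_dim=32,
)
```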
  12. 18 Oct, 2023 2 commits
    • make style · e5168588
      Patrick von Platen authored
      e5168588
    • Beautiful docstring added to the UNetMidBlock2D class. (#5389) · 36a0bacc
      Chi authored
      * I added a new docstring to the class. This makes it easier for other developers to understand what it does and where it is used.
      
      * Update src/diffusers/models/unet_2d_blocks.py
      
      These changes were suggested by the maintainer.
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update src/diffusers/models/unet_2d_blocks.py
      
      Add suggested text
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update unet_2d_blocks.py
      
      I changed the 'Parameter' heading to 'Args'.
      
      * Update unet_2d_blocks.py
      
      Set proper indentation in this file.
      
      * Update unet_2d_blocks.py
      
      A small change to the act_fun argument line.
      
      * I ran the black command to reformat the code
      
      * Update unet_2d_blocks.py
      
      Added a docstring similar to the one in the original diffusion repository.
      
      * Update unet_2d_blocks.py
      
      Added a beautiful docstring to the UNetMidBlock2D class.
      
      * Update unet_2d_blocks.py
      
      I replaced the definitions of the resnet_time_scale_shift and resnet_groups parameters.
      
      * Update unet_2d_blocks.py
      
      I removed extra sentences from the resnet_groups argument.
      
      * Update unet_2d_blocks.py
      
      I replaced my definition of the attention_head_dim parameter with the maintainer's.
      
      * I used the black package to reformat my file
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      36a0bacc
  13. 11 Oct, 2023 2 commits
    • make style · 5313aa62
      Patrick von Platen authored
      5313aa62
    • Added docstring to the class. (#5293) · ea8364e5
      Chi authored
      * I added a new docstring to the class. This makes it easier for other developers to understand what it does and where it is used.
      
      * Update src/diffusers/models/unet_2d_blocks.py
      
      These changes were suggested by the maintainer.
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update src/diffusers/models/unet_2d_blocks.py
      
      Add suggested text
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update unet_2d_blocks.py
      
      I changed the 'Parameter' heading to 'Args'.
      
      * Update unet_2d_blocks.py
      
      Set proper indentation in this file.
      
      * Update unet_2d_blocks.py
      
      A small change to the act_fun argument line.
      
      * I ran the black command to reformat the code
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      ea8364e5
  14. 05 Oct, 2023 1 commit
    • [Core] Add FreeU mechanism (#5164) · 84b82a6c
      Kadir Nar authored
      * Added Fourier filter function to upsample blocks
      
      * 🔧 Update Fourier_filter for float16 support
      
      * Added UNetFreeUConfig to UNet model for FreeU adaptation 🛠️
      
      * move unet to its original form and add fourier_filter to torch_utils.
      
      * implement freeU enable mechanism
      
      * implement disable mechanism
      
      * resolution index.
      
      * correct resolution idx condition.
      
      * fix copies.
      
      * no need to use resolution_idx in vae.
      
      * spell out the kwargs
      
      * proper config property
      
      * fix attribution setting
      
      * place unet hasattr properly.
      
      * fix: attribute access.
      
      * proper disable
      
      * remove validation method.
      
      * debug
      
      * debug
      
      * debug
      
      * debug
      
      * debug
      
      * debug
      
      * potential fix.
      
      * add: doc.
      
      * fix copies
      
      * add: tests.
      
      * add: support freeU in SDXL.
      
      * set default value of resolution idx.
      
      * set default values for resolution_idx.
      
      * fix copies
      
      * fix rest.
      
      * fix copies
      
      * address PR comments.
      
      * run fix-copies
      
      * move apply_free_u to utils and other minors.
      
      * introduce support for video (unet3D)
      
      * minor ups
      
      * consistent fix-copies.
      
      * consistent stuff
      
      * fix-copies
      
      * add: rest
      
      * add: docs.
      
      * fix: tests
      
      * fix: doc path
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * style up
      
      * move to techniques.
      
      * add: slow test for sd freeu. (×6)
      
      * add: slow test for video with freeu (×3)
      
      * style
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      84b82a6c
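The mechanism lands as a pair of pipeline-level toggles. The values below are the settings commonly suggested for SD1.x-style models; treat them as a starting point:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)  # skip (s) and backbone (b) scaling factors
image = pipe("an astronaut riding a horse").images[0]
pipe.disable_freeu()  # restores the vanilla UNet behaviour
```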
  15. 15 Sep, 2023 1 commit
    • Fix Consistency Models UNet2DMidBlock2D Attention GroupNorm Bug (#4863) · 4c8a05f1
      dg845 authored
      * Add attn_groups argument to UNet2DMidBlock2D to control the internal Attention block's GroupNorm.
      
      * Add docstring for attn_norm_num_groups in UNet2DModel.
      
      * Since the test UNet config uses resnet_time_scale_shift == 'scale_shift', also set attn_norm_num_groups to 32.
      
      * Add test for attn_norm_num_groups to UNet2DModelTests.
      
      * Fix expected slices for slow tests.
      
      * Also fix tolerances for slow tests.
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      4c8a05f1
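A hedged sketch of the new knob: `attn_norm_num_groups` controls the GroupNorm inside the mid block's Attention, which converted consistency-model checkpoints need set (e.g. to 32) rather than left at the old hard-coded behaviour. Sizes below are test-scale and illustrative:

```python
from diffusers import UNet2DModel

unet = UNet2DModel(
    sample_size=32,
    block_out_channels=(32, 64),
    down_block_types=("ResnetDownsampleBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "ResnetUpsampleBlock2D"),
    resnet_time_scale_shift="scale_shift",
    attn_norm_num_groups=32,  # GroupNorm groups for the mid block's attention
)
```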
  16. 04 Sep, 2023 2 commits
    • Add dropout parameter to UNet2DModel/UNet2DConditionModel (#4882) · 55e17907
      dg845 authored
      * Add dropout param to get_down_block/get_up_block and UNet2DModel/UNet2DConditionModel.
      
      * Add dropout param to Versatile Diffusion modeling, which has a copy of UNet2DConditionModel and its own get_down_block/get_up_block functions.
      55e17907
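A minimal sketch of the new parameter (all other arguments left at their defaults): the value is forwarded through `get_down_block`/`get_up_block` into the resnet blocks.

```python
from diffusers import UNet2DConditionModel

# Previously dropout was effectively fixed at 0.0 inside the blocks.
unet = UNet2DConditionModel(dropout=0.1)
```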
    • [Core] LoRA improvements pt. 3 (#4842) · c81a88b2
      Sayak Paul authored
      * throw warning when more than one lora is attempted to be fused.
      
      * introduce support of lora scale during fusion.
      
      * change test name
      
      * changes
      
      * change to _lora_scale
      
      * lora_scale to call whenever applicable.
      
      * debugging
      
      * lora_scale additional.
      
      * cross_attention_kwargs
      
      * lora_scale -> scale.
      
      * lora_scale fix
      
      * lora_scale in patched projection.
      
      * debugging (×13)
      
      * styling.
      
      * debugging (×12)
      
      * remove unneeded prints.
      
      * remove unneeded prints.
      
      * assign cross_attention_kwargs.
      
      * debugging (×19)
      
      * clean up.
      
      * refactor scale retrieval logic a bit.
      
      * fix NoneType
      
      * fix: tests
      
      * add more tests
      
      * more fixes.
      
      * figure out a way to pass lora_scale.
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * unify the retrieval logic of lora_scale.
      
      * move adjust_lora_scale_text_encoder to lora.py.
      
      * introduce dynamic adjustment lora scale support to sd
      
      * fix up copies
      
      * Empty-Commit
      
      * add: test to check fusion equivalence on different scales.
      
      * handle lora fusion warning.
      
      * make lora smaller
      
      * make lora smaller
      
      * make lora smaller
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      c81a88b2
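A sketch of the fusion-with-scale flow this PR rounds out; the LoRA repo name is hypothetical:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_lora_weights("some-user/some-lora")  # hypothetical LoRA repo
pipe.fuse_lora(lora_scale=0.7)  # bake the LoRA into the base weights at scale 0.7
image = pipe("a watercolor fox").images[0]
pipe.unfuse_lora()  # undo the fusion and recover the base weights
```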
  17. 16 Aug, 2023 1 commit
    • Add GLIGEN implementation (#4441) · da5ab51d
      nikhil-masterful authored
      * Add GLIGEN implementation
      
      * GLIGEN: Fix code quality check failures
      
      * GLIGEN: Fix Import block un-sorted or un-formatted failures
      
      * GLIGEN: Fix check_repository_consistency failures
      
      * GLIGEN: Add 'PositionNet' to versatile_diffusion/modeling_text_unet.py
      
      * GLIGEN: check_repository_consistency: fix 'copy does not match' error
      
      * GLIGEN: Fix review comments (1)
      
      * GLIGEN: Fix E721 Do not compare types, use `isinstance()` failures
      
      * GLIGEN : Ensure _encode_prompt() copy matches to StableDiffusionPipeline
      
      * GLIGEN: Fix ruff E721 failure in unidiffuser/test_unidiffuser.py
      
      * GLIGEN: doc_builder: restyle pipeline_stable_diffusion_gligen.py
      
      * GLIGEN: reset files unrelated to gligen
      
      * GLIGEN: Fix documentation comments (1)
      
      * GLIGEN: Fix review comments (2)
      
      * GLIGEN: Added FastTest
      
      * GLIGEN: Fix review comments (3)
      da5ab51d
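A hedged usage sketch for the new pipeline: phrases are grounded to normalized xyxy boxes. The checkpoint name follows the GLIGEN release on the Hub, but treat it as an assumption:

```python
from diffusers import StableDiffusionGLIGENPipeline

pipe = StableDiffusionGLIGENPipeline.from_pretrained("masterful/gligen-1-4-generation-text-box")
image = pipe(
    prompt="a birthday cake on a wooden table",
    gligen_phrases=["a birthday cake"],
    gligen_boxes=[[0.25, 0.45, 0.75, 0.9]],  # (x0, y0, x1, y1), normalized to [0, 1]
    gligen_scheduled_sampling_beta=0.3,      # fraction of steps that apply grounding
).images[0]
```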
  18. 07 Aug, 2023 1 commit
  19. 04 Aug, 2023 1 commit
  20. 02 Aug, 2023 1 commit
    • [Feat] add tiny Autoencoder for (almost) instant decoding (#4384) · 18fc40c1
      Sayak Paul authored
      * add: model implementation of tiny autoencoder.
      
      * add: inits.
      
      * push the latest devs.
      
      * add: conversion script and finish.
      
      * add: scaling factor args.
      
      * debugging
      
      * fix denormalization.
      
      * fix: positional argument.
      
      * handle use_torch_2_0_or_xformers.
      
      * handle post_quant_conv
      
      * handle dtype
      
      * fix: sdxl image processor for tiny ae.
      
      * fix: sdxl image processor for tiny ae.
      
      * unify upcasting logic.
      
      * copied from madness.
      
      * remove trailing whitespace.
      
      * set is_tiny_vae = False
      
      * address PR comments.
      
      * change to AutoencoderTiny
      
      * make act_fn an str throughout
      
      * fix: apply_forward_hook decorator call
      
      * get rid of the special is_tiny_vae flag.
      
      * directly scale the output.
      
      * fix dummies?
      
      * fix: act_fn.
      
      * get rid of the Clamp() layer.
      
      * bring back copied from.
      
      * movement of the blocks to appropriate modules.
      
      * add: docstrings to AutoencoderTiny
      
      * add: documentation.
      
      * changes to the conversion script.
      
      * add doc entry.
      
      * settle tests.
      
      * style
      
      * add one slow test.
      
      * fix
      
      * fix 2
      
      * fix 2
      
      * fix: 4
      
      * fix: 5
      
      * finish integration tests
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * style
      
      ---------
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      18fc40c1
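A sketch of swapping the tiny autoencoder into an existing pipeline; `madebyollin/taesd` is the distilled VAE this feature was built around (repo name assumed):

```python
import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("slice of New York-style cheesecake").images[0]  # decoding is near-instant
```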
  21. 17 Jul, 2023 1 commit
    • t2i pipeline (#3932) · a0597f33
      Will Berman authored
      * Quick implementation of t2i-adapter
      
      Load adapter module with from_pretrained
      
      Prototyping generalized adapter framework
      
      Writeup doc string for sideload framework(WIP) + some minor update on implementation
      
      Update adapter models
      
      Remove old adapter optional args in UNet
      
      Add StableDiffusionAdapterPipeline unit test
      
      Handle cpu offload in StableDiffusionAdapterPipeline
      
      Auto correct coding style
      
      Update model repo name to "RzZ/sd-v1-4-adapter-pipeline"
      
      Refactor MultiAdapter to better compatible with config system
      
      Export MultiAdapter
      
      Create pipeline document template from controlnet
      
      Create dummy objects
      
      Supporting new AdapterLight model
      
      Fix StableDiffusionAdapterPipeline common pipeline test
      
      [WIP] Update adapter pipeline document
      
      Handle num_inference_steps in StableDiffusionAdapterPipeline
      
      Update definition of Adapter "channels_in"
      
      Update documents
      
      Apply code style
      
      Fix doc typo and merge error
      
      Update doc string and example
      
      Quality of life improvement
      
      Remove redundant code and file from prototyping
      
      Remove unused package
      
      Remove comments
      
      Fix title
      
      Fix typo
      
      Add conditioning scale arg
      
      Bring back old implementation
      
      Offload sideload
      
      Add supply info on document
      
      Update src/diffusers/models/adapter.py
      Co-authored-by: Will Berman <wlbberman@gmail.com>
      
      Update MultiAdapter constructor
      
      Swap out custom checkpoint and update pipeline constructor
      
      Update document
      
      Apply suggestions from code review
      Co-authored-by: Will Berman <wlbberman@gmail.com>
      
      Correcting style
      
      Following single-file policy
      
      Update auto size in image preprocess func
      
      Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_adapter.py
      Co-authored-by: Will Berman <wlbberman@gmail.com>
      
      fix copies
      
      Update adapter pipeline behavior
      
      Add adapter_conditioning_scale doc string
      
      Add the missing doc string
      
      Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      Fix few bugs from suggestion
      
      Handle L-mode PIL image as control image
      
      Rename to differentiate adapter resblock
      
      Update src/diffusers/models/adapter.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      Fix typo
      
      Update adapter parameter name
      
      Update test case and code style
      
      Fix copies
      
      Fix typo
      
      Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_adapter.py
      Co-authored-by: Will Berman <wlbberman@gmail.com>
      
      Update Adapter class name
      
      Add checkpoint converting script
      
      Fix style
      
      Fix-copies
      
      Remove dev script
      
      Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      Updates for parameter rename
      
      Fix convert_adapter
      
      remove main
      
      fix diff
      
      more
      
      refactoring
      
      more
      
      more
      
      small fixes
      
      refactor
      
      tests
      
      more slow tests
      
      more tests
      
      Update docs/source/en/api/pipelines/overview.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      add community contributor to docs
      
      Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      fix
      
      remove from_adapters
      
      license
      
      paper link
      
      docs
      
      more url fixes
      
      more docs
      
      fix
      
      fixes
      
      fix
      
      fix
      
      * fix sample inplace add
      
      * additional_kwargs -> additional_residuals
      
      * move t2i adapter pipeline to own module
      
      * preprocess -> _preprocess_adapter_image
      
      * add TencentArc to license
      
      * fix example code links
      
      * add image converter and fix example doc string
      
      * fix links
      
      * clearer additional residual application
      
      ---------
      Co-authored-by: HimariO <dsfhe49854@gmail.com>
      a0597f33
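A hedged sketch of the resulting pipeline: a frozen adapter encodes a control image into residuals that are added to the UNet's down-block outputs. Repo names follow the TencentARC release and are assumptions; the image URL is a placeholder:

```python
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_canny_sd15v2")
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", adapter=adapter
)
canny = load_image("https://example.com/canny_edges.png")  # placeholder URL
image = pipe("a red sports car", image=canny, adapter_conditioning_scale=0.8).images[0]
```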
  22. 06 Jul, 2023 1 commit
    • [SD-XL] Add new pipelines (#3859) · bc9a8cef
      Patrick von Platen authored
      * Add new text encoder
      
      * add transformers depth
      
      * More
      
      * Correct conversion script
      
      * Fix more
      
      * Fix more
      
      * Correct more
      
      * correct text encoder
      
      * Finish all
      
      * proof that it works in run local xl
      
      * clean up
      
      * Get refiner to work
      
      * Add red castle
      
      * Fix batch size
      
      * Improve pipelines more
      
      * Finish text2image tests
      
      * Add img2img test
      
      * Fix more
      
      * fix import
      
      * Fix embeddings for classic models (#3888)
      
      Fix embeddings for classic SD models.
      
      * Allow multiple prompts to be passed to the refiner (#3895)
      
      * finish more
      
      * Apply suggestions from code review
      
      * add watermarker
      
      * Model offload (#3889)
      
      * Model offload.
      
      * Model offload for refiner / img2img
      
      * Hardcode encoder offload on img2img vae encode
      
      Saves some GPU RAM in img2img / refiner tasks so it remains below 8 GB.
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * correct
      
      * fix
      
      * clean print
      
      * Update install warning for `invisible-watermark`
      
      * add: missing docstrings.
      
      * fix and simplify the usage example in img2img.
      
      * fix setup for watermarking.
      
      * Revert "fix setup for watermarking."
      
      This reverts commit 491bc9f5a640bbf46a97a8e52d6eff7e70eb8e4b.
      
      * fix: watermarking setup.
      
      * fix: op.
      
      * run make fix-copies.
      
      * make sure tests pass
      
      * improve convert
      
      * make tests pass
      
      * make tests pass
      
      * better error message
      
      * finish
      
      * finish
      
      * Fix final test
      
      ---------
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      bc9a8cef
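A sketch of the base + refiner hand-off these pipelines introduce; passing the base's latents straight into the refiner is the intended flow:

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline, StableDiffusionXLPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "a red castle on a hill, golden hour"
latents = base(prompt, output_type="latent").images  # stay in latent space
image = refiner(prompt, image=latents).images[0]
```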
  23. 05 Jul, 2023 1 commit
    • Add Consistency Models Pipeline (#3492) · aed7499a
      dg845 authored
      * initial commit
      
      * Improve consistency models sampling implementation.
      
      * Add CMStochasticIterativeScheduler, which implements the multi-step sampler (stochastic_iterative_sampler) in the original code, and make further improvements to sampling.
      
      * Add Unet blocks for consistency models
      
      * Add conversion script for Unet
      
      * Fix bug in new unet blocks
      
      * Fix attention weight loading
      
      * Make design improvements to ConsistencyModelPipeline and CMStochasticIterativeScheduler and add initial version of tests.
      
      * make style
      
      * Make small random test UNet class conditional and set resnet_time_scale_shift to 'scale_shift' to better match consistency model checkpoints.
      
      * Add support for converting a test UNet and non-class-conditional UNets to the consistency models conversion script.
      
      * make style
      
      * Change num_class_embeds to 1000 to better match the original consistency models implementation.
      
      * Add support for distillation in pipeline_consistency_models.py.
      
      * Improve consistency model tests:
      	- Get small testing checkpoints from hub
      	- Modify tests to take into account "distillation" parameter of ConsistencyModelPipeline
      	- Add onestep, multistep tests for distillation and distillation + class conditional
      	- Add expected image slices for onestep tests
      
      * make style
      
      * Improve ConsistencyModelPipeline:
      	- Add initial support for class-conditional generation
      	- Fix initial sigma for onestep generation
      	- Fix some sigma shape issues
      
      * make style
      
      * Improve ConsistencyModelPipeline:
      	- add latents __call__ argument and prepare_latents method
      	- add check_inputs method
      	- add initial docstrings for ConsistencyModelPipeline.__call__
      
      * make style
      
      * Fix bug when randomly generating class labels for class-conditional generation.
      
      * Switch CMStochasticIterativeScheduler to configuring a sigma schedule and make related changes to the pipeline and tests.
      
      * Remove some unused code and make style.
      
      * Fix small bug in CMStochasticIterativeScheduler.
      
      * Add expected slices for multistep sampling tests and make them pass.
      
      * Work on consistency model fast tests:
      	- in pipeline, call self.scheduler.scale_model_input before denoising
      	- get expected slices for Euler and Heun scheduler tests
      	- make Euler test pass
      	- mark Heun test as expected fail because it doesn't support prediction_type "sample" yet
      	- remove DPM and Euler Ancestral tests because they don't support use_karras_sigmas
      
      * make style
      
      * Refactor conversion script to make it easier to add more model architectures to convert in the future.
      
      * Work on ConsistencyModelPipeline tests:
      	- Fix device bug when handling class labels in ConsistencyModelPipeline.__call__
      	- Add slow tests for onestep and multistep sampling and make them pass
      	- Refactor fast tests
      	- Refactor ConsistencyModelPipeline.__init__
      
      * make style
      
      * Remove the add_noise and add_noise_to_input methods from CMStochasticIterativeScheduler for now.
      
      * Run python utils/check_copies.py --fix_and_overwrite
      python utils/check_dummies.py --fix_and_overwrite to make dummy objects for new pipeline and scheduler.
      
      * Make fast tests from PipelineTesterMixin pass.
      
      * make style
      
      * Refactor consistency models pipeline and scheduler:
      	- Remove support for Karras schedulers (only support CMStochasticIterativeScheduler)
      	- Move sigma manipulation, input scaling, denoising from pipeline to scheduler
      	- Make corresponding changes to tests and ensure they pass
      
      * make style
      
      * Add docstrings and further refactor pipeline and scheduler.
      
      * make style
      
      * Add initial version of the consistency models documentation.
      
      * Refactor custom timesteps logic following DDPMScheduler/IFPipeline and temporarily add torch 2.0 SDPA kernel selection logic for debugging.
      
      * make style
      
      * Convert current slow tests to use fp16 and flash attention.
      
      * make style
      
      * Add slow tests for normal attention on cuda device.
      
      * make style
      
      * Fix attention weights loading
      
      * Update consistency model fast tests for new test checkpoints with attention fix.
      
      * make style
      
      * apply suggestions
      
      * Add add_noise method to CMStochasticIterativeScheduler (copied from EulerDiscreteScheduler).
      
      * Conversion script now outputs pipeline instead of UNet and add support for LSUN-256 models and different schedulers.
      
      * When both timesteps and num_inference_steps are supplied, raise warning instead of error (timesteps take precedence).
      
      * make style
      
      * Add remaining diffusers model checkpoints for models in the original consistency model release and update usage example.
      
      * apply suggestions from review
      
      * make style
      
      * fix attention naming
      
      * Add tests for CMStochasticIterativeScheduler.
      
      * make style
      
      * Make CMStochasticIterativeScheduler tests pass.
      
      * make style
      
      * Override test_step_shape in CMStochasticIterativeSchedulerTest instead of modifying it in SchedulerCommonTest.
      
      * make style
      
      * rename some models
      
      * Improve API
      
      * rename some models
      
      * Remove duplicated block
      
      * Add docstring and make torch compile work
      
      * More fixes
      
      * Fixes
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * add more docstring
      
      * update consistency conversion script
      
      ---------
      Co-authored-by: ayushmangal <ayushmangal@microsoft.com>
      Co-authored-by: Ayush Mangal <43698245+ayushtues@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      aed7499a
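A hedged sketch of one-step versus multi-step sampling with the new pipeline and scheduler; the checkpoint name follows the converted OpenAI release and is an assumption:

```python
import torch
from diffusers import ConsistencyModelPipeline

pipe = ConsistencyModelPipeline.from_pretrained(
    "openai/diffusers-cd_imagenet64_l2", torch_dtype=torch.float16
).to("cuda")

image = pipe(num_inference_steps=1).images[0]  # distilled one-step generation
image = pipe(num_inference_steps=None, timesteps=[22, 0]).images[0]  # custom multi-step schedule
```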
  24. 22 Jun, 2023 1 commit
    • Correct bad attn naming (#3797) · 88d26946
      Patrick von Platen authored
      * relax tolerance slightly
      
      * correct incorrect naming
      
      * correct naming
      
      * correct more
      
      * Apply suggestions from code review
      
      * Fix more
      
      * Correct more
      
      * correct incorrect naming
      
      * Update src/diffusers/models/controlnet.py
      
      * Correct flax
      
      * Correct renaming
      
      * Correct blocks
      
      * Fix more
      
      * Correct more
      
      * make style (×5)
      
      * Fix flax
      
      * make style
      
      * rename
      
      * rename
      
      * rename attn head dim to attention_head_dim
      
      * correct flax
      
      * make style
      
      * improve
      
      * Correct more
      
      * make style
      
      * fix more
      
      * make style
      
      * Update src/diffusers/models/controlnet_flax.py
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      ---------
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      88d26946
  25. 26 May, 2023 1 commit
  26. 25 May, 2023 1 commit
  27. 22 May, 2023 1 commit
    • Support for cross-attention bias / mask (#2634) · 64bf5d33
      Birch-san authored
      * Cross-attention masks
      
      prefer qualified symbol, fix accidental Optional
      
      prefer qualified symbol in AttentionProcessor
      
      prefer qualified symbol in embeddings.py
      
      qualified symbol in transformer_2d
      
      qualify FloatTensor in unet_2d_blocks
      
      move new transformer_2d params attention_mask, encoder_attention_mask to the end of the section which is assumed (e.g. by functions such as checkpoint()) to have a stable positional param interface. regard return_dict as a special-case which is assumed to be injected separately from positional params (e.g. by create_custom_forward()).
      
      move new encoder_attention_mask param to end of CrossAttn block interfaces and Unet2DCondition interface, to maintain positional param interface.
      
      regenerate modeling_text_unet.py
      
      remove unused import
      
      unet_2d_condition encoder_attention_mask docs
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      versatile_diffusion/modeling_text_unet.py encoder_attention_mask docs
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      transformer_2d encoder_attention_mask docs
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      unet_2d_blocks.py: add parameter name comments
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      revert description. bool-to-bias treatment happens in unet_2d_condition only.
      
      comment parameter names
      
      fix copies, style
      
      * encoder_attention_mask for SimpleCrossAttnDownBlock2D, SimpleCrossAttnUpBlock2D
      
      * encoder_attention_mask for UNetMidBlock2DSimpleCrossAttn
      
      * support attention_mask, encoder_attention_mask in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D, KAttentionBlock. fix binding of attention_mask, cross_attention_kwargs params in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D checkpoint invocations.
      
      * fix mistake made during merge conflict resolution
      
      * regenerate versatile_diffusion
      
      * pass time embedding into checkpointed attention invocation
      
      * always assume encoder_attention_mask is a mask (i.e. not a bias).
      
      * style, fix-copies
      
      * add tests for cross-attention masks
      
      * add test for padding of attention mask
      
      * explain mask's query_tokens dim. fix explanation about broadcasting over channels; we actually broadcast over query tokens
      
      * support both masks and biases in Transformer2DModel#forward. document behaviour
      
      * fix-copies
      
      * delete attention_mask docs on the basis I never tested self-attention masking myself. not comfortable explaining it, since I don't actually understand how a self-attn mask can work in its current form: the key length will be different in every ResBlock (we don't downsample the mask when we downsample the image).
      
      * review feedback: the standard Unet blocks shouldn't pass temb to attn (only to resnet). remove from KCrossAttnDownBlock2D,KCrossAttnUpBlock2D#forward.
      
      * remove encoder_attention_mask param from SimpleCrossAttn{Up,Down}Block2D,UNetMidBlock2DSimpleCrossAttn, and mask-choice in those blocks' #forward, on the basis that they only do one type of attention, so the consumer can pass whichever type of attention_mask is appropriate.
      
      * put attention mask padding back to how it was (since the SD use-case it enabled wasn't important, and it breaks the original unclip use-case). disable the test which was added.
      
      * fix-copies
      
      * style
      
      * fix-copies
      
      * put encoder_attention_mask param back into Simple block forward interfaces, to ensure consistency of forward interface.
      
      * restore passing of emb to KAttentionBlock#forward, on the basis that removal caused test failures. restore also the passing of emb to checkpointed calls to KAttentionBlock#forward.
      
      * make simple unet2d blocks use encoder_attention_mask, but only when attention_mask is None. this should fix UnCLIP compatibility.
      
      * fix copies
      64bf5d33
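A minimal sketch of the mask-to-bias treatment settled on above (the UNet always treats `encoder_attention_mask` as a keep-mask and converts it to an additive bias itself); the helper name and the exact negative fill value are illustrative:

```python
import torch

def mask_to_bias(encoder_attention_mask: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
    # True/1 = attend, False/0 = ignore  ->  0.0 vs. a large negative additive bias
    bias = (1 - encoder_attention_mask.to(dtype)) * torch.finfo(dtype).min
    return bias.unsqueeze(1)  # (batch, 1, key_len): broadcasts over query tokens

mask = torch.tensor([[1, 1, 0, 0]], dtype=torch.bool)  # keep the first two tokens
print(mask_to_bias(mask, torch.float32).shape)  # torch.Size([1, 1, 4])
```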
  28. 17 May, 2023 2 commits
  29. 12 May, 2023 1 commit
  30. 05 May, 2023 1 commit
  31. 01 May, 2023 1 commit
    • Torch compile graph fix (#3286) · 0e82fb19
      Patrick von Platen authored
      * fix more
      
      * Fix more
      
      * fix more
      
      * Apply suggestions from code review
      
      * fix
      
      * make style
      
      * make fix-copies
      
      * fix
      
      * make sure torch compile
      
      * Clean
      
      * fix test
      0e82fb19
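A sketch of the usage this fix unblocks: compiling the UNet without graph breaks.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
image = pipe("a photo of an astronaut").images[0]  # first call pays the compile cost
```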
  32. 11 Apr, 2023 3 commits
    • Attn added kv processor torch 2.0 block (#3023) · ea39cd7e
      Will Berman authored
      add AttnAddedKVProcessor2_0 block
      ea39cd7e
    • Attention processor cross attention norm group norm (#3021) · 98c5e5da
      Will Berman authored
      add group norm type to attention processor cross attention norm
      
      This lets the cross attention norm use both a group norm block and a
      layer norm block.
      
      The group norm operates along the channels dimension
      and requires input shape (batch size, channels, *) whereas the layer norm with a single
      `normalized_shape` dimension only operates over the least significant
      dimension i.e. (*, channels).
      
      The channels we want to normalize are the hidden dimension of the encoder hidden states.
      
      By convention, the encoder hidden states are always passed as (batch size, sequence
      length, hidden states).
      
      This means the layer norm can operate on the tensor without modification, but the group
      norm requires flipping the last two dimensions to operate on (batch size, hidden states, sequence length).
      
      All existing attention processors will have the same logic and we can
      consolidate it in a helper function `prepare_encoder_hidden_states`
      
      prepare_encoder_hidden_states -> norm_encoder_hidden_states re: @patrickvonplaten
      
      move norm_cross defined check to outside norm_encoder_hidden_states
      
      add missing attn.norm_cross check
      98c5e5da
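A self-contained sketch of the shape handling the message walks through: LayerNorm normalizes the trailing hidden dimension directly, while GroupNorm wants channels in dim 1, so the sequence and hidden dimensions are swapped around the call:

```python
import torch
from torch import nn

batch, seq_len, hidden = 2, 77, 320
encoder_hidden_states = torch.randn(batch, seq_len, hidden)  # (batch, seq, channels)

layer_norm = nn.LayerNorm(hidden)
out_ln = layer_norm(encoder_hidden_states)  # operates on the last dim as-is

group_norm = nn.GroupNorm(num_groups=32, num_channels=hidden)
out_gn = group_norm(encoder_hidden_states.transpose(1, 2))  # (batch, channels, seq)
out_gn = out_gn.transpose(1, 2)  # back to (batch, seq, channels)
```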
    • add only cross attention to simple attention blocks (#3011) · c6180a31
      Will Berman authored
      * add only cross attention to simple attention blocks
      
      * add test for only_cross_attention re: @patrickvonplaten
      
      * mid_block_only_cross_attention better default
      
      allow mid_block_only_cross_attention to default to
      `only_cross_attention` when `only_cross_attention` is given
      as a single boolean
      c6180a31
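A hedged sketch of the defaulting rule described above; the helper is hypothetical and just paraphrases the config logic:

```python
def resolve_mid_block_only_cross_attention(only_cross_attention, mid_block_only_cross_attention=None):
    # When only_cross_attention is a single bool, the mid block inherits it.
    if mid_block_only_cross_attention is None and isinstance(only_cross_attention, bool):
        mid_block_only_cross_attention = only_cross_attention
    return mid_block_only_cross_attention

assert resolve_mid_block_only_cross_attention(True) is True
assert resolve_mid_block_only_cross_attention((True, False)) is None
```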
  33. 10 Apr, 2023 1 commit