1. 15 Sep, 2023 1 commit
  2. 04 Sep, 2023 2 commits
    • Add dropout parameter to UNet2DModel/UNet2DConditionModel (#4882) · 55e17907
      dg845 authored
      * Add dropout param to get_down_block/get_up_block and UNet2DModel/UNet2DConditionModel.
      
      * Add dropout param to Versatile Diffusion modeling, which has a copy of UNet2DConditionModel and its own get_down_block/get_up_block functions.
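
      A minimal sketch of the new parameter (the 0.1 value is an arbitrary illustration; the library default remains 0.0):

```python
from diffusers import UNet2DConditionModel

# Default configuration, but with the newly exposed dropout probability,
# which is threaded through get_down_block/get_up_block into the blocks.
unet = UNet2DConditionModel(dropout=0.1)
```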
    • [Core] LoRA improvements pt. 3 (#4842) · c81a88b2
      Sayak Paul authored
      
      
      * throw a warning when fusing more than one LoRA is attempted.
      
      * introduce support of lora scale during fusion.
      
      * change test name
      
      * changes
      
      * change to _lora_scale
      
      * lora_scale to call whenever applicable.
      
      * debugging
      
      * lora_scale additional.
      
      * cross_attention_kwargs
      
      * lora_scale -> scale.
      
      * lora_scale fix
      
      * lora_scale in patched projection.
      
      * debugging
      
      * styling.
      
      * debugging
      
      * remove unneeded prints.
      
      * remove unneeded prints.
      
      * assign cross_attention_kwargs.
      
      * debugging
      
      * clean up.
      
      * refactor scale retrieval logic a bit.
      
      * fix NoneType
      
      * fix: tests
      
      * add more tests
      
      * more fixes.
      
      * figure out a way to pass lora_scale.
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * unify the retrieval logic of lora_scale.
      
      * move adjust_lora_scale_text_encoder to lora.py.
      
      * introduce dynamic adjustment lora scale support to sd
      
      * fix up copies
      
      * Empty-Commit
      
      * add: test to check fusion equivalence on different scales.
      
      * handle lora fusion warning.
      
      * make lora smaller
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
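
      A rough sketch of the scale handling this PR introduces (the LoRA path is a placeholder): the scale can either be passed per call through `cross_attention_kwargs` or baked into the weights at fusion time.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/lora")  # placeholder checkpoint

# Option A: apply the LoRA scale dynamically at inference time.
image = pipe("a pokemon", cross_attention_kwargs={"scale": 0.5}).images[0]

# Option B: bake the scale into the fused weights (the feature added here);
# attempting to fuse a second LoRA on top now emits a warning.
pipe.fuse_lora(lora_scale=0.5)
image = pipe("a pokemon").images[0]
```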
  3. 01 Sep, 2023 2 commits
    • Test Cleanup Precision issues (#4812) · 189e9f01
      Dhruv Nair authored
      
      
      * proposal for flaky tests
      
      * more precision fixes
      
      * move more tests to use cosine distance
      
      * more test fixes
      
      * clean up
      
      * use default attn
      
      * clean up
      
      * update expected value
      
      * make style
      
      * make style
      
      * Apply suggestions from code review
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py
      
      * make style
      
      * fix failing tests
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
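
      A self-contained sketch of the cosine-distance comparison these tests move to, in place of brittle elementwise `allclose` checks (the helper name mirrors the test utility; details may differ):

```python
import numpy as np

def numpy_cosine_similarity_distance(a: np.ndarray, b: np.ndarray) -> float:
    # 1 - cosine similarity: near 0.0 when the tensors point the same way,
    # which tolerates small hardware-dependent elementwise jitter.
    a, b = a.flatten(), b.flatten()
    similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - float(similarity)

output = np.array([1.000, 2.000, 3.000])
expected = np.array([1.001, 1.999, 3.002])  # simulated numerical noise
assert numpy_cosine_similarity_distance(output, expected) < 1e-4
```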
    • Add GLIGEN Text Image implementation (#4777) · 38466c36
      Nguyễn Công Tú Anh authored
      * Add GLIGEN Text Image implementation
      
      * add style transfer from image
      
      * fix check_repository_consistency
      
      * add convert script GLIGEN model to Diffusers
      
      * rename attention type
      
      * fix style code
      
      * remove PositionNetTextImage
      
      * Revert "fix check_repository_consistency"
      
      This reverts commit 15f098c96e00bb9e67b831161615b30a2d28d815.
      
      * change attention type name
      
      * update docs for GLIGEN
      
      * change examples with hf-document-image
      
      * fix style
      
      * add CLIPImageProjection for GLIGEN
      
      * Add new encode_prompt, load project matrix in pipe init
      
      * move CLIPImageProjection to stable_diffusion
      
      * add comment
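
      A hedged usage sketch (checkpoint name and image URL are illustrative placeholders): the pipeline grounds a reference image, not just a phrase, inside a bounding box.

```python
import torch
from diffusers import StableDiffusionGLIGENTextImagePipeline
from diffusers.utils import load_image

pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained(
    "anhnct/Gligen_Text_Image", torch_dtype=torch.float16
).to("cuda")

reference = load_image("https://example.com/flower.png")  # placeholder image
image = pipe(
    prompt="a flower sitting on the beach",
    gligen_phrases=["flower"],
    gligen_images=[reference],            # ground this reference image...
    gligen_boxes=[[0.3, 0.4, 0.7, 0.8]],  # ...inside this normalized (x0, y0, x1, y1) box
    gligen_scheduled_sampling_beta=1.0,
    num_inference_steps=50,
).images[0]
```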
  4. 29 Aug, 2023 1 commit
    • add models for T2I-Adapter-XL (#4696) · 12358b98
      Chong Mou authored
      
      
      * T2I-Adapter-XL
      
      * update
      
      * update
      
      * add pipeline
      
      * modify pipeline
      
      * modify modeling_text_unet
      
      * fix styling.
      
      * fix: copies.
      
      * adapter settings
      
      * new test case
      
      * new test case
      
      * debugging
      
      * revert prints.
      
      * new test case
      
      * remove print
      
      * org test case
      
      * add test_pipeline
      
      * styling.
      
      * fix copies.
      
      * modify test parameter
      
      * style.
      
      * add adapter-xl doc
      
      * double quotes in docs
      
      * Fix potential type mismatch
      
      * style.
      
      ---------
      Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
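
      A minimal sketch of the SDXL adapter models this PR adds, assuming a sketch-conditioned adapter (model IDs and image URL are illustrative; pick the adapter matching your control modality):

```python
import torch
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-sketch-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16
).to("cuda")

sketch = load_image("https://example.com/sketch.png")  # placeholder control image
image = pipe(
    "a photo of a house by a lake",
    image=sketch,
    adapter_conditioning_scale=0.9,  # the adapter setting tuned in this PR
).images[0]
```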
  5. 28 Aug, 2023 1 commit
    • [LoRA Attn Processors] Refactor LoRA Attn Processors (#4765) · 766aa50f
      Patrick von Platen authored
      * [LoRA Attn] Refactor LoRA attn
      
      * correct for network alphas
      
      * fix more
      
      * fix more tests
      
      * fix more tests
      
      * Move below
      
      * Finish
      
      * better version
      
      * correct serialization format
      
      * fix
      
      * fix more
      
      * fix more
      
      * fix more
      
      * Apply suggestions from code review
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py
      
      * deprecation
      
      * relax atol for slow test slightly
      
      * Finish tests
      
      * make style
      
      * make style
  6. 16 Aug, 2023 1 commit
    • Add GLIGEN implementation (#4441) · da5ab51d
      nikhil-masterful authored
      * Add GLIGEN implementation
      
      * GLIGEN: Fix code quality check failures
      
      * GLIGEN: Fix Import block un-sorted or un-formatted failures
      
      * GLIGEN: Fix check_repository_consistency failures
      
      * GLIGEN: Add 'PositionNet' to versatile_diffusion/modeling_text_unet.py
      
      * GLIGEN: check_repository_consistency: fix 'copy does not match' error
      
      * GLIGEN: Fix review comments (1)
      
      * GLIGEN: Fix E721 Do not compare types, use `isinstance()` failures
      
      * GLIGEN : Ensure _encode_prompt() copy matches to StableDiffusionPipeline
      
      * GLIGEN: Fix ruff E721 failure in unidiffuser/test_unidiffuser.py
      
      * GLIGEN: doc_builder: restyle pipeline_stable_diffusion_gligen.py
      
      * GLIGEN: reset files unrelated to GLIGEN
      
      * GLIGEN: Fix documentation comments (1)
      
      * GLIGEN: Fix review comments (2)
      
      * GLIGEN: Added FastTest
      
      * GLIGEN: Fix review comments (3)
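
      A hedged usage sketch of the text-box grounding this PR adds (the checkpoint name is illustrative): each phrase is placed inside its bounding box, with coordinates normalized to [0, 1].

```python
import torch
from diffusers import StableDiffusionGLIGENPipeline

pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a birthday cake and a vase of roses on a table",
    gligen_phrases=["a birthday cake", "a vase of roses"],
    gligen_boxes=[[0.1, 0.5, 0.45, 0.9], [0.55, 0.4, 0.9, 0.9]],
    gligen_scheduled_sampling_beta=0.3,  # fraction of steps that apply grounded attention
    num_inference_steps=50,
).images[0]
```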
  7. 04 Aug, 2023 1 commit
  8. 25 Jul, 2023 1 commit
  9. 17 Jul, 2023 1 commit
    • t2i pipeline (#3932) · a0597f33
      Will Berman authored
      
      
      * Quick implementation of t2i-adapter
      
      Load adapter module with from_pretrained
      
      Prototyping generalized adapter framework
      
      Writeup doc string for sideload framework(WIP) + some minor update on implementation
      
      Update adapter models
      
      Remove old adapter optional args in UNet
      
      Add StableDiffusionAdapterPipeline unit test
      
      Handle cpu offload in StableDiffusionAdapterPipeline
      
      Auto correct coding style
      
      Update model repo name to "RzZ/sd-v1-4-adapter-pipeline"
      
      Refactor MultiAdapter to better compatible with config system
      
      Export MultiAdapter
      
      Create pipeline document template from controlnet
      
      Create dummy objects
      
      Supporting new AdapterLight model
      
      Fix StableDiffusionAdapterPipeline common pipeline test
      
      [WIP] Update adapter pipeline document
      
      Handle num_inference_steps in StableDiffusionAdapterPipeline
      
      Update definition of Adapter "channels_in"
      
      Update documents
      
      Apply code style
      
      Fix doc typo and merge error
      
      Update doc string and example
      
      Quality of life improvement
      
      Remove redundant code and file from prototyping
      
      Remove unused package
      
      Remove comments
      
      Fix title
      
      Fix typo
      
      Add conditioning scale arg
      
      Bring back old implementation
      
      Offload sideload
      
      Add supply info on document
      
      Update src/diffusers/models/adapter.py
      Co-authored-by: Will Berman <wlbberman@gmail.com>
      
      Update MultiAdapter constructor
      
      Swap out custom checkpoint and update pipeline constructor
      
      Update document
      
      Apply suggestions from code review
      Co-authored-by: Will Berman <wlbberman@gmail.com>
      
      Correcting style
      
      Following single-file policy
      
      Update auto size in image preprocess func
      
      Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_adapter.py
      Co-authored-by: Will Berman <wlbberman@gmail.com>
      
      fix copies
      
      Update adapter pipeline behavior
      
      Add adapter_conditioning_scale doc string
      
      Add the missing doc string
      
      Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      Fix few bugs from suggestion
      
      Handle L-mode PIL image as control image
      
      Rename to differentiate adapter resblock
      
      Update src/diffusers/models/adapter.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      Fix typo
      
      Update adapter parameter name
      
      Update test case and code style
      
      Fix copies
      
      Fix typo
      
      Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_adapter.py
      Co-authored-by: Will Berman <wlbberman@gmail.com>
      
      Update Adapter class name
      
      Add checkpoint converting script
      
      Fix style
      
      Fix-copies
      
      Remove dev script
      
      Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      Updates for parameter rename
      
      Fix convert_adapter
      
      remove main
      
      fix diff
      
      more
      
      refactoring
      
      more
      
      more
      
      small fixes
      
      refactor
      
      tests
      
      more slow tests
      
      more tests
      
      Update docs/source/en/api/pipelines/overview.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      add community contributor to docs
      
      Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      Update docs/source/en/api/pipelines/stable_diffusion/adapter.mdx
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      fix
      
      remove from_adapters
      
      license
      
      paper link
      
      docs
      
      more url fixes
      
      more docs
      
      fix
      
      fixes
      
      fix
      
      fix
      
      * fix sample inplace add
      
      * additional_kwargs -> additional_residuals
      
      * move t2i adapter pipeline to own module
      
      * preprocess -> _preprocess_adapter_image
      
      * add TencentArc to license
      
      * fix example code links
      
      * add image converter and fix example doc string
      
      * fix links
      
      * clearer additional residual application
      
      ---------
      Co-authored-by: HimariO <dsfhe49854@gmail.com>
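
      A minimal sketch of the SD 1.x adapter pipeline this PR lands, assuming a canny-edge adapter (checkpoint names and image URL are illustrative placeholders):

```python
import torch
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2iadapter_canny_sd14v1", torch_dtype=torch.float16
)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", adapter=adapter, torch_dtype=torch.float16
).to("cuda")

edges = load_image("https://example.com/edges.png")  # placeholder control image
image = pipe(
    "a cozy cabin in the woods",
    image=edges,
    adapter_conditioning_scale=1.0,  # the conditioning-scale argument added here
).images[0]
```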
  10. 06 Jul, 2023 4 commits
    • disable num attention heads (#3969) · 8bf80fc8
      Patrick von Platen authored
      * disable num attention heads
      
      * finish
    • Kandinsky_v22_yiyi (#3936) · 74621567
      YiYi Xu authored
      
      
      * Kandinsky2_2
      
      * fix init kandinsky2_2
      
      * kandinsky2_2 fix inpainting
      
      * rename pipelines: remove decoder + 2_2 -> V22
      
      * Update scheduling_unclip.py
      
      * remove text_encoder and tokenizer arguments from doc string
      
      * add test for text2img
      
      * add tests for text2img & img2img
      
      * fix
      
      * add test for inpaint
      
      * add prior tests
      
      * style
      
      * copies
      
      * add controlnet test
      
      * style
      
      * add a test for controlnet_img2img
      
      * update prior_emb2emb api to accept image_embedding or image
      
      * add a test for prior_emb2emb
      
      * style
      
      * remove try except
      
      * example
      
      * fix
      
      * add doc string examples to all kandinsky pipelines
      
      * style
      
      * update doc
      
      * style
      
      * add a tip about 2.2
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * vae -> movq
      
      * vae -> movq
      
      * style
      
      * fix the #copied from
      
      * remove decoder from file name
      
      * update doc: add a section for kandinsky 2.2
      
      * fix
      
      * fix-copies
      
      * add copied from
      
      * add copies from for prior
      
      * add copies from for prior emb2emb
      
      * copy from for img2img
      
      * copied from for inpaint
      
      * more copied from
      
      * more copies from
      
      * more copies
      
      * remove the yiyi comments
      
      * Apply suggestions from code review
      
      * Self-contained example, pipeline order
      
      * Import prior output instead of redefining.
      
      * Style
      
      * Make VQModel compatible with model offload.
      
      * Fix copies
      
      ---------
      Co-authored-by: Shahmatov Arseniy <62886550+cene555@users.noreply.github.com>
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
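
      A minimal sketch of the two-stage prior/decoder split these pipelines expose (model IDs follow the kandinsky-community naming; output handling is a sketch):

```python
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline

# Stage 1: the prior maps text to image embeddings.
prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
image_embeds, negative_image_embeds = prior("red cat, 4k photo").to_tuple()

# Stage 2: the decoder turns embeddings into pixels (note: MoVQ, not a VAE).
decoder = KandinskyV22Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
).to("cuda")
image = decoder(
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    height=768,
    width=768,
).images[0]
```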
    • [SD-XL] Add new pipelines (#3859) · bc9a8cef
      Patrick von Platen authored
      
      
      * Add new text encoder
      
      * add transformers depth
      
      * More
      
      * Correct conversion script
      
      * Fix more
      
      * Fix more
      
      * Correct more
      
      * correct text encoder
      
      * Finish all
      
      * proof that it works in run local xl
      
      * clean up
      
      * Get refiner to work
      
      * Add red castle
      
      * Fix batch size
      
      * Improve pipelines more
      
      * Finish text2image tests
      
      * Add img2img test
      
      * Fix more
      
      * fix import
      
      * Fix embeddings for classic models (#3888)
      
      Fix embeddings for classic SD models.
      
      * Allow multiple prompts to be passed to the refiner (#3895)
      
      * finish more
      
      * Apply suggestions from code review
      
      * add watermarker
      
      * Model offload (#3889)
      
      * Model offload.
      
      * Model offload for refiner / img2img
      
      * Hardcode encoder offload on img2img vae encode
      
      Saves some GPU RAM in img2img / refiner tasks so it remains below 8 GB.
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * correct
      
      * fix
      
      * clean print
      
      * Update install warning for `invisible-watermark`
      
      * add: missing docstrings.
      
      * fix and simplify the usage example in img2img.
      
      * fix setup for watermarking.
      
      * Revert "fix setup for watermarking."
      
      This reverts commit 491bc9f5a640bbf46a97a8e52d6eff7e70eb8e4b.
      
      * fix: watermarking setup.
      
      * fix: op.
      
      * run make fix-copies.
      
      * make sure tests pass
      
      * improve convert
      
      * make tests pass
      
      * make tests pass
      
      * better error message
      
      * finish
      
      * finish
      
      * Fix final test
      
      ---------
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
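
      A minimal sketch of the base-plus-refiner flow these pipelines enable (a sketch, not the only supported usage; prompts and step counts are arbitrary):

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "a red castle on a hill at sunset"
# Hand the base output to the refiner as latents for the final denoising steps.
latents = base(prompt, output_type="latent").images
image = refiner(prompt, image=latents).images[0]
```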
    • Make `UNet2DConditionOutput` pickle-able (#3857) · de142611
      Prathik Rao authored
      
      
      * add default to unet output to prevent it from being a required arg
      
      * add unit test
      
      * make style
      
      * adjust unit test
      
      * mark as fast test
      
      * adjust assert statement in test
      
      ---------
      
      Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
      Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
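
      A small sketch of what the fix enables (the round-trip below could fail when `sample` was a required constructor argument, since pickle may reconstruct the output without arguments):

```python
import pickle
import torch
from diffusers.models.unet_2d_condition import UNet2DConditionOutput

out = UNet2DConditionOutput(sample=torch.randn(1, 4, 8, 8))
restored = pickle.loads(pickle.dumps(out))  # now round-trips cleanly
assert torch.equal(out.sample, restored.sample)
```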
  11. 30 Jun, 2023 1 commit
    • [docs] Model API (#3562) · 174dcd69
      Steven Liu authored
      * add modelmixin and unets
      
      * remove old model page
      
      * minor fixes
      
      * fix unet2dcondition
      
      * add vqmodel and autoencoderkl
      
      * add rest of models
      
      * fix autoencoderkl path
      
      * fix toctree
      
      * fix toctree again
      
      * apply feedback
      
      * apply feedback
      
      * fix copies
      
      * fix controlnet copy
      
      * fix copies
  12. 22 Jun, 2023 1 commit
    • Correct bad attn naming (#3797) · 88d26946
      Patrick von Platen authored
      
      
      * relax tolerance slightly
      
      * correct incorrect naming
      
      * correct naming
      
      * correct more
      
      * Apply suggestions from code review
      
      * Fix more
      
      * Correct more
      
      * correct incorrect naming
      
      * Update src/diffusers/models/controlnet.py
      
      * Correct flax
      
      * Correct renaming
      
      * Correct blocks
      
      * Fix more
      
      * Correct more
      
      * make style
      
      * Fix flax
      
      * make style
      
      * rename
      
      * rename
      
      * rename attn head dim to attention_head_dim
      
      * correct flax
      
      * make style
      
      * improve
      
      * Correct more
      
      * make style
      
      * fix more
      
      * mkae style
      
      * Update src/diffusers/models/controlnet_flax.py
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      ---------
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
  13. 05 Jun, 2023 1 commit
  14. 30 May, 2023 1 commit
  15. 25 May, 2023 1 commit
  16. 22 May, 2023 1 commit
    • Support for cross-attention bias / mask (#2634) · 64bf5d33
      Birch-san authored
      
      
      * Cross-attention masks
      
      prefer qualified symbol, fix accidental Optional
      
      prefer qualified symbol in AttentionProcessor
      
      prefer qualified symbol in embeddings.py
      
      qualified symbol in transformer_2d
      
      qualify FloatTensor in unet_2d_blocks
      
      move new transformer_2d params attention_mask, encoder_attention_mask to the end of the section which is assumed (e.g. by functions such as checkpoint()) to have a stable positional param interface. regard return_dict as a special-case which is assumed to be injected separately from positional params (e.g. by create_custom_forward()).
      
      move new encoder_attention_mask param to end of CrossAttn block interfaces and Unet2DCondition interface, to maintain positional param interface.
      
      regenerate modeling_text_unet.py
      
      remove unused import
      
      unet_2d_condition encoder_attention_mask docs
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      versatile_diffusion/modeling_text_unet.py encoder_attention_mask docs
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      transformer_2d encoder_attention_mask docs
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      unet_2d_blocks.py: add parameter name comments
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      revert description. bool-to-bias treatment happens in unet_2d_condition only.
      
      comment parameter names
      
      fix copies, style
      
      * encoder_attention_mask for SimpleCrossAttnDownBlock2D, SimpleCrossAttnUpBlock2D
      
      * encoder_attention_mask for UNetMidBlock2DSimpleCrossAttn
      
      * support attention_mask, encoder_attention_mask in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D, KAttentionBlock. fix binding of attention_mask, cross_attention_kwargs params in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D checkpoint invocations.
      
      * fix mistake made during merge conflict resolution
      
      * regenerate versatile_diffusion
      
      * pass time embedding into checkpointed attention invocation
      
      * always assume encoder_attention_mask is a mask (i.e. not a bias).
      
      * style, fix-copies
      
      * add tests for cross-attention masks
      
      * add test for padding of attention mask
      
      * explain mask's query_tokens dim. fix explanation about broadcasting over channels; we actually broadcast over query tokens
      
      * support both masks and biases in Transformer2DModel#forward. document behaviour
      
      * fix-copies
      
      * delete attention_mask docs on the basis I never tested self-attention masking myself. not comfortable explaining it, since I don't actually understand how a self-attn mask can work in its current form: the key length will be different in every ResBlock (we don't downsample the mask when we downsample the image).
      
      * review feedback: the standard Unet blocks shouldn't pass temb to attn (only to resnet). remove from KCrossAttnDownBlock2D,KCrossAttnUpBlock2D#forward.
      
      * remove encoder_attention_mask param from SimpleCrossAttn{Up,Down}Block2D,UNetMidBlock2DSimpleCrossAttn, and mask-choice in those blocks' #forward, on the basis that they only do one type of attention, so the consumer can pass whichever type of attention_mask is appropriate.
      
      * put attention mask padding back to how it was (since the SD use-case it enabled wasn't important, and it breaks the original unclip use-case). disable the test which was added.
      
      * fix-copies
      
      * style
      
      * fix-copies
      
      * put encoder_attention_mask param back into Simple block forward interfaces, to ensure consistency of forward interface.
      
      * restore passing of emb to KAttentionBlock#forward, on the basis that removal caused test failures. restore also the passing of emb to checkpointed calls to KAttentionBlock#forward.
      
      * make simple unet2d blocks use encoder_attention_mask, but only when attention_mask is None. this should fix UnCLIP compatibility.
      
      * fix copies
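
      A runnable sketch of the new `encoder_attention_mask` argument (a deliberately tiny UNet config so the example is fast; the mask semantics follow the commit description: a boolean mask over text tokens, converted to an additive bias inside unet_2d_condition):

```python
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel(
    sample_size=32,
    block_out_channels=(32, 64),
    down_block_types=("CrossAttnDownBlock2D", "DownBlock2D"),
    up_block_types=("UpBlock2D", "CrossAttnUpBlock2D"),
    cross_attention_dim=32,
    attention_head_dim=8,
    layers_per_block=1,
)

sample = torch.randn(1, 4, 32, 32)
encoder_hidden_states = torch.randn(1, 77, 32)
# True = attend to this token, False = ignore it (e.g. padding tokens).
encoder_attention_mask = torch.ones(1, 77, dtype=torch.bool)
encoder_attention_mask[:, 40:] = False

out = unet(
    sample,
    timestep=0,
    encoder_hidden_states=encoder_hidden_states,
    encoder_attention_mask=encoder_attention_mask,
).sample
```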
  17. 02 May, 2023 1 commit
  18. 01 May, 2023 1 commit
    • Torch compile graph fix (#3286) · 0e82fb19
      Patrick von Platen authored
      * fix more
      
      * Fix more
      
      * fix more
      
      * Apply suggestions from code review
      
      * fix
      
      * make style
      
      * make fix-copies
      
      * fix
      
      * make sure torch compile
      
      * Clean
      
      * fix test
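
      A minimal sketch of what the graph fixes unlock: compiling the UNet without graph breaks (model ID illustrative; `fullgraph=True` asserts the whole forward is captured).

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Compile only the UNet, the hot loop of the pipeline.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
image = pipe("a photo of an astronaut riding a horse").images[0]
```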
  19. 25 Apr, 2023 1 commit
    • add model (#3230) · e51f19ae
      Patrick von Platen authored
      
      
      * add
      
      * clean
      
      * up
      
      * clean up more
      
      * fix more tests
      
      * Improve docs further
      
      * improve
      
      * more fixes docs
      
      * Improve docs more
      
      * Update src/diffusers/models/unet_2d_condition.py
      
      * fix
      
      * up
      
      * update doc links
      
      * make fix-copies
      
      * add safety checker and watermarker to stage 3 doc page code snippets
      
      * speed optimizations docs
      
      * memory optimization docs
      
      * make style
      
      * add watermarking snippets to doc string examples
      
      * make style
      
      * use pt_to_pil helper functions in doc strings
      
      * skip mps tests
      
      * Improve safety
      
      * make style
      
      * new logic
      
      * fix
      
      * fix bad onnx design
      
      * make new stable diffusion upscale pipeline model arguments optional
      
      * define has_nsfw_concept when non-pil output type
      
      * lowercase linked to notebook name
      
      ---------
      Co-authored-by: William Berman <WLBberman@gmail.com>
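
      A small sketch of the `pt_to_pil` helper the doc strings now use (dummy tensors stand in for real pipeline output; assumes the usual (B, C, H, W) layout in [-1, 1]):

```python
import torch
from diffusers.utils import pt_to_pil

# Convert a batch of image tensors into PIL images, as done between stages
# in the doc string examples this PR adds.
images = torch.rand(2, 3, 64, 64) * 2.0 - 1.0  # placeholder pipeline output
pil_images = pt_to_pil(images)
pil_images[0].save("stage_output.png")
```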
  20. 18 Apr, 2023 2 commits
  21. 17 Apr, 2023 1 commit
  22. 11 Apr, 2023 4 commits
    • Attention processor cross attention norm group norm (#3021) · 98c5e5da
      Will Berman authored
      add group norm type to attention processor cross attention norm
      
      This lets the cross attention norm use both a group norm block and a
      layer norm block.
      
      The group norm operates along the channels dimension
      and requires input shape (batch size, channels, *), whereas the layer norm with a single
      `normalized_shape` dimension only operates over the least significant
      dimension i.e. (*, channels).
      
      The channels we want to normalize are the hidden dimension of the encoder hidden states.
      
      By convention, the encoder hidden states are always passed as (batch size, sequence
      length, hidden states).
      
      This means the layer norm can operate on the tensor without modification, but the group
      norm requires flipping the last two dimensions to operate on (batch size, hidden states, sequence length).
      
      All existing attention processors will have the same logic and we can
      consolidate it in a helper function `prepare_encoder_hidden_states`
      
      prepare_encoder_hidden_states -> norm_encoder_hidden_states re: @patrickvonplaten
      
      move norm_cross defined check to outside norm_encoder_hidden_states
      
      add missing attn.norm_cross check
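
      A plain-PyTorch sketch of the dimension handling described above (the transpose dance is the logic consolidated into the helper; shapes are arbitrary examples):

```python
import torch
from torch import nn

batch, seq_len, hidden = 2, 77, 320
encoder_hidden_states = torch.randn(batch, seq_len, hidden)

# LayerNorm normalizes the trailing dimension, so (batch, seq, hidden) works as-is.
layer_norm = nn.LayerNorm(hidden)
out_ln = layer_norm(encoder_hidden_states)

# GroupNorm normalizes the channel dimension at dim 1, so the tensor must be
# transposed to (batch, hidden, seq) and back again.
group_norm = nn.GroupNorm(num_groups=32, num_channels=hidden)
out_gn = group_norm(encoder_hidden_states.transpose(1, 2)).transpose(1, 2)

assert out_ln.shape == out_gn.shape == (batch, seq_len, hidden)
```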
    • unet time embedding activation function (#3048) · 2d52e81c
      Will Berman authored
      * unet time embedding activation function
      
      * typo act_fn -> time_embedding_act_fn
      
      * flatten conditional
    • add only cross attention to simple attention blocks (#3011) · c6180a31
      Will Berman authored
      * add only cross attention to simple attention blocks
      
      * add test for only_cross_attention re: @patrickvonplaten
      
      * mid_block_only_cross_attention better default
      
      allow mid_block_only_cross_attention to default to
      `only_cross_attention` when `only_cross_attention` is given
      as a single boolean
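
      A sketch of the defaulting rule just described (names follow the commit message; the in-library code may differ in detail):

```python
def resolve_mid_block_only_cross_attention(only_cross_attention, mid_block_only_cross_attention=None):
    if mid_block_only_cross_attention is None and isinstance(only_cross_attention, bool):
        # A single bool applies to every block, including the mid block.
        mid_block_only_cross_attention = only_cross_attention
    return False if mid_block_only_cross_attention is None else mid_block_only_cross_attention

assert resolve_mid_block_only_cross_attention(True) is True
assert resolve_mid_block_only_cross_attention([True, False]) is False  # per-block list: mid stays False
```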
    • Fix config prints and save, load of pipelines (#2849) · 8b451eb6
      Patrick von Platen authored
      * [Config] Fix config prints and save, load
      
      * Only use potential nn.Modules for dtype and device
      
      * Correct vae image processor
      
      * make sure in_channels is not accessed directly
      
      * make sure in channels is only accessed via config
      
      * Make sure schedulers only access config attributes
      
      * Make sure to access config in SAG
      
      * Fix vae processor and make style
      
      * add tests
      
      * up
      
      * make style
      
      * Fix more naming issues
      
      * Final fix with vae config
      
      * change more
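
      A one-liner illustrating the access pattern this PR enforces (default model config; direct attribute access such as `unet.in_channels` is what gets routed through the config):

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel()
# Configuration values are read via `.config`, not directly off the module.
in_channels = unet.config.in_channels
print(in_channels)  # 4 for the default config
```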
  23. 10 Apr, 2023 3 commits
  24. 27 Mar, 2023 1 commit
  25. 23 Mar, 2023 1 commit
    • Add AudioLDM (#2232) · b94880e5
      Sanchit Gandhi authored
      
      
      * Add AudioLDM
      
      * up
      
      * add vocoder
      
      * start unet
      
      * unconditional unet
      
      * clap, vocoder and vae
      
      * clean-up: conversion scripts
      
      * fix: conversion script token_type_ids
      
      * clean-up: pipeline docstring
      
      * tests: from SD
      
      * clean-up: cpu offload vocoder instead of safety checker
      
      * feat: adapt tests to audioldm
      
      * feat: add docs
      
      * clean-up: amend pipeline docstrings
      
      * clean-up: make style
      
      * clean-up: make fix-copies
      
      * fix: add doc path to toctree
      
      * clean-up: args for conversion script
      
      * clean-up: paths to checkpoints
      
      * fix: use conditional unet
      
      * clean-up: make style
      
      * fix: type hints for UNet
      
      * clean-up: docstring for UNet
      
      * clean-up: make style
      
      * clean-up: remove duplicate in docstring
      
      * clean-up: make style
      
      * clean-up: make fix-copies
      
      * clean-up: move imports to start in code snippet
      
      * fix: pass cross_attention_dim as a list/tuple to unet
      
      * clean-up: make fix-copies
      
      * fix: update checkpoint path
      
      * fix: unet cross_attention_dim in tests
      
      * film embeddings -> class embeddings
      
      * Apply suggestions from code review
      Co-authored-by: Will Berman <wlbberman@gmail.com>
      
      * fix: unet film embed to use existing args
      
      * fix: unet tests to use existing args
      
      * fix: make style
      
      * fix: transformers import and version in init
      
      * clean-up: make style
      
      * Revert "clean-up: make style"
      
      This reverts commit 5d6d1f8b324f5583e7805dc01e2c86e493660d66.
      
      * clean-up: make style
      
      * clean-up: use pipeline tester mixin tests where poss
      
      * clean-up: skip attn slicing test
      
      * fix: add torch dtype to docs
      
      * fix: remove conversion script out of src
      
      * fix: remove .detach from 1d waveform
      
      * fix: reduce default num inf steps
      
      * fix: swap height/width -> audio_length_in_s
      
      * clean-up: make style
      
      * fix: remove nightly tests
      
      * fix: imports in conversion script
      
      * clean-up: slim-down to two slow tests
      
      * clean-up: slim-down fast tests
      
      * fix: batch consistent tests
      
      * clean-up: make style
      
      * clean-up: remove vae slicing fast test
      
      * clean-up: propagate changes to doc
      
      * fix: increase test tol to 1e-2
      
      * clean-up: finish docs
      
      * clean-up: make style
      
      * feat: vocoder / VAE compatibility check
      
      * feat: possibly expand / cut audio waveform
      
      * fix: pipeline call signature test
      
      * fix: slow tests output len
      
      * clean-up: make style
      
      * make style
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: William Berman <WLBberman@gmail.com>
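
      A minimal usage sketch of the new pipeline (checkpoint name per the commits above; note generation is parameterized by duration, not height/width, per the `audio_length_in_s` change):

```python
import torch
from diffusers import AudioLDMPipeline

pipe = AudioLDMPipeline.from_pretrained(
    "cvssp/audioldm", torch_dtype=torch.float16
).to("cuda")

audio = pipe(
    "Techno music with a strong, upbeat tempo and high melodic riffs",
    num_inference_steps=10,  # the reduced default step count mentioned above
    audio_length_in_s=5.0,
).audios[0]  # a 1-D waveform as a numpy array
```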
  26. 21 Mar, 2023 1 commit
  27. 15 Mar, 2023 1 commit
  28. 14 Mar, 2023 1 commit
  29. 07 Mar, 2023 1 commit