1. 14 Mar, 2024 1 commit
    • [`Tests`] Update a deprecated parameter in test files and fix several typos (#7277) · 5d848ec0
      M. Tolga Cangöz authored
      * Add properties and `IPAdapterTesterMixin` tests for `StableDiffusionPanoramaPipeline`
      
      * Fix variable name typo and update comments
      
      * Update deprecated `output_type="numpy"` to "np" in test files
      
      * Discard changes to src/diffusers/pipelines/stable_diffusion_panorama/pipeline_stable_diffusion_panorama.py
      
      * Update test_stable_diffusion_panorama.py
      
      * Update numbers in README.md
      
      * Update get_guidance_scale_embedding method to use timesteps instead of w
      
      * Update number of checkpoints in README.md
      
      * Add type hints and fix var name
      
      * Fix PyTorch's convention for inplace functions
      
      * Fix a typo
      
      * Revert "Fix PyTorch's convention for inplace functions"
      
      This reverts commit 74350cf65b2c9aa77f08bec7937d7a8b13edb509.
      
      * Fix typos
      
      * Indent
      
      * Refactor get_guidance_scale_embedding method in LEditsPPPipelineStableDiffusionXL class
      5d848ec0
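The headline change in this commit is the `output_type` switch: passing `output_type="numpy"` is deprecated in favour of the short alias `"np"`. A minimal sketch of an updated call, assuming a panorama pipeline like the one in the touched tests (model id and prompt are placeholders, not taken from the PR):

```python
import torch
from diffusers import StableDiffusionPanoramaPipeline

pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16
).to("cuda")

# previously: pipe(..., output_type="numpy")  -> now deprecated
# updated:    output_type="np" returns NumPy arrays instead of PIL images
images = pipe("a photo of the dolomites", output_type="np").images
```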
  2. 13 Mar, 2024 1 commit
    • [LoRA] use the PyTorch classes wherever needed and start deprecation cycles (#7204) · 531e7191
      Sayak Paul authored
      * fix PyTorch classes and start deprecation cycles.
      
      * remove args crafting for accommodating scale.
      
      * remove scale check in feedforward.
      
      * assert against nn.Linear and not CompatibleLinear.
      
      * remove conv_cls and linear_cls.
      
      * remove scale
      
      * 👋 scale.
      
      * fix: unet2dcondition
      
      * fix attention.py
      
      * fix: attention.py again
      
      * fix: unet_2d_blocks.
      
      * fix-copies.
      
      * more fixes.
      
      * fix: resnet.py
      
      * more fixes
      
      * fix i2vgenxl unet.
      
      * deprecate scale gently.
      
      * fix-copies
      
      * Apply suggestions from code review
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * quality
      
      * throw warning when scale is passed to the BasicTransformerBlock class.
      
      * remove scale from signature.
      
      * cross_attention_kwargs, very nice catch by Yiyi
      
      * fix: logger.warn
      
      * make deprecation message clearer.
      
      * address final comments.
      
      * maintain same deprecation message and also add it to activations.
      
      * address yiyi
      
      * fix copies
      
      * Apply suggestions from code review
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * more deprecation
      
      * fix-copies
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      531e7191
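The gist of the PR above: `scale` is no longer threaded through every forward signature; it travels via `cross_attention_kwargs`, and passing it directly only triggers a deprecation warning. A rough sketch of that pattern using `diffusers.utils.deprecate` (the class name, message wording, and removal version below are illustrative, not copied from the diff):

```python
from diffusers.utils import deprecate


class ExampleFeedForward:
    def forward(self, hidden_states, *args, **kwargs):
        # `scale` used to be accepted directly; it is now read from
        # `cross_attention_kwargs` by the LoRA layers themselves, so passing it
        # here only warns and is otherwise ignored.
        if len(args) > 0 or kwargs.get("scale", None) is not None:
            deprecate(
                "scale",
                "1.0.0",
                "The `scale` argument is deprecated and will be ignored; pass it via "
                "`cross_attention_kwargs` on the pipeline call instead.",
            )
        return hidden_states
```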
  3. 08 Mar, 2024 1 commit
  4. 07 Mar, 2024 1 commit
  5. 08 Feb, 2024 2 commits
    • change to 2024 in the license (#6902) · 30e5e81d
      Sayak Paul authored
      change to 2024
      30e5e81d
    • [I2VGenXL] `attention_head_dim` in the UNet (#6872) · 491a933a
      Sayak Paul authored
      * attention_head_dim
      
      * debug
      
      * print more info
      
      * correct num_attention_heads behaviour
      
      * down_block_num_attention_heads -> num_attention_heads.
      
      * correct the image link in doc.
      
      * add: deprecation for num_attention_head
      
      * fix: test argument to use attention_head_dim
      
      * more fixes.
      
      * quality
      
      * address comments.
      
      * remove deprecation.
      491a933a
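The fix above hinges on the distinction between the per-head width (`attention_head_dim`) and the head count. A small illustrative helper (hypothetical, not part of the PR) showing how the head count per block falls out of the block channels:

```python
# Hypothetical helper illustrating the relationship the PR corrects:
# `attention_head_dim` is the per-head width; the head count per block
# is the block's channel width divided by that value.
def heads_per_block(block_out_channels, attention_head_dim):
    return [channels // attention_head_dim for channels in block_out_channels]


print(heads_per_block((320, 640, 1280, 1280), attention_head_dim=64))
# -> [5, 10, 20, 20]
```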
  6. 06 Feb, 2024 2 commits
  7. 04 Feb, 2024 1 commit
  8. 31 Jan, 2024 1 commit
  9. 27 Dec, 2023 1 commit
  10. 21 Dec, 2023 1 commit
    • open muse (#5437) · 40398152
      Will Berman authored
      
      
      amused
      
      rename
      
      Update docs/source/en/api/pipelines/amused.md
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      AdaLayerNormContinuous default values
      
      custom micro conditioning
      
      micro conditioning docs
      
      put lookup from codebook in constructor
      
      fix conversion script
      
      remove manual fused flash attn kernel
      
      add training script
      
      temp remove training script
      
      add dummy gradient checkpointing func
      
      clarify temperatures is an instance variable by setting it
      
      remove additional SkipFF block args
      
      hardcode norm args
      
      rename tests folder
      
      fix paths and samples
      
      fix tests
      
      add training script
      
      training readme
      
      lora saving and loading
      
      non-lora saving/loading
      
      some readme fixes
      
      guards
      
      Update docs/source/en/api/pipelines/amused.md
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      Update examples/amused/README.md
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      Update examples/amused/train_amused.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      vae upcasting
      
      add fp16 integration tests
      
      use tuple for micro cond
      
      copyrights
      
      remove casts
      
      delegate to torch.nn.LayerNorm
      
      move temperature to pipeline call
      
      upsampling/downsampling changes
      40398152
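For context, the pipeline introduced by this PR is exposed as `AmusedPipeline`. A short usage sketch; the checkpoint id and sampling settings below are assumptions, not taken from the PR:

```python
import torch
from diffusers import AmusedPipeline

pipe = AmusedPipeline.from_pretrained("amused/amused-256", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# aMUSEd is a masked-token (MUSE-style) model, so only a handful of steps are needed.
image = pipe("a cowboy riding a horse in the desert", num_inference_steps=12).images[0]
image.save("amused.png")
```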
  11. 04 Dec, 2023 1 commit
    • [Feature] Support IP-Adapter Plus (#5915) · 0a08d419
      takuoko authored
      
      
      * Support IP-Adapter Plus
      
      * fix format
      
      * restore before black format
      
      * restore before black format
      
      * generic
      
      * Refactor PerceiverAttention
      
      * format
      
      * fix test and refactor PerceiverAttention
      
      * generic encode_image
      
      * keep attention implementation
      
      * merge tests
      
      * encode_image backward compatible
      
      * code quality
      
      * fix controlnet inpaint pipeline
      
      * refactor FFN
      
      * refactor FFN
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      0a08d419
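A usage sketch for the feature above, assuming the weight names published in the `h94/IP-Adapter` Hub repo (the "plus" checkpoints use the resampler-style image projection this PR adds):

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# The "plus" variant uses a perceiver/resampler projection instead of a linear one.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-plus_sd15.bin")

ref = load_image("reference.png")  # placeholder path for the image prompt
image = pipe("best quality, high quality", ip_adapter_image=ref, num_inference_steps=30).images[0]
```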
  12. 29 Nov, 2023 1 commit
    • Add SVD (#5895) · 63f767ef
      Suraj Patil authored
      
      
      * begin model
      
      * finish blocks
      
      * add_embedding
      
      * addition_time_embed_dim
      
      * use TimestepEmbedding
      
      * fix temporal res block
      
      * fix time_pos_embed
      
      * fix add_embedding
      
      * add conversion script
      
      * fix model
      
      * up
      
      * add new resnet blocks
      
      * make forward work
      
      * return sample in original shape
      
      * fix temb shape in TemporalResnetBlock
      
      * add spatio temporal transformers
      
      * add vae blocks
      
      * fix blocks
      
      * update
      
      * update
      
      * fix shapes in AlphaBlender and add time activation in res block
      
      * use new blocks
      
      * style
      
      * fix temb shape
      
      * fix SpatioTemporalResBlock
      
      * reuse TemporalBasicTransformerBlock
      
      * fix TemporalBasicTransformerBlock
      
      * use TransformerSpatioTemporalModel
      
      * fix TransformerSpatioTemporalModel
      
      * fix time_context dim
      
      * clean up
      
      * make temb optional
      
      * add blocks
      
      * rename model
      
      * update conversion script
      
      * remove UNetMidBlockSpatioTemporal
      
      * add in init
      
      * remove unused arg
      
      * remove unused arg
      
      * remove more unused args
      
      * up
      
      * up
      
      * check for None
      
      * update vae
      
      * update up/mid blocks for decoder
      
      * begin pipeline
      
      * adapt scheduler
      
      * add guidance scalings
      
      * fix norm eps in temporal transformers
      
      * add temporal autoencoder
      
      * make pipeline run
      
      * fix frame decoding
      
      * decode in float32
      
      * decode n frames at a time
      
      * pass decoding_t to decode_latents
      
      * fix decode_latents
      
      * vae encode/decode in fp32
      
      * fix dtype in TransformerSpatioTemporalModel
      
      * type image_latents same as image_embeddings
      
      * allow using different eps in temporal block for video decoder
      
      * fix default values in vae
      
      * pass num frames in decode
      
      * switch spatial to temporal for mixing in VAE
      
      * fix num frames during split decoding
      
      * cast alpha to sample dtype
      
      * fix attention in MidBlockTemporalDecoder
      
      * fix typo
      
      * fix guidance_scales dtype
      
      * fix missing activation in TemporalDecoder
      
      * skip_post_quant_conv
      
      * add vae conversion
      
      * style
      
      * take guidance scale as input
      
      * up
      
      * allow passing PIL to export_video
      
      * accept fps as arg
      
      * add pipeline and vae in init
      
      * remove hack
      
      * use AutoencoderKLTemporalDecoder
      
      * don't scale image latents
      
      * add unet tests
      
      * clean up unet
      
      * clean TransformerSpatioTemporalModel
      
      * add slow svd test
      
      * clean up
      
      * make temb optional in Decoder mid block
      
      * fix norm eps in TransformerSpatioTemporalModel
      
      * clean up temp decoder
      
      * clean up
      
      * clean up
      
      * use c_noise values for timesteps
      
      * use math for log
      
      * update
      
      * fix copies
      
      * doc
      
      * upcast vae
      
      * update forward pass for gradient checkpointing
      
      * make added_time_ids a tensor
      
      * up
      
      * fix upcasting
      
      * remove post quant conv
      
      * add _resize_with_antialiasing
      
      * fix _compute_padding
      
      * cleanup model
      
      * more cleanup
      
      * more cleanup
      
      * more cleanup
      
      * remove freeu
      
      * remove attn slice
      
      * small clean
      
      * up
      
      * up
      
      * remove extra step kwargs
      
      * remove eta
      
      * remove dropout
      
      * remove callback
      
      * remove merge factor args
      
      * clean
      
      * clean up
      
      * move to dedicated folder
      
      * remove attention_head_dim
      
      * docstr and small fix
      
      * update unet doc strings
      
      * rename decoding_t
      
      * correct linting
      
      * store c_skip and c_out
      
      * cleanup
      
      * clean TemporalResnetBlock
      
      * more cleanup
      
      * clean up vae
      
      * clean up
      
      * begin doc
      
      * more cleanup
      
      * up
      
      * up
      
      * doc
      
      * Improve
      
      * better naming
      
      * better naming
      
      * better naming
      
      * better naming
      
      * better naming
      
      * better naming
      
      * better naming
      
      * better naming
      
      * Apply suggestions from code review
      
      * Default chunk size to None
      
      * add example
      
      * Better
      
      * Apply suggestions from code review
      
      * update doc
      
      * Update src/diffusers/pipelines/stable_diffusion_video/pipeline_stable_diffusion_video.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * style
      
      * Get torch compile working
      
      * up
      
      * rename
      
      * fix doc
      
      * add chunking
      
      * torch compile
      
      * torch compile
      
      * add modelling outputs
      
      * torch compile
      
      * Improve chunking
      
      * Apply suggestions from code review
      
      * Update docs/source/en/using-diffusers/svd.md
      
      * Close diff tag
      
      * remove slicing
      
      * resnet docstr
      
      * add docstr in resnet
      
      * rename
      
      * Apply suggestions from code review
      
      * update tests
      
      * Fix output type latents
      
      * fix more
      
      * fix more
      
      * Update docs/source/en/using-diffusers/svd.md
      
      * fix more
      
      * add pipeline tests
      
      * remove unused arg
      
      * clean  up
      
      * make sure get_scaling receives tensors
      
      * fix euler scheduler
      
      * fix get_scalings
      
      * simply euler for now
      
      * remove old test file
      
      * use randn_tensor to create noise
      
      * fix device for rand tensor
      
      * increase expected_max_difference
      
      * fix test_inference_batch_single_identical
      
      * actually fix test_inference_batch_single_identical
      
      * disable test_save_load_float16
      
      * skip test_float16_inference
      
      * skip test_inference_batch_single_identical
      
      * fix test_xformers_attention_forwardGenerator_pass
      
      * Apply suggestions from code review
      
      * update StableVideoDiffusionPipelineSlowTests
      
      * update image
      
      * add diffusers example
      
      * fix more
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
      63f767ef
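Put together, the PR lands as `StableVideoDiffusionPipeline` with a temporal-decoder VAE and chunked frame decoding (`decode_chunk_size`, the renamed `decoding_t`). A usage sketch; the checkpoint id is the one Stability released for this model and is an assumption here:

```python
import torch
from PIL import Image
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

image = Image.open("conditioning.png").convert("RGB").resize((1024, 576))  # placeholder input image
# decode_chunk_size controls how many frames the VAE decodes at once (memory vs. speed trade-off).
frames = pipe(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```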
  13. 07 Nov, 2023 1 commit
  14. 06 Nov, 2023 1 commit
    • [Feat] PixArt-Alpha (#5642) · d61889fc
      Sayak Paul authored
      
      
      * init pixart alpha pipeline
      
      * fix: import
      
      * script
      
      * script
      
      * script
      
      * add: vae to the pipeline
      
      * add: vae_scale_factor
      
      * add: checkpoint_path
      
      * clean conversion script a bit.
      
      * size embeddings.
      
      * fix: size embedding
      
      * update script
      
      * support for interpolation of position embedding.
      
      * support for conditioning.
      
      * ..
      
      * ..
      
      * ..
      
      * final layer
      
      * final layer
      
      * align if encode_prompt
      
      * support for caption embedding
      
      * refactor
      
      * refactor
      
      * refactor
      
      * start cross attention
      
      * start cross attention
      
      * cross_attention_dim
      
      * cross
      
      * cross
      
      * support for resolution and aspect_ratio
      
      * support for caption projection
      
      * refactor patch embeddings
      
      * batch_size
      
      * up
      
      * commit
      
      * commit
      
      * commit.
      
      * squeeze
      
      * squeeze
      
      * squeeze
      
      * squeeze
      
      * squeeze
      
      * squeeze
      
      * squeeze
      
      * squeeze
      
      * squeeze
      
      * squeeze
      
      * squeeze
      
      * squeeze.
      
      * squeeze.
      
      * fix final block./
      
      * fix final block./
      
      * fix final block./
      
      * clean
      
      * fix: interpolation scale.
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging'
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * make --checkpoint_path non-required.
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * remove num_tokens
      
      * timesteps -> timestep
      
      * timesteps -> timestep
      
      * timesteps -> timestep
      
      * timesteps -> timestep
      
      * timesteps -> timestep
      
      * timesteps -> timestep
      
      * debug
      
      * debug
      
      * update conversion script.
      
      * update conversion script.
      
      * update conversion script.
      
      * debug
      
      * debug
      
      * debug
      
      * clean
      
      * debug
      
      * debug
      
      * debug
      
      * debug
      
      * debug
      
      * debug
      
      * debug
      
      * debug
      
      * debug
      
      * debug
      
      * debug
      
      * debug
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * clean
      
      * fix
      
      * fix
      
      * boom
      
      * boom
      
      * some changes
      
      * boom
      
      * save
      
      * up
      
      * remove i
      
      * fix more tests
      
      * DPMSolverMultistepScheduler
      
      * fix
      
      * offloading
      
      * fix conversion script
      
      * fix conversion script
      
      * remove print
      
      * remove support for negative prompt embeds.
      
      * typo.
      
      * remove extra kwargs
      
      * bring conversion script to where it was
      
      * fix
      
      * trying my luck
      
      * trying my luck again
      
      * again
      
      * again
      
      * again
      
      * clean up
      
      * up
      
      * up
      
      * update example
      
      * support for 512
      
      * remove spacing
      
      * finalize docs.
      
      * test debug
      
      * fix: assertion values.
      
      * debug
      
      * debug
      
      * debug
      
      * fix: repeat
      
      * remove prints.
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * Correct more
      
      * Apply suggestions from code review
      
      * Change all
      
      * Clean more
      
      * fix more
      
      * Fix more
      
      * Fix more
      
      * Correct more
      
      * address patrick's comments.
      
      * remove unneeded args
      
      * clean up pipeline.
      
      * style
      
      * make the use of additional conditions better conditioned.
      
      * None better
      
      * dtype
      
      * height and width validation
      
      * add a note about size brackets.
      
      * fix
      
      * spit out slow test outputs.
      
      * fix?
      
      * fix optional test
      
      * fix more
      
      * remove unneeded comment
      
      * debug
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      d61889fc
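The result of this PR is `PixArtAlphaPipeline` (a DiT-style transformer backbone with a T5 text encoder). A minimal usage sketch; the checkpoint id is the one published by the PixArt-alpha authors and is assumed here:

```python
import torch
from diffusers import PixArtAlphaPipeline

pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16
).to("cuda")

image = pipe("an astronaut riding a green horse on the moon").images[0]
image.save("pixart.png")
```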
  15. 02 Nov, 2023 1 commit
    • Animatediff Proposal (#5413) · 2a8cf8e3
      Dhruv Nair authored
      * draft design
      
      * clean up
      
      * clean up
      
      * clean up
      
      * clean up
      
      * clean up
      
      * clean  up
      
      * clean up
      
      * clean up
      
      * clean up
      
      * update pipeline
      
      * clean up
      
      * clean up
      
      * clean up
      
      * add tests
      
      * change motion block
      
      * clean up
      
      * clean up
      
      * clean up
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      * clean up
      
      * update
      
      * update
      
      * update model test
      
      * update
      
      * update
      
      * update
      
      * update
      
      * make style
      
      * update
      
      * fix embeddings
      
      * update
      
      * merge upstream
      
      * max fix copies
      
      * fix bug
      
      * fix mistake
      
      * add docs
      
      * update
      
      * clean up
      
      * update
      
      * clean up
      
      * clean up
      
      * fix docstrings
      
      * fix docstrings
      
      * update
      
      * update
      
      * clean  up
      
      * update
      2a8cf8e3
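The design above ships as a `MotionAdapter` that plugs temporal layers into a regular Stable Diffusion UNet via `AnimateDiffPipeline`. A short usage sketch (the adapter and base-model ids are commonly used public checkpoints, assumed here):

```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")
# A linear-beta DDIM schedule is the setup AnimateDiff motion modules were trained with.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, beta_schedule="linear", clip_sample=False, timestep_spacing="linspace"
)

frames = pipe("a corgi running on the beach, golden hour", num_frames=16).frames[0]
export_to_gif(frames, "animation.gif")
```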
  16. 24 Oct, 2023 1 commit
  17. 13 Oct, 2023 1 commit
  18. 11 Oct, 2023 1 commit
  19. 11 Sep, 2023 1 commit
    • Lazy Import for Diffusers (#4829) · b6e0b016
      Dhruv Nair authored
      
      
      * initial commit
      
      * move modules to import struct
      
      * add dummy objects and _LazyModule
      
      * add lazy import to schedulers
      
      * clean up unused imports
      
      * lazy import on models module
      
      * lazy import for schedulers module
      
      * add lazy import to pipelines module
      
      * lazy import altdiffusion
      
      * lazy import audio diffusion
      
      * lazy import audioldm
      
      * lazy import consistency model
      
      * lazy import controlnet
      
      * lazy import dance diffusion ddim ddpm
      
      * lazy import deepfloyd
      
      * lazy import kandinsky
      
      * lazy imports
      
      * lazy import semantic diffusion
      
      * lazy imports
      
      * lazy import stable diffusion
      
      * move sd output to its own module
      
      * clean up
      
      * lazy import t2iadapter
      
      * lazy import unclip
      
      * lazy import versatile and vq diffusion
      
      * lazy import vq diffusion
      
      * helper to fetch objects from modules
      
      * lazy import sdxl
      
      * lazy import txt2vid
      
      * lazy import stochastic karras
      
      * fix model imports
      
      * fix bug
      
      * lazy import
      
      * clean up
      
      * clean up
      
      * fixes for tests
      
      * fixes for tests
      
      * clean up
      
      * remove import of torch_utils from utils module
      
      * clean up
      
      * clean up
      
      * fix mistaken import statement
      
      * dedicated modules for exporting and loading
      
      * remove testing utils from utils module
      
      * fixes from  merge conflicts
      
      * Update src/diffusers/pipelines/kandinsky2_2/__init__.py
      
      * fix docs
      
      * fix alt diffusion copied from
      
      * fix check dummies
      
      * fix more docs
      
      * remove accelerate import from utils module
      
      * add type checking
      
      * make style
      
      * fix check dummies
      
      * remove torch import from xformers check
      
      * clean up error message
      
      * fixes after upstream merges
      
      * dummy objects fix
      
      * fix tests
      
      * remove unused module import
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      b6e0b016
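The pattern rolled out across the package `__init__.py` files looks roughly like the sketch below: an `_import_structure` dict handed to `_LazyModule`, plus a `TYPE_CHECKING` branch so static tooling still sees real imports. The module and symbol names here are illustrative, not a copy of any particular file:

```python
# Contents of a hypothetical package __init__.py using diffusers' lazy-import machinery.
import sys
from typing import TYPE_CHECKING

from diffusers.utils import _LazyModule

_import_structure = {
    "models": ["UNet2DConditionModel"],
    "pipelines": ["StableDiffusionPipeline"],
}

if TYPE_CHECKING:
    # Real imports only for type checkers / IDEs; nothing heavy runs at import time.
    from .models import UNet2DConditionModel
    from .pipelines import StableDiffusionPipeline
else:
    # Everything else is resolved lazily on first attribute access.
    sys.modules[__name__] = _LazyModule(
        __name__, globals()["__file__"], _import_structure, module_spec=__spec__
    )
```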
  20. 04 Sep, 2023 1 commit
    • [Core] LoRA improvements pt. 3 (#4842) · c81a88b2
      Sayak Paul authored
      
      
      * throw warning when more than one lora is attempted to be fused.
      
      * introduce support of lora scale during fusion.
      
      * change test name
      
      * changes
      
      * change to _lora_scale
      
      * lora_scale to call whenever applicable.
      
      * debugging
      
      * lora_scale additional.
      
      * cross_attention_kwargs
      
      * lora_scale -> scale.
      
      * lora_scale fix
      
      * lora_scale in patched projection.
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * styling.
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * remove unneeded prints.
      
      * remove unneeded prints.
      
      * assign cross_attention_kwargs.
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * debugging
      
      * clean up.
      
      * refactor scale retrieval logic a bit.
      
      * fix nonetype
      
      * fix: tests
      
      * add more tests
      
      * more fixes.
      
      * figure out a way to pass lora_scale.
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * unify the retrieval logic of lora_scale.
      
      * move adjust_lora_scale_text_encoder to lora.py.
      
      * introduce dynamic adjustment lora scale support to sd
      
      * fix up copies
      
      * Empty-Commit
      
      * add: test to check fusion equivalence on different scales.
      
      * handle lora fusion warning.
      
      * make lora smaller
      
      * make lora smaller
      
      * make lora smaller
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      c81a88b2
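In practice, the unified retrieval logic means the LoRA scale is passed per call through `cross_attention_kwargs`, and a fixed scale can also be baked in with `fuse_lora`. A usage sketch (the LoRA repo id is a placeholder):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("your-username/your-lora")  # placeholder repo id

# Dynamic scaling: the value is picked up by the UNet and text-encoder LoRA layers.
image = pipe("pixel art castle", cross_attention_kwargs={"scale": 0.7}).images[0]

# Or fuse the LoRA into the base weights at a fixed scale for faster inference.
pipe.fuse_lora(lora_scale=0.7)
```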
  21. 01 Sep, 2023 1 commit
    • Add GLIGEN Text Image implementation (#4777) · 38466c36
      Nguyễn Công Tú Anh authored
      * Add GLIGEN Text Image implementation
      
      * add style transfer from image
      
      * fix check_repository_consistency
      
      * add conversion script for the GLIGEN model to Diffusers
      
      * rename attention type
      
      * fix style code
      
      * remove PositionNetTextImage
      
      * Revert "fix check_repository_consistency"
      
      This reverts commit 15f098c96e00bb9e67b831161615b30a2d28d815.
      
      * change attention type name
      
      * update docs for GLIGEN
      
      * change examples with hf-document-image
      
      * fix style
      
      * add CLIPImageProjection for GLIGEN
      
      * Add new encode_prompt, load projection matrix in pipe init
      
      * move CLIPImageProjection to stable_diffusion
      
      * add comment
      38466c36
  22. 16 Aug, 2023 1 commit
    • Add GLIGEN implementation (#4441) · da5ab51d
      nikhil-masterful authored
      * Add GLIGEN implementation
      
      * GLIGEN: Fix code quality check failures
      
      * GLIGEN: Fix Import block un-sorted or un-formatted failures
      
      * GLIGEN: Fix check_repository_consistency failures
      
      * GLIGEN: Add 'PositionNet' to versatile_diffusion/modeling_text_unet.py
      
      * GLIGEN: check_repository_consistency: fix 'copy does not match' error
      
      * GLIGEN: Fix review comments (1)
      
      * GLIGEN: Fix E721 Do not compare types, use `isinstance()` failures
      
      * GLIGEN : Ensure _encode_prompt() copy matches to StableDiffusionPipeline
      
      * GLIGEN: Fix ruff E721 failure in unidiffuser/test_unidiffuser.py
      
      * GLIGEN: doc_builder: restyle pipeline_stable_diffusion_gligen.py
      
      * GLIGEN: reset files unrelated to gligen
      
      * GLIGEN: Fix documentation comments (1)
      
      * GLIGEN: Fix review comments (2)
      
      * GLIGEN: Added FastTest
      
      * GLIGEN: Fix review comments (3)
      da5ab51d
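For reference, the grounded-generation pipeline added here takes per-phrase bounding boxes alongside the prompt. A usage sketch; the checkpoint id below is the publicly released text-box GLIGEN model and is an assumption here:

```python
import torch
from diffusers import StableDiffusionGLIGENPipeline

pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a waterfall and a modern high-speed train in a beautiful forest",
    gligen_phrases=["a waterfall", "a modern high-speed train"],
    # One (x0, y0, x1, y1) box in normalized coordinates per phrase.
    gligen_boxes=[[0.14, 0.20, 0.43, 0.71], [0.50, 0.44, 0.85, 0.73]],
    gligen_scheduled_sampling_beta=1.0,
    num_inference_steps=50,
).images[0]
```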
  23. 25 Jul, 2023 1 commit
  24. 03 Jul, 2023 1 commit
  25. 05 Jun, 2023 1 commit
  26. 22 May, 2023 1 commit
    • Support for cross-attention bias / mask (#2634) · 64bf5d33
      Birch-san authored
      
      
      * Cross-attention masks
      
      prefer qualified symbol, fix accidental Optional
      
      prefer qualified symbol in AttentionProcessor
      
      prefer qualified symbol in embeddings.py
      
      qualified symbol in transformed_2d
      
      qualify FloatTensor in unet_2d_blocks
      
      move new transformer_2d params attention_mask, encoder_attention_mask to the end of the section which is assumed (e.g. by functions such as checkpoint()) to have a stable positional param interface. regard return_dict as a special-case which is assumed to be injected separately from positional params (e.g. by create_custom_forward()).
      
      move new encoder_attention_mask param to end of CrossAttn block interfaces and Unet2DCondition interface, to maintain positional param interface.
      
      regenerate modeling_text_unet.py
      
      remove unused import
      
      unet_2d_condition encoder_attention_mask docs
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      versatile_diffusion/modeling_text_unet.py encoder_attention_mask docs
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      transformer_2d encoder_attention_mask docs
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      unet_2d_blocks.py: add parameter name comments
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      revert description. bool-to-bias treatment happens in unet_2d_condition only.
      
      comment parameter names
      
      fix copies, style
      
      * encoder_attention_mask for SimpleCrossAttnDownBlock2D, SimpleCrossAttnUpBlock2D
      
      * encoder_attention_mask for UNetMidBlock2DSimpleCrossAttn
      
      * support attention_mask, encoder_attention_mask in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D, KAttentionBlock. fix binding of attention_mask, cross_attention_kwargs params in KCrossAttnDownBlock2D, KCrossAttnUpBlock2D checkpoint invocations.
      
      * fix mistake made during merge conflict resolution
      
      * regenerate versatile_diffusion
      
      * pass time embedding into checkpointed attention invocation
      
      * always assume encoder_attention_mask is a mask (i.e. not a bias).
      
      * style, fix-copies
      
      * add tests for cross-attention masks
      
      * add test for padding of attention mask
      
      * explain mask's query_tokens dim. fix explanation about broadcasting over channels; we actually broadcast over query tokens
      
      * support both masks and biases in Transformer2DModel#forward. document behaviour
      
      * fix-copies
      
      * delete attention_mask docs on the basis I never tested self-attention masking myself. not comfortable explaining it, since I don't actually understand how a self-attn mask can work in its current form: the key length will be different in every ResBlock (we don't downsample the mask when we downsample the image).
      
      * review feedback: the standard Unet blocks shouldn't pass temb to attn (only to resnet). remove from KCrossAttnDownBlock2D,KCrossAttnUpBlock2D#forward.
      
      * remove encoder_attention_mask param from SimpleCrossAttn{Up,Down}Block2D,UNetMidBlock2DSimpleCrossAttn, and mask-choice in those blocks' #forward, on the basis that they only do one type of attention, so the consumer can pass whichever type of attention_mask is appropriate.
      
      * put attention mask padding back to how it was (since the SD use-case it enabled wasn't important, and it breaks the original unclip use-case). disable the test which was added.
      
      * fix-copies
      
      * style
      
      * fix-copies
      
      * put encoder_attention_mask param back into Simple block forward interfaces, to ensure consistency of forward interface.
      
      * restore passing of emb to KAttentionBlock#forward, on the basis that removal caused test failures. restore also the passing of emb to checkpointed calls to KAttentionBlock#forward.
      
      * make simple unet2d blocks use encoder_attention_mask, but only when attention_mask is None. this should fix UnCLIP compatibility.
      
      * fix copies
      64bf5d33
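As a concrete illustration of the interface this PR settles on, `UNet2DConditionModel.forward` accepts an `encoder_attention_mask` that is always treated as a mask (not a bias) and converted internally. A small self-contained sketch with an arbitrary test-sized config:

```python
import torch
from diffusers import UNet2DConditionModel

# Tiny config (values are arbitrary, just enough to run quickly on CPU).
unet = UNet2DConditionModel(
    block_out_channels=(32, 64),
    down_block_types=("CrossAttnDownBlock2D", "DownBlock2D"),
    up_block_types=("UpBlock2D", "CrossAttnUpBlock2D"),
    cross_attention_dim=32,
    attention_head_dim=8,
)

sample = torch.randn(1, 4, 32, 32)               # noisy latents
encoder_hidden_states = torch.randn(1, 77, 32)   # "text" conditioning
encoder_attention_mask = torch.ones(1, 77, dtype=torch.bool)
encoder_attention_mask[:, 40:] = False           # mask out padded conditioning tokens

# The boolean mask is converted to an additive bias inside the UNet.
out = unet(
    sample,
    timestep=10,
    encoder_hidden_states=encoder_hidden_states,
    encoder_attention_mask=encoder_attention_mask,
).sample
print(out.shape)  # torch.Size([1, 4, 32, 32])
```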
  27. 12 May, 2023 1 commit
  28. 01 May, 2023 1 commit
    • Torch compile graph fix (#3286) · 0e82fb19
      Patrick von Platen authored
      * fix more
      
      * Fix more
      
      * fix more
      
      * Apply suggestions from code review
      
      * fix
      
      * make style
      
      * make fix-copies
      
      * fix
      
      * make sure torch compile
      
      * Clean
      
      * fix test
      0e82fb19
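For context, the graph breaks this PR removes are what make compiling the UNet as a single graph possible. A usage sketch of the intended workflow (a standard `torch.compile` call; nothing below is specific to the PR's diff):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# fullgraph=True fails loudly if anything in the UNet forward still breaks the graph.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```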
  29. 28 Apr, 2023 1 commit
  30. 24 Apr, 2023 1 commit
  31. 22 Apr, 2023 1 commit
  32. 11 Apr, 2023 1 commit
    • Fix typo and format BasicTransformerBlock attributes (#2953) · 52c4d32d
      Chanchana Sornsoontorn authored
      * chore(train_controlnet) fix typo in logger message
      
      * chore(models) refactor module order so it matches the calling order
      
      When printing the BasicTransformerBlock to stdout, I think it's important that the attributes are shown in their calling order. Also, the "3. Feed Forward" comment previously made no sense: it should have been next to self.ff, but it sat next to self.norm3 instead.
      
      * correct many tests
      
      * remove bogus file
      
      * make style
      
      * correct more tests
      
      * finish tests
      
      * fix one more
      
      * make style
      
      * make unclip deterministic
      
      * chore(models/attention) reorganize comments in BasicTransformerBlock class
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      52c4d32d
  33. 22 Mar, 2023 1 commit
    • [MS Text To Video] Add first text to video (#2738) · ca1a2229
      Patrick von Platen authored
      
      
      * [MS Text To Video] Add first text to video
      
      * upload
      
      * make first model example
      
      * match unet3d params
      
      * make sure weights are correctly converted
      
      * improve
      
      * forward pass works, but diff result
      
      * make forward work
      
      * fix more
      
      * finish
      
      * refactor video output class.
      
      * feat: add support for a video export utility.
      
      * fix: opencv availability check.
      
      * run make fix-copies.
      
      * add: docs for the model components.
      
      * add: standalone pipeline doc.
      
      * edit docstring of the pipeline.
      
      * add: right path to TransformerTempModel
      
      * add: first set of tests.
      
      * complete fast tests for text to video.
      
      * fix bug
      
      * up
      
      * three fast tests failing.
      
      * add: note on slow tests
      
      * make work with all schedulers
      
      * apply styling.
      
      * add slow tests
      
      * change file name
      
      * update
      
      * more correction
      
      * more fixes
      
      * finish
      
      * up
      
      * Apply suggestions from code review
      
      * up
      
      * finish
      
      * make copies
      
      * fix pipeline tests
      
      * fix more tests
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * apply suggestions
      
      * up
      
      * revert
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      ca1a2229
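The pipeline added here is exposed as `TextToVideoSDPipeline`, and the export utility mentioned above lives in `diffusers.utils`. A short usage sketch (the ModelScope checkpoint id is assumed):

```python
import torch
from diffusers import TextToVideoSDPipeline
from diffusers.utils import export_to_video

pipe = TextToVideoSDPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

frames = pipe("Spiderman is surfing", num_inference_steps=25).frames[0]
export_to_video(frames, "video.mp4")
```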
  34. 21 Mar, 2023 1 commit
  35. 15 Mar, 2023 1 commit
  36. 13 Mar, 2023 1 commit
  37. 01 Mar, 2023 1 commit
  38. 07 Feb, 2023 1 commit
    • Stable Diffusion Latent Upscaler (#2059) · 1051ca81
      YiYi Xu authored
      
      
      * Modify UNet2DConditionModel
      
      - allow skipping mid_block
      
      - adding a norm_group_size argument so that we can set the `num_groups` for group norm using `num_channels//norm_group_size`
      
      - allow user to set dimension for the timestep embedding (`time_embed_dim`)
      
      - the kernel_size for `conv_in` and `conv_out` is now configurable
      
      - add random fourier feature layer (`GaussianFourierProjection`) for `time_proj`
      
      - allow the user to add the time and class embeddings together before passing them through the projection layer - `time_embedding(t_emb + class_label)`
      
      - added 2 arguments `attn1_types` and `attn2_types`
      
        * currently we have the argument `only_cross_attention`: when it's set to `True`, we get a `BasicTransformerBlock` with 2 cross-attentions, otherwise we get a self-attention followed by a cross-attention; in k-upscaler, we need blocks that contain just one cross-attention, or self-attention -> cross-attention; so I added `attn1_types` and `attn2_types` to the unet's argument list to let the user specify the attention types for the 2 positions in each block; note that I still kept the `only_cross_attention` argument for the unet for easy configuration, but it is converted to `attn1_type` and `attn2_type` when passed down to the down blocks
      
      - the position of downsample layer and upsample layer is now configurable
      
      - in k-upscaler unet, there is only one skip connection per each up/down block (instead of each layer in stable diffusion unet), added `skip_freq = "block"` to support
      this use case
      
      - if the user passes attention_mask to the unet, it will prepare the mask and pass a flag to the cross attention processor to skip the `prepare_attention_mask` step inside the cross attention block
      
      add up/down blocks for k-upscaler
      
      modify CrossAttention class
      
      - make the `dropout` layer in `to_out` optional
      
      - `use_conv_proj` - use conv instead of linear for all projection layers (i.e. `to_q`, `to_k`, `to_v`, `to_out`) whenever possible. note that when it's used to do cross attention, to_k and to_v have to be linear because the `encoder_hidden_states` is not 2d
      
      - `cross_attention_norm` - add an optional layernorm on encoder_hidden_states
      
      - `attention_dropout`: add an optional dropout on attention score
      
      adapt BasicTransformerBlock
      
      - add an ada groupnorm layer  to conditioning attention input with timestep embedding
      
      - allow skipping the FeedForward layer in between the attentions
      
      - replaced the only_cross_attention argument with attn1_type and attn2_type for more flexible configuration
      
      update timestep embedding: add new act_fn  gelu and an optional act_2
      
      modified ResnetBlock2D
      
      - refactored with AdaGroupNorm class (the timestep scale shift normalization)
      
      - add `mid_channel` argument - allow the first conv to have a different output dimension from the second conv
      
      - add option to use AdaGroupNorm on the input instead of groupnorm
      
      - add options to add a dropout layer after each conv
      
      - allow user to set the bias in conv_shortcut (needed for k-upscaler)
      
      - add gelu
      
      adding conversion script for k-upscaler unet
      
      add pipeline
      
      * fix attention mask
      
      * fix a typo
      
      * fix a bug
      
      * make sure model can be used with GPU
      
      * make pipeline work with fp16
      
      * fix an error in BasicTransformerBlock
      
      * make style
      
      * fix typo
      
      * some more fixes
      
      * uP
      
      * up
      
      * correct more
      
      * some clean-up
      
      * clean time proj
      
      * up
      
      * uP
      
      * more changes
      
      * remove the upcast_attention=True from unet config
      
      * remove attn1_types, attn2_types etc
      
      * fix
      
      * revert incorrect changes up/down samplers
      
      * make style
      
      * remove outdated files
      
      * Apply suggestions from code review
      
      * attention refactor
      
      * refactor cross attention
      
      * Apply suggestions from code review
      
      * update
      
      * up
      
      * update
      
      * Apply suggestions from code review
      
      * finish
      
      * Update src/diffusers/models/cross_attention.py
      
      * more fixes
      
      * up
      
      * up
      
      * up
      
      * finish
      
      * more corrections of conversion state
      
      * act_2 -> act_2_fn
      
      * remove dropout_after_conv from ResnetBlock2D
      
      * make style
      
      * simplify KAttentionBlock
      
      * add fast test for latent upscaler pipeline
      
      * add slow test
      
      * slow test fp16
      
      * make style
      
      * add doc string for pipeline_stable_diffusion_latent_upscale
      
      * add api doc page for latent upscaler pipeline
      
      * deprecate attention mask
      
      * clean up embeddings
      
      * simplify resnet
      
      * up
      
      * clean up resnet
      
      * up
      
      * correct more
      
      * up
      
      * up
      
      * improve a bit more
      
      * correct more
      
      * more clean-ups
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * add docstrings for new unet config
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * # Copied from
      
      * encode the image if not latent
      
      * remove force casting vae to fp32
      
      * fix
      
      * add comments about preconditioning parameters from k-diffusion paper
      
      * attn1_type, attn2_type -> add_self_attention
      
      * clean up get_down_block and get_up_block
      
      * fix
      
      * fixed a typo(?) in ada group norm
      
      * update slice attention processer for cross attention
      
      * update slice
      
      * fix fast test
      
      * update the checkpoint
      
      * finish tests
      
      * fix-copies
      
      * fix-copy for modeling_text_unet.py
      
      * make style
      
      * make style
      
      * fix f-string
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * fix import
      
      * correct changes
      
      * fix resnet
      
      * make fix-copies
      
      * correct euler scheduler
      
      * add missing #copied from for preprocess
      
      * revert
      
      * fix
      
      * fix copies
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/models/cross_attention.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * clean up conversion script
      
      * KDownsample2d,KUpsample2d -> KDownsample2D,KUpsample2D
      
      * more
      
      * Update src/diffusers/models/unet_2d_condition.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * remove prepare_extra_step_kwargs
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * fix a typo in timestep embedding
      
      * remove num_image_per_prompt
      
      * fix fasttest
      
      * make style + fix-copies
      
      * fix
      
      * fix xformer test
      
      * fix style
      
      * doc string
      
      * make style
      
      * fix-copies
      
      * docstring for time_embedding_norm
      
      * make style
      
      * final finishes
      
      * make fix-copies
      
      * fix tests
      
      ---------
      Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      1051ca81
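The resulting pipeline is `StableDiffusionLatentUpscalePipeline`, which consumes the base model's latents directly. A usage sketch with the released x2 upscaler (checkpoint ids are assumptions, not from the PR):

```python
import torch
from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
# Keep the output in latent space so it can be fed straight to the upscaler.
low_res_latents = pipe(prompt, output_type="latent").images
image = upscaler(
    prompt=prompt, image=low_res_latents, num_inference_steps=20, guidance_scale=0
).images[0]
image.save("upscaled.png")
```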