1. 09 Jan, 2026 1 commit
  2. 25 Nov, 2025 1 commit
• let's go Flux2 🚀 (#12711) · 5ffb73d4
      Sayak Paul authored
      
      
      * add vae
      
      * Initial commit for Flux 2 Transformer implementation
      
      * add pipeline part
      
      * small edits to the pipeline and conversion
      
      * update conversion script
      
      * fix
      
      * up up
      
      * finish pipeline
      
      * Remove Flux IP Adapter logic for now
      
      * Remove deprecated 3D id logic
      
      * Remove ControlNet logic for now
      
* Add link to ViT-22B paper as reference for parallel transformer blocks such as the Flux 2 single stream block (see the sketch after this commit)
      
      * update pipeline
      
      * Don't use biases for input projs and output AdaNorm
      
      * up
      
      * Remove bias for double stream block text QKV projections
      
      * Add script to convert Flux 2 transformer to diffusers
      
      * make style and make quality
      
      * fix a few things.
      
      * allow sft files to go.
      
      * fix image processor
      
      * fix batch
      
      * style a bit
      
      * Fix some bugs in Flux 2 transformer implementation
      
      * Fix dummy input preparation and fix some test bugs
      
      * fix dtype casting in timestep guidance module.
      
* resolve conflicts.
      
      * remove ip adapter stuff.
      
      * Fix Flux 2 transformer consistency test
      
      * Fix bug in Flux2TransformerBlock (double stream block)
      
      * Get remaining Flux 2 transformer tests passing
      
      * make style; make quality; make fix-copies
      
      * remove stuff.
      
* fix type annotation.
      
      * remove unneeded stuff from tests
      
      * tests
      
      * up
      
      * up
      
      * add sf support
      
      * Remove unused IP Adapter and ControlNet logic from transformer (#9)
      
      * copied from
      
      * Apply suggestions from code review
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
      
      * up
      
      * up
      
      * up
      
      * up
      
      * up
      
      * Refactor Flux2Attention into separate classes for double stream and single stream attention
      
      * Add _supports_qkv_fusion to AttentionModuleMixin to allow subclasses to disable QKV fusion
      
      * Have Flux2ParallelSelfAttention inherit from AttentionModuleMixin with _supports_qkv_fusion=False
      
* Log debug message when calling fuse_projections on an AttentionModuleMixin subclass that does not support QKV fusion
      
      * Address review comments
      
      * Update src/diffusers/pipelines/flux2/pipeline_flux2.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * up
      
      * Remove maybe_allow_in_graph decorators for Flux 2 transformer blocks (#12)
      
      * up
      
      * support ostris loras. (#13)
      
      * up
      
* update schedule
      
      * up
      
      * up (#17)
      
      * add training scripts (#16)
      
      * add training scripts
Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
      
      * model cpu offload in validation.
      
      * add flux.2 readme
      
      * add img2img and tests
      
      * cpu offload in log validation
      
      * Apply suggestions from code review
      
      * fix
      
      * up
      
      * fixes
      
      * remove i2i training tests for now.
      
      ---------
Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
Co-authored-by: linoytsaban <linoy@huggingface.co>
      
      * up
      
      ---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Daniel Gu <dgu8957@gmail.com>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-10-53-87-203.ec2.internal>
Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
Co-authored-by: linoytsaban <linoy@huggingface.co>
      5ffb73d4
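For the ViT-22B-style parallel block referenced above: attention and MLP read the same normalized input and run side by side, merged by a single residual add, instead of running sequentially. A minimal sketch under that description (class name and dimensions are illustrative, not the actual Flux 2 single stream block):

import torch
import torch.nn as nn

class ParallelTransformerBlock(nn.Module):
    """ViT-22B-style block: attention and MLP read one shared LayerNorm
    output and run in parallel, folded into a single residual add."""

    def __init__(self, dim: int, num_heads: int, mlp_ratio: float = 4.0):
        super().__init__()
        hidden = int(dim * mlp_ratio)
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp_in = nn.Linear(dim, hidden)
        self.act = nn.GELU()
        # One fused output projection consumes both branches at once.
        self.proj_out = nn.Linear(dim + hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        mlp_out = self.act(self.mlp_in(h))
        return x + self.proj_out(torch.cat([attn_out, mlp_out], dim=-1))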
  3. 30 Sep, 2025 1 commit
  4. 22 Sep, 2025 1 commit
  5. 17 Sep, 2025 1 commit
• Fix many type hint errors (#12289) · efb7a299
      DefTruth authored
      * fix hidream type hint
      
      * fix hunyuan-video type hint
      
      * fix many type hint
      
      * fix many type hint errors
      
      * fix many type hint errors
      
      * fix many type hint errors
      
* make style & make quality
      efb7a299
  6. 17 Jul, 2025 1 commit
  7. 19 Jun, 2025 1 commit
  8. 19 May, 2025 1 commit
  9. 11 Feb, 2025 1 commit
  10. 18 Dec, 2024 1 commit
  11. 16 Dec, 2024 1 commit
• [core] Hunyuan Video (#10136) · aace1f41
      Aryan authored
      
      
      * copy transformer
      
      * copy vae
      
      * copy pipeline
      
      * make fix-copies
      
      * refactor; make original code work with diffusers; test latents for comparison generated with this commit
      
      * move rope into pipeline; remove flash attention; refactor
      
      * begin conversion script
      
      * make style
      
      * refactor attention
      
      * refactor
      
      * refactor final layer
      
      * their mlp -> our feedforward
      
      * make style
      
      * add docs
      
      * refactor layer names
      
      * refactor modulation
      
      * cleanup
      
      * refactor norms
      
      * refactor activations
      
      * refactor single blocks attention
      
      * refactor attention processor
      
      * make style
      
      * cleanup a bit
      
      * refactor double transformer block attention
      
      * update mochi attn proc
      
      * use diffusers attention implementation in all modules; checkpoint for all values matching original
      
      * remove helper functions in vae
      
      * refactor upsample
      
      * refactor causal conv
      
      * refactor resnet
      
      * refactor
      
      * refactor
      
      * refactor
      
      * grad checkpointing
      
      * autoencoder test
      
      * fix scaling factor
      
      * refactor clip
      
      * refactor llama text encoding
      
      * add coauthor
Co-authored-by: "Gregory D. Hunkins" <greg@ollano.com>
      
* refactor rope; diff: 0.14990234375; reason and fix: create rope grid on cpu and move to device (see the sketch after this commit)

Note: the following line diverges from the original behaviour. We create the grid on the device, whereas the original implementation creates it on CPU and then moves it to the device. This results in numerical
differences in layerwise debugging outputs, but visually it is the same.
      
      * use diffusers timesteps embedding; diff: 0.10205078125
      
      * rename
      
      * convert
      
      * update
      
      * add tests for transformer
      
      * add pipeline tests; text encoder 2 is not optional
      
      * fix attention implementation for torch
      
      * add example
      
      * update docs
      
      * update docs
      
      * apply suggestions from review
      
      * refactor vae
      
      * update
      
      * Apply suggestions from code review
Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video.py
Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video.py
Co-authored-by: hlky <hlky@hlky.ac>
      
      * make fix-copies
      
      * update
      
      ---------
Co-authored-by: "Gregory D. Hunkins" <greg@ollano.com>
Co-authored-by: hlky <hlky@hlky.ac>
      aace1f41
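On the RoPE grid note above: where the coordinate grid is materialized matters for bit-level reproducibility, because CPU and GPU float kernels can round differently. A hedged sketch of the idea (function name and signature are illustrative, not the diffusers API):

import torch

def rope_grid(height: int, width: int, device: torch.device, on_cpu_first: bool = True) -> torch.Tensor:
    """Build the 2D coordinate grid fed to rotary embeddings."""
    grid_device = torch.device("cpu") if on_cpu_first else device
    ys = torch.arange(height, device=grid_device, dtype=torch.float32)
    xs = torch.arange(width, device=grid_device, dtype=torch.float32)
    grid = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=0)
    # Building the grid on the GPU vs. building on CPU and moving it can
    # change downstream float results slightly (the ~0.15 layerwise diff
    # noted in the commit) without any visible effect on outputs.
    return grid.to(device)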
  12. 21 Oct, 2024 1 commit
  13. 06 Sep, 2024 1 commit
• [core] Freenoise memory improvements (#9262) · 6dfa4996
      Aryan authored
      * update
      
      * implement prompt interpolation
      
      * make style
      
      * resnet memory optimizations
      
      * more memory optimizations; todo: refactor
      
      * update
      
      * update animatediff controlnet with latest changes
      
      * refactor chunked inference changes
      
      * remove print statements
      
      * update
      
      * chunk -> split
      
      * remove changes from incorrect conflict resolution
      
      * remove changes from incorrect conflict resolution
      
* add explanation of SplitInferenceModule (idea sketched after this commit)
      
      * update docs
      
      * Revert "update docs"
      
      This reverts commit c55a50a271b2cefa8fe340a4f2a3ab9b9d374ec0.
      
      * update docstring for freenoise split inference
      
      * apply suggestions from review
      
      * add tests
      
      * apply suggestions from review
      6dfa4996
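The SplitInferenceModule mentioned above trades one large batched forward for several smaller ones, lowering peak memory. A minimal sketch of the idea, simplified to a single tensor argument (the class in diffusers is more general):

import torch
import torch.nn as nn

class SplitInferenceModule(nn.Module):
    """Wrap a module and run it on splits of the input along one dim,
    concatenating the outputs, so peak activation memory is bounded by
    the split size instead of the full batch."""

    def __init__(self, module: nn.Module, split_size: int, split_dim: int = 0):
        super().__init__()
        self.module = module
        self.split_size = split_size
        self.split_dim = split_dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outputs = [self.module(chunk) for chunk in x.split(self.split_size, dim=self.split_dim)]
        return torch.cat(outputs, dim=self.split_dim)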
  14. 28 Aug, 2024 1 commit
• AnimateDiff prompt travel (#9231) · cbc2ec8f
      Aryan authored
      * update
      
* implement prompt interpolation (sketched after this commit)
      
      * make style
      
      * resnet memory optimizations
      
      * more memory optimizations; todo: refactor
      
      * update
      
      * update animatediff controlnet with latest changes
      
      * refactor chunked inference changes
      
      * remove print statements
      
      * undo memory optimization changes
      
      * update docstrings
      
      * fix tests
      
      * fix pia tests
      
      * apply suggestions from review
      
      * add tests
      
      * update comment
      cbc2ec8f
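Prompt interpolation ("prompt travel") blends per-frame prompt embeddings between keyframes so the prompt drifts smoothly across the video. A sketch of the core idea, assuming a keyframe at frame 0 and linear blending (the actual pipeline API differs):

from typing import Dict

import torch

def interpolate_prompt_embeds(keyframes: Dict[int, torch.Tensor], num_frames: int) -> torch.Tensor:
    """Blend prompt embeddings between keyframes, e.g. {0: emb_a, 15: emb_b}
    for a 16-frame clip. Assumes a keyframe at frame 0."""
    frames = sorted(keyframes)
    out = []
    for f in range(num_frames):
        prev = max(k for k in frames if k <= f)                  # keyframe at or before f
        nxt = min((k for k in frames if k >= f), default=prev)   # keyframe at or after f
        if prev == nxt:
            out.append(keyframes[prev])
        else:
            t = (f - prev) / (nxt - prev)
            out.append(torch.lerp(keyframes[prev], keyframes[nxt], t))
    return torch.stack(out)  # (num_frames, seq_len, dim)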
  15. 07 Aug, 2024 1 commit
• [core] FreeNoise (#8948) · 16a93f1a
      Aryan authored
      
      
      * initial work draft for freenoise; needs massive cleanup
      
      * fix freeinit bug
      
      * add animatediff controlnet implementation
      
      * revert attention changes
      
      * add freenoise
      
      * remove old helper functions
      
      * add decode batch size param to all pipelines
      
      * make style
      
      * fix copied from comments
      
      * make fix-copies
      
      * make style
      
      * copy animatediff controlnet implementation from #8972
      
* add experimental support for num_frames not perfectly fitting context length, context stride (window logic sketched after this commit)
      
      * make unet motion model lora work again based on #8995
      
      * copy load video utils from #8972
      
      * copied from AnimateDiff::prepare_latents
      
      * address the case where last batch of frames does not match length of indices in prepare latents
      
      * decode_batch_size->vae_batch_size; batch vae encode support in animatediff vid2vid
      
      * revert sparsectrl and sdxl freenoise changes
      
      * revert pia
      
      * add freenoise tests
      
      * make fix-copies
      
      * improve docstrings
      
      * add freenoise tests to animatediff controlnet
      
      * update tests
      
      * Update src/diffusers/models/unets/unet_motion_model.py
      
      * add freenoise to animatediff pag
      
      * address review comments
      
      * make style
      
      * update tests
      
      * make fix-copies
      
      * fix error message
      
      * remove copied from comment
      
      * fix imports in tests
      
      * update
      
      ---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      16a93f1a
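FreeNoise processes long videos through overlapping frame windows whose outputs are averaged where they overlap. A sketch of the window bookkeeping, including a clamped final window for frame counts that don't fit the context length exactly (names illustrative):

def context_windows(num_frames: int, context_length: int = 16, context_stride: int = 4):
    """Yield overlapping frame-index windows; overlapping outputs are averaged."""
    if num_frames <= context_length:
        yield list(range(num_frames))
        return
    for start in range(0, num_frames - context_length + context_stride, context_stride):
        # Clamp the last window so frame counts that don't divide evenly
        # still receive a full-length context.
        end = min(start + context_length, num_frames)
        yield list(range(end - context_length, end))

For example, 20 frames with length 16 and stride 4 give windows [0..15] and [4..19], covering every frame.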
  16. 06 Aug, 2024 1 commit
  17. 30 Jul, 2024 1 commit
• Stable Audio integration (#8716) · 69e72b1d
      Yoach Lacombe authored
      
      
      * WIP modeling code and pipeline
      
      * add custom attention processor + custom activation + add to init
      
      * correct ProjectionModel forward
      
* add stable audio to __init__
      
      * add autoencoder and update pipeline and modeling code
      
      * add half Rope
      
      * add partial rotary v2
      
* add temporary modifications to scheduler
      
      * add EDM DPM Solver
      
      * remove TODOs
      
      * clean GLU
      
* move attn.group_norm to attn processor
      
      * revert back src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
      
* refactor GLU -> SwiGLU (sketched after this commit)
      
      * remove redundant args
      
      * add channel multiples in autoencoder docstrings
      
* changes in docstrings and copyright headers
      
      * clean pipeline
      
      * further cleaning
      
      * remove peft and lora and fromoriginalmodel
      
      * Delete src/diffusers/pipelines/stable_audio/diffusers.code-workspace
      
      * make style
      
      * dummy models
      
      * fix copied from
      
      * add fast oobleck tests
      
      * add brownian tree
      
      * oobleck autoencoder slow tests
      
      * remove TODO
      
      * fast stable audio pipeline tests
      
      * add slow tests
      
      * make style
      
      * add first version of docs
      
      * wrap is_torchsde_available to the scheduler
      
      * fix slow test
      
      * test with input waveform
      
      * add input waveform
      
      * remove some todos
      
      * create stableaudio gaussian projection + make style
      
      * add pipeline to toctree
      
      * fix copied from
      
      * make quality
      
      * refactor timestep_features->time_proj
      
      * refactor joint_attention_kwargs->cross_attention_kwargs
      
      * remove forward_chunk
      
      * move StableAudioDitModel to transformers folder
      
      * correct convert + remove partial rotary embed
      
      * apply suggestions from yiyixuxu -> removing attn.kv_heads
      
      * remove temb
      
      * remove cross_attention_kwargs
      
      * further removal of cross_attention_kwargs
      
      * remove text encoder autocast to fp16
      
      * continue removing autocast
      
      * make style
      
      * refactor how text and audio are embedded
      
      * add paper
      
      * update example code
      
      * make style
      
      * unify projection model forward + fix device placement
      
      * make style
      
      * remove fuse qkv
      
      * apply suggestions from review
      
      * Update src/diffusers/pipelines/stable_audio/pipeline_stable_audio.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * make style
      
      * smaller models in fast tests
      
      * pass sequential offloading fast tests
      
      * add docs for vae and autoencoder
      
      * make style and update example
      
      * remove useless import
      
      * add cosine scheduler
      
      * dummy classes
      
      * cosine scheduler docs
      
      * better description of scheduler
      
      ---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      69e72b1d
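The GLU -> SwiGLU refactor noted above swaps the gate's activation to SiLU ("swish"). A minimal sketch of the block:

import torch
import torch.nn as nn

class SwiGLU(nn.Module):
    """One projection produces hidden states and a gate; the gate goes
    through SiLU (the "swish" in SwiGLU) before multiplying."""

    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.proj = nn.Linear(dim_in, dim_out * 2)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        hidden, gate = self.proj(x).chunk(2, dim=-1)
        return hidden * self.act(gate)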
  18. 11 Jul, 2024 1 commit
• Latte: Latent Diffusion Transformer for Video Generation (#8404) · b8cf84a3
      Xin Ma authored
      
      
      * add Latte to diffusers
      
      * remove print
      
      * remove print
      
      * remove print
      
* remove unused code
      
      * remove layer_norm_latte and add a flag
      
      * remove layer_norm_latte and add a flag
      
      * update latte_pipeline
      
      * update latte_pipeline
      
* remove unused squeeze
      
      * add norm_hidden_states.ndim == 2: # for Latte
      
      * fixed test latte pipeline bugs
      
      * fixed test latte pipeline bugs
      
      * delete sh
      
      * add doc for latte
      
      * add licensing
      
      * Move Transformer3DModelOutput to modeling_outputs
      
      * give a default value to sample_size
      
      * remove the einops dependency
      
      * change norm2 for latte
      
      * modify pipeline of latte
      
      * update test for Latte
      
* modify some code for latte
      
* modify for Latte pipeline (same message repeated across ~27 iterative commits)
      
      * video_length -> num_frames; update prepare_latents copied from
      
      * make fix-copies
      
      * make style
      
      * typo: videe -> video
      
      * update
      
* modify latte pipeline (minor variants of the same message across six commits)
      
      * Delete .vscode directory
      
      * make style
      
      * make fix-copies
      
      * add latte transformer 3d to docs _toctree.yml
      
      * update example
      
      * reduce frames for test
      
* fixed bug in _text_preprocessing
      
      * set num frame to 1 for testing
      
* remove unused print
      
      * add text = self._clean_caption(text) again
      
      ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
      b8cf84a3
  19. 08 Jul, 2024 1 commit
  20. 02 Jul, 2024 1 commit
  21. 12 Jun, 2024 1 commit
  22. 10 May, 2024 1 commit
• #7535 Update FloatTensor type hints to Tensor (#7883) · be4afa0b
      Mark Van Aken authored
* find & replace all FloatTensors to Tensor (see the example after this commit)
      
      * apply formatting
      
      * Update torch.FloatTensor to torch.Tensor in the remaining files
      
      * formatting
      
      * Fix the rest of the places where FloatTensor is used as well as in documentation
      
      * formatting
      
      * Update new file from FloatTensor to Tensor
      be4afa0b
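The motivation for the sweep above: torch.FloatTensor is a legacy alias tied to one dtype/device combination, so it mis-describes fp16, bf16, and CUDA tensors; torch.Tensor is the actual runtime class for all of them. Illustrative before/after:

import torch

# Before: the hint pins the argument to the fp32 CPU alias, so half-precision
# and CUDA tensors don't match it under a type checker.
def scale_old(sample: torch.FloatTensor) -> torch.FloatTensor:
    return sample * 0.5

# After: torch.Tensor is the real class regardless of dtype and device.
def scale_new(sample: torch.Tensor) -> torch.Tensor:
    return sample * 0.5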
  23. 02 Apr, 2024 1 commit
  24. 14 Mar, 2024 1 commit
• [`Tests`] Update a deprecated parameter in test files and fix several typos (#7277) · 5d848ec0
      M. Tolga Cangöz authored
      * Add properties and `IPAdapterTesterMixin` tests for `StableDiffusionPanoramaPipeline`
      
      * Fix variable name typo and update comments
      
      * Update deprecated `output_type="numpy"` to "np" in test files
      
      * Discard changes to src/diffusers/pipelines/stable_diffusion_panorama/pipeline_stable_diffusion_panorama.py
      
      * Update test_stable_diffusion_panorama.py
      
      * Update numbers in README.md
      
* Update get_guidance_scale_embedding method to use timesteps instead of w (method sketched after this commit)
      
      * Update number of checkpoints in README.md
      
      * Add type hints and fix var name
      
      * Fix PyTorch's convention for inplace functions
      
      * Fix a typo
      
      * Revert "Fix PyTorch's convention for inplace functions"
      
      This reverts commit 74350cf65b2c9aa77f08bec7937d7a8b13edb509.
      
      * Fix typos
      
      * Indent
      
      * Refactor get_guidance_scale_embedding method in LEditsPPPipelineStableDiffusionXL class
      5d848ec0
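For reference, the get_guidance_scale_embedding method touched above builds a sinusoidal embedding of the guidance scale, the w-embedding trick used by guidance-distilled models such as LCM. A sketch under that reading, assuming an even embedding_dim (not the exact diffusers source):

import torch

def get_guidance_scale_embedding(w: torch.Tensor, embedding_dim: int = 256) -> torch.Tensor:
    """w: (batch,) guidance scales; returns (batch, embedding_dim)."""
    w = w.float() * 1000.0
    half_dim = embedding_dim // 2
    freqs = torch.exp(-torch.log(torch.tensor(10000.0)) * torch.arange(half_dim) / (half_dim - 1))
    args = w[:, None] * freqs[None, :]
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)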
  25. 13 Mar, 2024 1 commit
• [LoRA] use the PyTorch classes wherever needed and start deprecation cycles (#7204) · 531e7191
      Sayak Paul authored
* fix PyTorch classes and start deprecation cycles.
      
      * remove args crafting for accommodating scale.
      
      * remove scale check in feedforward.
      
      * assert against nn.Linear and not CompatibleLinear.
      
* remove conv_cls and linear_cls.
      
      * remove scale
      
* 👋 scale.
      
      * fix: unet2dcondition
      
      * fix attention.py
      
      * fix: attention.py again
      
      * fix: unet_2d_blocks.
      
      * fix-copies.
      
      * more fixes.
      
      * fix: resnet.py
      
      * more fixes
      
      * fix i2vgenxl unet.
      
* deprecate scale gently.
      
      * fix-copies
      
      * Apply suggestions from code review
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * quality
      
* throw warning when scale is passed to the BasicTransformerBlock class (pattern sketched after this commit).
      
      * remove scale from signature.
      
      * cross_attention_kwargs, very nice catch by Yiyi
      
      * fix: logger.warn
      
      * make deprecation message clearer.
      
      * address final comments.
      
* maintain same deprecation message and also add it to activations.
      
      * address yiyi
      
      * fix copies
      
      * Apply suggestions from code review
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
* more deprecation
      
      * fix-copies
      
      ---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      531e7191
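The gentle deprecation above keeps the old scale kwarg accepted while warning that it no longer does anything, since LoRA scaling moved into the PyTorch-native LoRA layers. A sketch of the pattern (diffusers uses its own deprecate() helper; this only shows the shape of it):

import warnings
from typing import Optional

def pop_deprecated_scale(cross_attention_kwargs: Optional[dict]) -> Optional[dict]:
    """Accept the old `scale` entry for now, warn, and drop it."""
    if cross_attention_kwargs and "scale" in cross_attention_kwargs:
        cross_attention_kwargs = {k: v for k, v in cross_attention_kwargs.items() if k != "scale"}
        warnings.warn(
            "Passing `scale` via `cross_attention_kwargs` is deprecated and ignored.",
            FutureWarning,
        )
    return cross_attention_kwargs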
  26. 08 Mar, 2024 1 commit
  27. 07 Mar, 2024 1 commit
  28. 08 Feb, 2024 2 commits
• change to 2024 in the license (#6902) · 30e5e81d
      Sayak Paul authored
      change to 2024
      30e5e81d
• [I2VGenXL] `attention_head_dim` in the UNet (#6872) · 491a933a
      Sayak Paul authored
      * attention_head_dim
      
      * debug
      
      * print more info
      
* correct num_attention_heads behaviour (see the arithmetic sketch after this commit)
      
      * down_block_num_attention_heads -> num_attention_heads.
      
      * correct the image link in doc.
      
      * add: deprecation for num_attention_head
      
      * fix: test argument to use attention_head_dim
      
      * more fixes.
      
      * quality
      
      * address comments.
      
* remove deprecation.
      491a933a
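The num_attention_heads/attention_head_dim correction above comes down to one relation: the head count per block is the block's channel width divided by the head dimension. With hypothetical values (not I2VGenXL's actual config):

# Hypothetical widths; the point is the relation, not the exact config.
block_out_channels = (320, 640, 1280, 1280)
attention_head_dim = 64

# Heads per block = channel width // head dim.
num_attention_heads = tuple(c // attention_head_dim for c in block_out_channels)
assert num_attention_heads == (5, 10, 20, 20)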
  29. 06 Feb, 2024 2 commits
  30. 04 Feb, 2024 1 commit
  31. 31 Jan, 2024 1 commit
  32. 27 Dec, 2023 1 commit
  33. 21 Dec, 2023 1 commit
• open muse (#5437) · 40398152
      Will Berman authored
      
      
      amused
      
      rename
      
      Update docs/source/en/api/pipelines/amused.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      AdaLayerNormContinuous default values
      
custom micro conditioning (sketched after this commit)
      
      micro conditioning docs
      
      put lookup from codebook in constructor
      
      fix conversion script
      
      remove manual fused flash attn kernel
      
      add training script
      
      temp remove training script
      
      add dummy gradient checkpointing func
      
      clarify temperatures is an instance variable by setting it
      
      remove additional SkipFF block args
      
      hardcode norm args
      
      rename tests folder
      
      fix paths and samples
      
      fix tests
      
      add training script
      
      training readme
      
      lora saving and loading
      
      non-lora saving/loading
      
      some readme fixes
      
      guards
      
      Update docs/source/en/api/pipelines/amused.md
Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      Update examples/amused/README.md
Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      Update examples/amused/train_amused.py
Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      vae upcasting
      
      add fp16 integration tests
      
      use tuple for micro cond
      
      copyrights
      
      remove casts
      
      delegate to torch.nn.LayerNorm
      
      move temperature to pipeline call
      
      upsampling/downsampling changes
      40398152
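The micro conditioning mentioned above feeds a small tuple of scalars (original size, crop coordinates, and similar) into the model as sinusoidal embeddings, SDXL-style. A sketch of the general technique; the values and dimensions here are illustrative, not amused's actual config:

import torch

def embed_micro_conds(micro_conds: torch.Tensor, dim: int = 256) -> torch.Tensor:
    """Sinusoidally embed each conditioning scalar, then flatten the
    per-scalar embeddings into one conditioning vector."""
    half = dim // 2
    freqs = torch.exp(-torch.log(torch.tensor(10000.0)) * torch.arange(half) / half)
    args = micro_conds.float()[..., None] * freqs        # (..., n_conds, half)
    emb = torch.cat([torch.sin(args), torch.cos(args)], dim=-1)
    return emb.flatten(-2)                                # (..., n_conds * dim)

emb = embed_micro_conds(torch.tensor([1024.0, 1024.0, 0.0, 0.0, 6.0]))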
  34. 04 Dec, 2023 1 commit
• [Feature] Support IP-Adapter Plus (#5915) · 0a08d419
      takuoko authored
      
      
      * Support IP-Adapter Plus
      
      * fix format
      
      * restore before black format
      
      * restore before black format
      
      * generic
      
* Refactor PerceiverAttention (core idea sketched after this commit)
      
      * format
      
      * fix test and refactor PerceiverAttention
      
      * generic encode_image
      
      * keep attention implementation
      
      * merge tests
      
      * encode_image backward compatible
      
      * code quality
      
      * fix controlnet inpaint pipeline
      
      * refactor FFN
      
      * refactor FFN
      
      ---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
      0a08d419
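PerceiverAttention, refactored above, is the heart of IP-Adapter Plus's image projection: a fixed set of learned latent tokens cross-attends to CLIP patch features, compressing a variable-length patch sequence into a fixed number of image prompt tokens. A minimal sketch (not the refactored diffusers class):

import torch
import torch.nn as nn

class PerceiverResamplerLayer(nn.Module):
    """Learned latent tokens (queries) cross-attend to image features."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm_latents = nn.LayerNorm(dim)
        self.norm_x = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, latents: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        q = self.norm_latents(latents)
        # Keys/values: image features plus the latents themselves.
        kv = torch.cat([self.norm_x(x), q], dim=1)
        out, _ = self.attn(q, kv, kv, need_weights=False)
        return latents + out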
  35. 29 Nov, 2023 1 commit
• Add SVD (#5895) · 63f767ef
      Suraj Patil authored
      
      
      * begin model
      
      * finish blocks
      
      * add_embedding
      
      * addition_time_embed_dim
      
      * use TimestepEmbedding
      
      * fix temporal res block
      
      * fix time_pos_embed
      
      * fix add_embedding
      
      * add conversion script
      
      * fix model
      
      * up
      
      * add new resnet blocks
      
      * make forward work
      
      * return sample in original shape
      
      * fix temb shape in TemporalResnetBlock
      
      * add spatio temporal transformers
      
      * add vae blocks
      
      * fix blocks
      
      * update
      
      * update
      
* fix shapes in AlphaBlender and add time activation in res block
      
      * use new blocks
      
      * style
      
      * fix temb shape
      
      * fix SpatioTemporalResBlock
      
      * reuse TemporalBasicTransformerBlock
      
      * fix TemporalBasicTransformerBlock
      
      * use TransformerSpatioTemporalModel
      
      * fix TransformerSpatioTemporalModel
      
      * fix time_context dim
      
      * clean up
      
      * make temb optional
      
      * add blocks
      
      * rename model
      
      * update conversion script
      
      * remove UNetMidBlockSpatioTemporal
      
      * add in init
      
      * remove unused arg
      
      * remove unused arg
      
      * remove more unsed args
      
      * up
      
      * up
      
      * check for None
      
      * update vae
      
      * update up/mid blocks for decoder
      
      * begin pipeline
      
      * adapt scheduler
      
      * add guidance scalings
      
      * fix norm eps in temporal transformers
      
      * add temporal autoencoder
      
      * make pipeline run
      
* fix frame decoding
      
      * decode in float32
      
* decode n frames at a time (chunked decode sketched after this commit)
      
      * pass decoding_t to decode_latents
      
      * fix decode_latents
      
      * vae encode/decode in fp32
      
      * fix dtype in TransformerSpatioTemporalModel
      
      * type image_latents same as image_embeddings
      
* allow using different eps in temporal block for video decoder
      
      * fix default values in vae
      
      * pass num frames in decode
      
      * switch spatial to temporal for mixing in VAE
      
      * fix num frames during split decoding
      
      * cast alpha to sample dtype
      
      * fix attention in MidBlockTemporalDecoder
      
      * fix typo
      
      * fix guidance_scales dtype
      
      * fix missing activation in TemporalDecoder
      
      * skip_post_quant_conv
      
      * add vae conversion
      
      * style
      
      * take guidance scale as input
      
      * up
      
      * allow passing PIL to export_video
      
      * accept fps as arg
      
      * add pipeline and vae in init
      
      * remove hack
      
      * use AutoencoderKLTemporalDecoder
      
      * don't scale image latents
      
      * add unet tests
      
      * clean up unet
      
      * clean TransformerSpatioTemporalModel
      
      * add slow svd test
      
      * clean up
      
      * make temb optional in Decoder mid block
      
      * fix norm eps in TransformerSpatioTemporalModel
      
      * clean up temp decoder
      
      * clean up
      
      * clean up
      
      * use c_noise values for timesteps
      
      * use math for log
      
      * update
      
      * fix copies
      
      * doc
      
      * upcast vae
      
      * update forward pass for gradient checkpointing
      
* make added_time_ids a tensor
      
      * up
      
      * fix upcasting
      
      * remove post quant conv
      
      * add _resize_with_antialiasing
      
      * fix _compute_padding
      
      * cleanup model
      
      * more cleanup
      
      * more cleanup
      
      * more cleanup
      
      * remove freeu
      
      * remove attn slice
      
      * small clean
      
      * up
      
      * up
      
      * remove extra step kwargs
      
      * remove eta
      
      * remove dropout
      
      * remove callback
      
      * remove merge factor args
      
      * clean
      
      * clean up
      
      * move to dedicated folder
      
      * remove attention_head_dim
      
      * docstr and small fix
      
      * update unet doc strings
      
      * rename decoding_t
      
      * correct linting
      
      * store c_skip and c_out
      
      * cleanup
      
      * clean TemporalResnetBlock
      
      * more cleanup
      
      * clean up vae
      
      * clean up
      
      * begin doc
      
      * more cleanup
      
      * up
      
      * up
      
      * doc
      
      * Improve
      
      * better naming
      
      * better naming
      
      * better naming
      
      * better naming
      
      * better naming
      
      * better naming
      
      * better naming
      
      * better naming
      
      * Apply suggestions from code review
      
      * Default chunk size to None
      
      * add example
      
      * Better
      
      * Apply suggestions from code review
      
      * update doc
      
      * Update src/diffusers/pipelines/stable_diffusion_video/pipeline_stable_diffusion_video.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * style
      
      * Get torch compile working
      
      * up
      
      * rename
      
      * fix doc
      
      * add chunking
      
      * torch compile
      
      * torch compile
      
      * add modelling outputs
      
      * torch compile
      
      * Improve chunking
      
      * Apply suggestions from code review
      
      * Update docs/source/en/using-diffusers/svd.md
      
      * Close diff tag
      
      * remove slicing
      
      * resnet docstr
      
      * add docstr in resnet
      
      * rename
      
      * Apply suggestions from code review
      
      * update tests
      
      * Fix output type latents
      
      * fix more
      
      * fix more
      
      * Update docs/source/en/using-diffusers/svd.md
      
      * fix more
      
      * add pipeline tests
      
      * remove unused arg
      
      * clean  up
      
      * make sure get_scaling receives tensors
      
      * fix euler scheduler
      
      * fix get_scalings
      
* simplify euler for now
      
      * remove old test file
      
      * use randn_tensor to create noise
      
      * fix device for rand tensor
      
      * increase expected_max_difference
      
      * fix test_inference_batch_single_identical
      
      * actually fix test_inference_batch_single_identical
      
      * disable test_save_load_float16
      
      * skip test_float16_inference
      
      * skip test_inference_batch_single_identical
      
      * fix test_xformers_attention_forwardGenerator_pass
      
      * Apply suggestions from code review
      
      * update StableVideoDiffusionPipelineSlowTests
      
      * update image
      
      * add diffusers example
      
      * fix more
      
      ---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
      63f767ef
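Decoding a few frames at a time (the decoding_t/chunking work above) bounds the VAE's peak memory for long clips. A sketch under the assumption of a temporal-decoder VAE whose decode() accepts num_frames (names and interface illustrative):

import torch

@torch.no_grad()
def decode_latents_in_chunks(vae, latents: torch.Tensor, decode_chunk_size: int = 8) -> torch.Tensor:
    """latents: (num_frames, C, H, W) video latents for one clip."""
    frames = []
    for i in range(0, latents.shape[0], decode_chunk_size):
        chunk = latents[i : i + decode_chunk_size] / vae.config.scaling_factor
        # Assumed interface: a temporal-decoder VAE taking num_frames.
        frames.append(vae.decode(chunk, num_frames=chunk.shape[0]).sample)
    return torch.cat(frames, dim=0)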
  36. 07 Nov, 2023 1 commit
  37. 06 Nov, 2023 1 commit
• [Feat] PixArt-Alpha (#5642) · d61889fc
      Sayak Paul authored
      
      
      * init pixart alpha pipeline
      
      * fix: import
      
      * script
      
      * script
      
      * script
      
      * add: vae to the pipeline
      
      * add: vae_scale_factor
      
      * add: checkpoint_path
      
      * clean conversion script a bit.
      
      * size embeddings.
      
      * fix: size embedding
      
      * update scrip
      
* support for interpolation of position embedding (sketched after this commit).
      
      * support for conditioning.
      
      * ..
      
      * ..
      
      * ..
      
      * final layer
      
      * final layer
      
      * align if encode_prompt
      
      * support for caption embedding
      
      * refactor
      
      * refactor
      
      * refactor
      
      * start cross attention
      
      * start cross attention
      
      * cross_attention_dim
      
      * cross
      
      * cross
      
      * support for resolution and aspect_ratio
      
      * support for caption projection
      
      * refactor patch embeddings
      
      * batch_size
      
      * up
      
      * commit
      
      * commit
      
      * commit.
      
* squeeze (same message repeated across ~13 iterative commits)
      
      * fix final block./
      
      * fix final block./
      
      * fix final block./
      
      * clean
      
      * fix: interpolation scale.
      
* debugging (same message repeated across ~49 iterative commits)
      
      * make --checkpoint_path non-required.
      
* debugging (same message repeated across ~31 further commits)
      
      * remove num_tokens
      
* timesteps -> timestep (same rename applied across six commits)
      
      * debug
      
      * debug
      
      * update conversion script.
      
      * update conversion script.
      
      * update conversion script.
      
      * debug
      
      * debug
      
      * debug
      
      * clean
      
* debug (same message repeated across ~12 iterative commits)
      
* fix (same message repeated across ~13 iterative commits)
      
      * clean
      
      * fix
      
      * fix
      
      * boom
      
      * boom
      
      * some changes
      
      * boom
      
      * save
      
      * up
      
      * remove i
      
      * fix more tests
      
      * DPMSolverMultistepScheduler
      
      * fix
      
      * offloading
      
      * fix conversion script
      
      * fix conversion script
      
      * remove print
      
      * remove support for negative prompt embeds.
      
      * typo.
      
      * remove extra kwargs
      
      * bring conversion script to where it was
      
      * fix
      
* trying my luck
      
      * trying my luck again
      
      * again
      
      * again
      
      * again
      
      * clean up
      
      * up
      
      * up
      
      * update example
      
      * support for 512
      
      * remove spacing
      
      * finalize docs.
      
      * test debug
      
      * fix: assertion values.
      
      * debug
      
      * debug
      
      * debug
      
      * fix: repeat
      
      * remove prints.
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * Correct more
      
      * Apply suggestions from code review
      
      * Change all
      
      * Clean more
      
      * fix more
      
      * Fix more
      
      * Fix more
      
      * Correct more
      
      * address patrick's comments.
      
      * remove unneeded args
      
      * clean up pipeline.
      
* style
      
      * make the use of additional conditions better conditioned.
      
      * None better
      
      * dtype
      
      * height and width validation
      
      * add a note about size brackets.
      
      * fix
      
      * spit out slow test outputs.
      
      * fix?
      
      * fix optional test
      
      * fix more
      
      * remove unneeded comment
      
      * debug
      
      ---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      d61889fc
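On the position-embedding interpolation above: to reuse one checkpoint across resolutions, the 2D position embedding is resized to the new patch grid. PixArt-Alpha itself rescales its sin-cos grid via an interpolation scale factor; this bicubic-resize sketch shows the same idea (assumes a square grid; names illustrative):

import torch
import torch.nn.functional as F

def interpolate_pos_embed(pos_embed: torch.Tensor, new_h: int, new_w: int) -> torch.Tensor:
    """pos_embed: (1, old_h*old_w, dim) with a square patch grid."""
    _, num_patches, dim = pos_embed.shape
    old = int(num_patches ** 0.5)
    grid = pos_embed.reshape(1, old, old, dim).permute(0, 3, 1, 2)    # (1, dim, H, W)
    grid = F.interpolate(grid, size=(new_h, new_w), mode="bicubic", align_corners=False)
    return grid.permute(0, 2, 3, 1).reshape(1, new_h * new_w, dim)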
  38. 02 Nov, 2023 1 commit
• Animatediff Proposal (#5413) · 2a8cf8e3
      Dhruv Nair authored
      * draft design
      
* clean up (same message repeated across nine commits)
      
      * update pipeline
      
      * clean up
      
      * clean up
      
      * clean up
      
      * add tests
      
      * change motion block
      
      * clean up
      
      * clean up
      
      * clean up
      
* update (same message repeated across eight commits)
      
      * clean up
      
      * update
      
      * update
      
      * update model test
      
      * update
      
      * update
      
      * update
      
      * update
      
      * make style
      
      * update
      
      * fix embeddings
      
      * update
      
      * merge upstream
      
* make fix-copies
      
      * fix bug
      
      * fix mistake
      
      * add docs
      
      * update
      
      * clean up
      
      * update
      
      * clean up
      
      * clean up
      
      * fix docstrings
      
      * fix docstrings
      
      * update
      
      * update
      
      * clean  up
      
      * update
      2a8cf8e3