"docs/vscode:/vscode.git/clone" did not exist on "57a77c450996b066713edd09ed491bbe6622a6c8"
  1. 11 Jul, 2024 2 commits
    • Latte: Latent Diffusion Transformer for Video Generation (#8404) · b8cf84a3
      Xin Ma authored
      
      
      * add Latte to diffusers
      
      * remove print
      
      * remove unused code
      
      * remove layer_norm_latte and add a flag
      
      * remove layer_norm_latte and add a flag
      
      * update latte_pipeline
      
      * update latte_pipeline
      
      * remove unused squeeze
      
      * add norm_hidden_states.ndim == 2: # for Latte
      
      * fixed test latte pipeline bugs
      
      * fixed test latte pipeline bugs
      
      * delete sh
      
      * add doc for latte
      
      * add licensing
      
      * Move Transformer3DModelOutput to modeling_outputs
      
      * give a default value to sample_size
      
      * remove the einops dependency
      
      * change norm2 for latte
      
      * modify pipeline of latte
      
      * update test for Latte
      
      * modify some codes for latte
      
      * modify for Latte pipeline (repeated across a long series of iteration commits)
      
      * video_length -> num_frames; update prepare_latents copied from
      
      * make fix-copies
      
      * make style
      
      * typo: videe -> video
      
      * update
      
      * modify latte pipeline (several further iteration commits)
      
      * Delete .vscode directory
      
      * make style
      
      * make fix-copies
      
      * add latte transformer 3d to docs _toctree.yml
      
      * update example
      
      * reduce frames for test
      
      * fixed bug of _text_preprocessing
      
      * set num frame to 1 for testing
      
      * remove unused print
      
      * add text = self._clean_caption(text) again
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Aryan <contact.aryanvs@gmail.com>
      Co-authored-by: Aryan <aryan@huggingface.co>
      b8cf84a3
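
      A minimal usage sketch for the text-to-video pipeline this commit introduces. The checkpoint id (maxin-cn/Latte-1) and the output handling follow the pattern of diffusers' other video pipelines and may differ slightly from the shipped docs example.

        import torch
        from diffusers import LattePipeline
        from diffusers.utils import export_to_gif

        # Load the Latte text-to-video pipeline in half precision (assumes a CUDA GPU).
        pipe = LattePipeline.from_pretrained(
            "maxin-cn/Latte-1", torch_dtype=torch.float16
        ).to("cuda")

        prompt = "A small cactus with a happy face in the Sahara desert."
        # The pipeline returns batched frames; take the first (and only) video.
        frames = pipe(prompt, num_inference_steps=50).frames[0]
        export_to_gif(frames, "latte.gif")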
    • Reformat docstring for `get_timestep_embedding` (#8811) · 673eb60f
      Alan Du authored
      
      
      * Reformat docstring for `get_timestep_embedding`
      
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      673eb60f
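
      For context, a short sketch of the function whose docstring this commit reformats; it maps each diffusion timestep to a sinusoidal feature vector. The exact defaults are not shown here and the import path is taken from the current module layout.

        import torch
        from diffusers.models.embeddings import get_timestep_embedding

        # Sinusoidal embeddings for a batch of diffusion timesteps: each timestep
        # becomes a vector of sin/cos features of size embedding_dim.
        timesteps = torch.tensor([0, 250, 999])
        emb = get_timestep_embedding(timesteps, embedding_dim=320)
        print(emb.shape)  # torch.Size([3, 320])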
  2. 08 Jul, 2024 2 commits
  3. 06 Jul, 2024 1 commit
  4. 04 Jul, 2024 1 commit
  5. 03 Jul, 2024 4 commits
  6. 02 Jul, 2024 3 commits
  7. 01 Jul, 2024 2 commits
  8. 27 Jun, 2024 2 commits
    • [Chore] perform better deprecation for vqmodeloutput (#8719) · d5dd8df3
      Sayak Paul authored
      perform better deprecation for vqmodeloutput
      d5dd8df3
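
      The general shape of such a deprecation shim, as a sketch only: the module paths and removal version below are assumptions, not the exact diff. The old import location keeps working but routes through diffusers' deprecate() helper.

        # Illustrative shim living in the old module (paths assumed).
        from diffusers.utils import deprecate
        from diffusers.models.autoencoders.vq_model import VQModel as _VQModel


        class VQModel(_VQModel):
            def __init__(self, *args, **kwargs):
                message = (
                    "Importing `VQModel` from `diffusers.models.vq_model` is deprecated; "
                    "import it from `diffusers.models.autoencoders.vq_model` instead."
                )
                # Emits a standardized deprecation warning tied to the removal version.
                deprecate("VQModel", "0.31.0", message)
                super().__init__(*args, **kwargs)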
    • Motion Model / Adapter versatility (#8301) · 3e0d128d
      Mathis Koroglu authored
      * Motion Model / Adapter versatility
      
      - allow a different number of layers per block
      - allow a different number of transformer layers per layer in each block
      - allow a different number of motion attention heads per block
      - use the dropout argument in get_down/up_block in the 3D blocks
      
      * Motion Model added arguments renamed & refactoring
      
      * Add test for asymmetric UNetMotionModel
      3e0d128d
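
      A sketch of what the added flexibility looks like from the user side. The tuple values are made up, and passing per-block tuples to these existing MotionAdapter arguments is an assumption based on what this PR describes.

        from diffusers import MotionAdapter

        # Asymmetric configuration: each tuple entry corresponds to one UNet block.
        adapter = MotionAdapter(
            block_out_channels=(320, 640, 1280, 1280),
            motion_layers_per_block=(1, 2, 2, 2),      # different number of motion layers per block
            motion_num_attention_heads=(4, 4, 8, 8),   # different number of attention heads per block
        )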
  9. 26 Jun, 2024 2 commits
  10. 25 Jun, 2024 1 commit
  11. 24 Jun, 2024 1 commit
  12. 21 Jun, 2024 2 commits
  13. 20 Jun, 2024 1 commit
  14. 19 Jun, 2024 1 commit
  15. 18 Jun, 2024 5 commits
  16. 13 Jun, 2024 1 commit
  17. 12 Jun, 2024 5 commits
  18. 10 Jun, 2024 1 commit
  19. 07 Jun, 2024 1 commit
    • [Core] support saving and loading of sharded checkpoints (#7830) · 7d887118
      Sayak Paul authored
      
      
      * feat: support saving a model in sharded checkpoints.
      
      * feat: make loading of sharded checkpoints work.
      
      * add tests
      
      * cleanse the loading logic a bit more.
      
      * more resilience while loading from the Hub.
      
      * parallelize shard downloads by using snapshot_download()
      
      * default to a shard size.
      
      * more fix
      
      * Empty-Commit
      
      * debug
      
      * fix
      
      * quality
      
      * more debugging
      
      * fix more
      
      * initial comments from Benjamin
      
      * move certain methods to loading_utils
      
      * add test to check if the correct number of shards are present.
      
      * add a test to check if loading of sharded checkpoints from the Hub is okay
      
      * clarify the unit when passed as an int.
      
      * use hf_hub for sharding.
      
      * remove unnecessary code
      
      * remove unnecessary function
      
      * lucain's comments.
      
      * fixes
      
      * address high-level comments.
      
      * fix test
      
      * subfolder shenanigans.
      
      * Update src/diffusers/utils/hub_utils.py
      Co-authored-by: Lucain <lucainp@gmail.com>
      
      * Apply suggestions from code review
      Co-authored-by: Lucain <lucainp@gmail.com>
      
      * remove _huggingface_hub_version as not needed.
      
      * address more feedback.
      
      * add a test for local_files_only=True
      
      * need hf hub to be at least 0.23.2
      
      * style
      
      * final comment.
      
      * clean up subfolder.
      
      * deal with suffixes in code.
      
      * _add_variant default.
      
      * use weights_name_pattern
      
      * remove add_suffix_keyword
      
      * clean up downloading of sharded ckpts.
      
      * don't return something special when using index.json
      
      * fix more
      
      * don't use bare except
      
      * remove comments and catch the errors better
      
      * fix a couple of things when using is_file()
      
      * empty
      
      ---------
      Co-authored-by: Lucain <lucainp@gmail.com>
      7d887118
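
      A minimal sketch of the feature from the user side, assuming the max_shard_size argument this PR adds and using an example UNet checkpoint:

        from diffusers import UNet2DConditionModel

        unet = UNet2DConditionModel.from_pretrained(
            "runwayml/stable-diffusion-v1-5", subfolder="unet"
        )

        # Split the weights into shards no larger than 2GB; an index JSON maps each
        # tensor to the shard file that contains it.
        unet.save_pretrained("sd15-unet-sharded", max_shard_size="2GB")

        # Loading reassembles the shards transparently (also works from the Hub).
        reloaded = UNet2DConditionModel.from_pretrained("sd15-unet-sharded")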
  20. 06 Jun, 2024 1 commit
  21. 05 Jun, 2024 1 commit
    • [LoRA] Remove legacy LoRA code and related adjustments (#8316) · a0542c19
      Sayak Paul authored
      * remove legacy code from load_attn_procs.
      
      * finish first draft
      
      * fix more.
      
      * fix more
      
      * add test
      
      * add serialization support.
      
      * fix-copies
      
      * require peft backend for lora tests
      
      * style
      
      * fix test
      
      * fix loading.
      
      * empty
      
      * address benjamin's feedback.
      a0542c19
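
      After this change, LoRA loading goes through the peft backend; a minimal sketch with an illustrative, hypothetical LoRA repo id:

        import torch
        from diffusers import DiffusionPipeline

        pipe = DiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
        )

        # Requires `peft` to be installed; the LoRA repo id below is a placeholder.
        pipe.load_lora_weights("some-user/some-sdxl-lora", adapter_name="example")
        pipe.set_adapters(["example"], adapter_weights=[0.8])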