1. 24 Jun, 2024 1 commit
    • Add extra performance features for EMAModel, torch._foreach operations and better support for non-blocking CPU offloading (#7685) · 2ada094b
      drhead authored
      
      * Add support for _foreach operations and non-blocking to EMAModel
      
      * default foreach to false
      
      * add non-blocking EMA offloading to SD1.5 T2I example script
      
      * fix whitespace
      
      * move foreach to cli argument
      
      * linting
      
      * Update README.md re: EMA weight training
      
      * correct args.foreach_ema
      
      * add tests for foreach ema
      
      * code quality
      
      * add foreach to from_pretrained
      
      * default foreach false
      
      * fix linting
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: drhead <a@a.a>
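The headline change replaces a per-parameter Python loop with batched updates. Below is a minimal, framework-free sketch of the EMA step this PR accelerates; in diffusers the same math is applied across the whole parameter list at once via torch._foreach_mul_ and torch._foreach_add_. The function name here is hypothetical, not the EMAModel API:

```python
def ema_update(ema_params, model_params, decay):
    """One EMA step: ema = decay * ema + (1 - decay) * param.

    torch._foreach_mul_ / torch._foreach_add_ perform these two ops
    over the full parameter list in batched kernels, avoiding Python
    loop overhead; this sketch shows the equivalent per-element math.
    """
    one_minus_decay = 1.0 - decay
    return [
        decay * e + one_minus_decay * p
        for e, p in zip(ema_params, model_params)
    ]
```

The `foreach` behavior defaults to off (see the "default foreach to false" commits above) and is opted into via a CLI argument in the example scripts.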
  2. 23 Jun, 2024 1 commit
  3. 22 Jun, 2024 1 commit
  4. 21 Jun, 2024 4 commits
  5. 20 Jun, 2024 1 commit
  6. 19 Jun, 2024 2 commits
  7. 18 Jun, 2024 10 commits
  8. 17 Jun, 2024 1 commit
  9. 16 Jun, 2024 1 commit
  10. 13 Jun, 2024 6 commits
  11. 12 Jun, 2024 7 commits
  12. 11 Jun, 2024 2 commits
  13. 10 Jun, 2024 1 commit
  14. 07 Jun, 2024 2 commits
    • Move away from `cached_download` (#8419) · 0d68ddf3
      Lucain authored
      * Move away from `cached_download`
      
      * unused constant
      
      * Add custom error
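The gist of this migration: prefer huggingface_hub's hf_hub_download and raise a custom error rather than silently falling back to the deprecated cached_download. A hypothetical sketch of that dispatch logic (the names DeprecatedDownloadError and pick_download_backend are illustrative, not the diffusers implementation):

```python
class DeprecatedDownloadError(RuntimeError):
    """Custom error raised when only the removed cached_download API exists."""


def pick_download_backend(available_names):
    """Choose a download function by name.

    `available_names` stands in for the set of names exported by the
    installed huggingface_hub version.
    """
    if "hf_hub_download" in available_names:
        return "hf_hub_download"
    if "cached_download" in available_names:
        raise DeprecatedDownloadError(
            "cached_download was removed; please upgrade huggingface_hub"
        )
    raise ImportError("no supported download backend found")
```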
    • [Core] support saving and loading of sharded checkpoints (#7830) · 7d887118
      Sayak Paul authored
      
      
      * feat: support saving a model in sharded checkpoints.
      
      * feat: make loading of sharded checkpoints work.
      
      * add tests
      
      * cleanse the loading logic a bit more.
      
      * more resilience while loading from the Hub.
      
      * parallelize shard downloads by using snapshot_download()
      
      * default to a shard size.
      
      * more fix
      
      * Empty-Commit
      
      * debug
      
      * fix
      
      * quality
      
      * more debugging
      
      * fix more
      
      * initial comments from Benjamin
      
      * move certain methods to loading_utils
      
      * add test to check if the correct number of shards are present.
      
      * add a test to check if loading of sharded checkpoints from the Hub is okay
      
      * clarify the unit when passed as an int.
      
      * use hf_hub for sharding.
      
      * remove unnecessary code
      
      * remove unnecessary function
      
      * lucain's comments.
      
      * fixes
      
      * address high-level comments.
      
      * fix test
      
      * subfolder shenanigans.
      
      * Update src/diffusers/utils/hub_utils.py
      Co-authored-by: Lucain <lucainp@gmail.com>
      
      * Apply suggestions from code review
      Co-authored-by: Lucain <lucainp@gmail.com>
      
      * remove _huggingface_hub_version as not needed.
      
      * address more feedback.
      
      * add a test for local_files_only=True
      
      * need hf hub to be at least 0.23.2
      
      * style
      
      * final comment.
      
      * clean up subfolder.
      
      * deal with suffixes in code.
      
      * _add_variant default.
      
      * use weights_name_pattern
      
      * remove add_suffix_keyword
      
      * clean up downloading of sharded ckpts.
      
      * don't return something special when using index.json
      
      * fix more
      
      * don't use bare except
      
      * remove comments and catch the errors better
      
      * fix a couple of things when using is_file()
      
      * empty
      
      ---------
      Co-authored-by: Lucain <lucainp@gmail.com>
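The save path of this PR boils down to greedily packing tensors into shards no larger than a maximum size and recording a key-to-shard index (the weight_map serialized to a *.index.json file). A hypothetical, self-contained sketch of that packing, not the diffusers code; in practice sizes come from tensor byte counts and the default shard size has a unit that must be clarified when passed as an int:

```python
def shard_state_dict(state_dict, sizes, max_shard_size):
    """Greedily split a state dict into shards under max_shard_size.

    Returns (shards, index): `shards` is a list of dicts, and `index`
    maps each key to its shard filename, mirroring a weight_map layout.
    `sizes` gives the byte size of each entry.
    """
    shards, current, current_size = [], {}, 0
    for key, value in state_dict.items():
        size = sizes[key]
        # Start a new shard if adding this tensor would exceed the limit.
        if current and current_size + size > max_shard_size:
            shards.append(current)
            current, current_size = {}, 0
        current[key] = value
        current_size += size
    if current:
        shards.append(current)

    index = {}
    total = len(shards)
    for i, shard in enumerate(shards, start=1):
        name = f"model-{i:05d}-of-{total:05d}.safetensors"
        for key in shard:
            index[key] = name
    return shards, index
```

On load, the reverse happens: the index file is fetched first, and the listed shards are downloaded in parallel (via snapshot_download, per the commits above) before being merged back into one state dict.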