1. 02 Nov, 2021 1 commit
  2. 01 Nov, 2021 2 commits
    • [feat] [FSDP]: add experimental support for shared weights (#836) · f2af4c66
      Min Xu authored
      
      
      * added a new test, passing without shared weights
      
      * tested weight sharing
      
      * added the test to the test list file
      
      * extended to world_size = 2
      
      * fixed test
      
      * [feat]: add limited and experimental support for shared parameters
      
      * fixed tests
      
      * simplified the approach to work with layers that have at least one non-shared param and added code to pick up the linked_param field for sharding the shared param (a sketch of the targeted weight-sharing pattern follows this entry)
      
      * fixed the case where linked param is not in separate FSDP
      
      * changelog and remove old code
      Co-authored-by: Min Xu <min.xu.public@gmail.com>
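      A minimal sketch of the kind of weight sharing this experimental support
      targets, assuming fairscale's FullyShardedDataParallel import path; the
      tied-embedding model, the single-process gloo group, and all names below
      are illustrative and not taken from the test added in this PR:

      import os

      import torch
      import torch.distributed as dist
      import torch.nn as nn
      from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP


      class TiedModel(nn.Module):
          """The embedding and the output head share one weight tensor."""

          def __init__(self, vocab: int = 100, dim: int = 16):
              super().__init__()
              self.embed = nn.Embedding(vocab, dim)
              self.trunk = nn.Linear(dim, dim)  # non-shared params live next to the shared one
              self.head = nn.Linear(dim, vocab, bias=False)
              self.head.weight = self.embed.weight  # the shared (linked) parameter

          def forward(self, tokens: torch.Tensor) -> torch.Tensor:
              return self.head(torch.relu(self.trunk(self.embed(tokens))))


      if __name__ == "__main__":
          # Single-process group so FSDP can be constructed; real runs would use
          # a multi-process launcher with one rank per device.
          os.environ.setdefault("MASTER_ADDR", "localhost")
          os.environ.setdefault("MASTER_PORT", "29500")
          dist.init_process_group(backend="gloo", rank=0, world_size=1)

          # Keeping both users of the tied weight under one FSDP instance is the
          # simplest configuration for this limited, experimental support; how
          # the linked_param field is picked up for sharding is internal to FSDP.
          model = FSDP(TiedModel())
          print(model)
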
    • [feature] Add the low level SSD APIs (#829) · a9fcaa28
      anj-s authored
      * add doc strings
      
      * add lower level SSD APIs and tests
      
      * add the test to the list to be run
      
      * remove unused imports
      
      * more doc string changes
      
      * fix lint errors
  3. 28 Oct, 2021 1 commit
  4. 27 Oct, 2021 6 commits
  5. 24 Oct, 2021 1 commit
  6. 22 Oct, 2021 2 commits
    • modify golden data (#825) · 35f327f3
      anj-s authored
    • Extend auto shard capabilities to work around torch.fx edge cases. (#817) · 7bdf50a3
      Eugen Hotaj authored
      auto_shard.py currently uses torch.fx to create a symbolic DAG of
      operations and linearizes that DAG into an nn.Sequential so it can later
      be used for model offloading. This works in most cases but runs into
      issues for certain eager mode features, such as dynamic conditionals,
      shape-dependent computation, etc.
      
      This PR extends auto_shard.py to first run a preprocessing step that
      wraps any nn.Module that cannot be traced through. It adds a test for
      dynamic conditionals and updates existing failing test code; a rough
      sketch of the wrapping idea follows this entry.
      
      There are some immediate extensions to this approach which are marked as
      TODO in the code.
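      A rough, self-contained sketch of the failure mode and the wrap-before-
      trace idea described above, using plain torch.fx; DynamicBranch,
      WrapUntraceable, and the other names are illustrative and not the actual
      auto_shard.py internals:

      import torch
      import torch.fx
      import torch.nn as nn


      class DynamicBranch(nn.Module):
          # Data-dependent control flow: symbolic tracing cannot evaluate the
          # condition on a Proxy, so torch.fx.symbolic_trace() raises on this.
          def forward(self, x):
              return x * 2 if x.sum() > 0 else x * -1


      class Model(nn.Module):
          def __init__(self):
              super().__init__()
              self.linear = nn.Linear(8, 8)
              self.branch = DynamicBranch()

          def forward(self, x):
              return self.branch(self.linear(x))


      class WrapUntraceable(torch.fx.Tracer):
          # Treat modules we cannot trace through as leaves so they appear in
          # the graph as single call_module nodes instead of aborting the trace.
          def is_leaf_module(self, m, qualified_name):
              return isinstance(m, DynamicBranch) or super().is_leaf_module(m, qualified_name)


      model = Model()
      graph = WrapUntraceable().trace(model)
      gm = torch.fx.GraphModule(model, graph)
      print(gm.graph)  # "branch" is preserved as one opaque call_module node

      Because the untraceable module survives as a single call_module node,
      linearizing the top-level nodes of the traced graph into an nn.Sequential
      stays well defined.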
  7. 21 Oct, 2021 2 commits
  8. 20 Oct, 2021 3 commits
  9. 19 Oct, 2021 1 commit
  10. 28 Sep, 2021 1 commit
  11. 24 Sep, 2021 1 commit
  12. 22 Sep, 2021 1 commit
    • Switch default branch from master to main (#807) · b09ddb2d
      tmarkstrum authored
      * update master branch to main
      
      * added FAQ about updating the branch from master to main
      
      * fixed some false-positive corrections

      * added a "what is new" section

      * fixed the quoted code area

      * added a release "what is new" section
      
      * added a step in release.md
      
      * fixed a word
  13. 21 Sep, 2021 1 commit
  14. 20 Sep, 2021 1 commit
  15. 17 Sep, 2021 1 commit
  16. 13 Sep, 2021 1 commit
  17. 12 Sep, 2021 2 commits
    • [fix] minor fixes for master branch (#792) · 31e36453
      Min Xu authored
      
      
      * add changelog for previous commit
      
      * add changelog for previous commit
      
      * add changelog for previous commit
      
      * fix a merge induced error
      Co-authored-by: Min Xu <min.xu.public@gmail.com>
    • [fix] FSDP intra-backwards gradient accumulation. (#784) · 4fa2ab9b
      Darryl Barnhart authored
      * [fix] FSDP intra-backwards gradient accumulation.
      
      Ensure gradient reduction accumulates into the unsharded gradient tensor
      within a backward pass. This matters when an FSDP module is called
      multiple times within a forward pass and reduction is _not_ deferred
      via activation-checkpoint forward counters, bucketing, or some other
      mechanism (a sketch of this multiple-call scenario follows this entry).
      
      Closes #780
      
      * [refactor] Remove forward counters. Comments.
      
      Removed forward counters from the activation checkpointing utility, now
      that FSDP does not require them for correct operation. Added a more
      detailed comment about memory usage behaviour with gradient reduction.
      
      * [refactor] Delete deprecated forward counter usage.
      
      * [refactor] Add state assertion at end of pre-backward hook.
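      A minimal sketch of the multiple-call scenario described above, assuming
      fairscale's FullyShardedDataParallel; the reused block, the single-process
      gloo group, and every name here are illustrative only, and a real run
      would drive the forward/backward under a distributed launcher:

      import os

      import torch
      import torch.distributed as dist
      import torch.nn as nn
      from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP


      class ReusedBlock(nn.Module):
          """Calls the same FSDP-wrapped submodule twice in one forward pass."""

          def __init__(self):
              super().__init__()
              self.inner = FSDP(nn.Linear(4, 4))

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              # Two calls to self.inner mean its gradients are reduced twice
              # within a single backward pass; the fix makes the second
              # reduction accumulate into the unsharded gradient tensor instead
              # of overwriting the first.
              return self.inner(self.inner(x))


      if __name__ == "__main__":
          os.environ.setdefault("MASTER_ADDR", "localhost")
          os.environ.setdefault("MASTER_PORT", "29501")
          dist.init_process_group(backend="gloo", rank=0, world_size=1)

          model = FSDP(ReusedBlock())
          # loss = model(torch.randn(2, 4)).sum(); loss.backward() under a
          # proper launcher (and, in practice, on GPU) exercises this
          # accumulation path.
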
  18. 11 Sep, 2021 1 commit
    • [feat] set requires_grad of output tensors of checkpointed modules properly (#787) · 482944d9
      Alex Xiao authored
      
      
      Before this commit, output tensors of checkpointed modules always
      required grad, even when they shouldn't have. This commit makes the
      outputs of checkpointed modules require grad only if either the input
      or the parameters require grad (a small sketch follows this entry).

      To achieve this, this commit also adds a new _unflattened_param_views
      attribute to modules being flattened. This allows the checkpointing
      code to still access the parameters and check whether gradients need
      to be computed.
      Co-authored-by: Alex Xiao <axiao@fb.com>
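      A small sketch of the resulting behaviour, assuming fairscale's
      checkpoint_wrapper import path; the frozen Linear layer is illustrative:

      import torch
      import torch.nn as nn
      from fairscale.nn.checkpoint import checkpoint_wrapper

      # A frozen (requires_grad=False) layer wrapped with activation checkpointing.
      frozen = nn.Linear(4, 4)
      for p in frozen.parameters():
          p.requires_grad = False
      wrapped = checkpoint_wrapper(frozen)

      x = torch.randn(2, 4)  # the input does not require grad either
      out = wrapped(x)

      # With this change the checkpointed output requires grad only when the
      # input or a parameter does; here neither does.
      print(out.requires_grad)  # expected: False
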
  19. 10 Sep, 2021 2 commits
  20. 07 Sep, 2021 1 commit
  21. 06 Sep, 2021 2 commits
  22. 05 Sep, 2021 1 commit
  23. 18 Aug, 2021 1 commit
  24. 12 Aug, 2021 4 commits