1. 19 Apr, 2021 1 commit
    • FSDP: fixing training with freezing weights (#614) · 24da3b11
      Min Xu authored
      
      
      * FSDP: fixing training with freezing weights
      
      - an assert was changed to catch this case correctly
      - a unit test was added (based on Quentin's test code) for this case to
        compare DDP and FSDP
      
      fixes: #610
      
      * added test file to list 1
      
      * Use better and simpler code as suggested by Myle
      
      * testing both methods of freezing as well (see the sketch below)
      Co-authored-by: Min Xu <min.xu@acm.org>
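      Below is a minimal, hypothetical sketch (not the repository's actual unit test) of the DDP-vs-FSDP freezing comparison this commit adds: the trunk is frozen via requires_grad=False and only trainable parameters go to the optimizer. The model, sizes, optimizer, and the flatten_parameters=False choice are illustrative assumptions; it also assumes fairscale's FullyShardedDataParallel import path, a CUDA device, and an already-initialized torch.distributed process group.

      ```python
      # Hypothetical sketch only -- not the test added by this commit.
      import torch
      import torch.nn as nn
      from torch.nn.parallel import DistributedDataParallel as DDP
      from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP

      def make_model() -> nn.Sequential:
          # Illustrative model: index 0 is the frozen "trunk", index 2 the trainable head.
          model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2)).cuda()
          for p in model[0].parameters():
              p.requires_grad = False  # freeze the trunk
          return model

      def train(wrapped: nn.Module) -> None:
          # Only parameters that still require grad are given to the optimizer.
          optim = torch.optim.SGD([p for p in wrapped.parameters() if p.requires_grad], lr=0.1)
          for _ in range(3):
              loss = wrapped(torch.randn(4, 8, device="cuda")).sum()
              loss.backward()
              optim.step()
              optim.zero_grad()

      train(DDP(make_model()))                             # reference behaviour
      train(FSDP(make_model(), flatten_parameters=False))  # should now match DDP with frozen weights
      ```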
  2. 15 Apr, 2021 1 commit
  3. 13 Apr, 2021 3 commits
  4. 08 Apr, 2021 1 commit
  5. 07 Apr, 2021 2 commits
  6. 06 Apr, 2021 1 commit
  7. 05 Apr, 2021 1 commit
  8. 04 Apr, 2021 3 commits
  9. 02 Apr, 2021 1 commit
  10. 01 Apr, 2021 1 commit
  11. 31 Mar, 2021 4 commits
    • msbaines authored
    • [offload] Audit OffloadModel API, add error messages and remove redundant code path. (#557) · 34384e1b
      anj-s authored
      * renaming/adding error messages
      
      * address comments
      
      * address comments
      
      * add more comments
      
      * add more comments
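      For context, here is a small usage sketch of the OffloadModel API this commit audits. The import path, constructor arguments, and values below are assumptions based on fairscale's offload tutorial, not something taken from this commit; verify them against the current fairscale docs.

      ```python
      # Assumed OffloadModel usage (illustrative; verify against fairscale's docs).
      import torch
      import torch.nn as nn
      from fairscale.experimental.nn.offload import OffloadModel

      # OffloadModel splits a sequential model into slices kept on the offload
      # device (CPU) and moves one slice at a time onto the compute device.
      model = nn.Sequential(*[nn.Linear(256, 256) for _ in range(8)])
      offload_model = OffloadModel(
          model=model,
          device=torch.device("cuda"),
          offload_device=torch.device("cpu"),
          num_slices=4,
      )

      out = offload_model(torch.randn(16, 256, device="cuda"))
      out.sum().backward()
      ```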
    • [fix] FSDP: disable single rank process group for auto_wrap_bn and fixed mixed precision regnet test (#556) · a0458b98
      Min Xu authored
      [fix] FSDP: disable single rank process group for auto_wrap_bn and fixed mixed precision regnet test (#556)
      
      * [fix] disable single rank process group for auto_wrap_bn
      
      - beefed up the unit test with a regnet-like model
      - found that the single-rank process group was causing problems
      - disabled it to enable convergence tests on the vissl side
      - used `raise e from None` to get better assertion output
        in testing.py
      
      * [test] fix regnet test for ddp+mixed_precision
      
      - need an AMP context in FSDP
      - worked around a difference between ddp & fsdp when bias=True
      - fixed a bug in input data generation that caused different ranks to
        have the same data and a wrong iteration count
      - added a TODO for a better loss and grad_scaler, and reduced
        iters so there is no NaN
      - added (disabled) debugging code
      
      * lint
      
      * lint
      
      * add scaler
      
      * lint
      
      * scaler
      
      * add a real loss
      
      * seeding in the ranks
      
      * balance tests
      
      * run AMP DDP==FSDP test only on CUDA version 11 and up (see the AMP sketch below)
      
      * add relu inplace and comment
      
      * make wrap_bn cover more cases in full precision mode
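      A rough sketch of the AMP-under-FSDP pattern this commit wrestles with: per-rank seeding so ranks generate different data, an autocast forward, and a gradient scaler. The model, seed, and the plain torch.cuda.amp.GradScaler are illustrative assumptions rather than the test's actual code; a sharded-aware scaler (e.g. fairscale's ShardedGradScaler) may be what is ultimately needed, as the commit's TODO suggests.

      ```python
      # Illustrative only: AMP + FSDP training step with per-rank seeding.
      # Assumes torch.distributed is already initialized and one GPU per rank.
      import torch
      import torch.distributed as dist
      import torch.nn as nn
      from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP

      def train_amp(model: nn.Module, iters: int = 3) -> None:
          rank = dist.get_rank()
          torch.manual_seed(1234 + rank)        # seed per rank so each rank sees different data
          fsdp_model = FSDP(model.cuda(), mixed_precision=True)
          optim = torch.optim.SGD(fsdp_model.parameters(), lr=0.01)
          scaler = torch.cuda.amp.GradScaler()  # the real test may need a sharded-aware scaler
          for _ in range(iters):
              inputs = torch.randn(4, 8, device="cuda")
              with torch.cuda.amp.autocast():   # the AMP context the commit says FSDP needs
                  loss = fsdp_model(inputs).float().sum()
              scaler.scale(loss).backward()
              scaler.step(optim)
              scaler.update()
              optim.zero_grad()

      # Example: train_amp(nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2)))
      ```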
    • acb9ef00
      msbaines authored
  12. 30 Mar, 2021 1 commit
  13. 29 Mar, 2021 1 commit
  14. 28 Mar, 2021 1 commit
  15. 26 Mar, 2021 1 commit
  16. 25 Mar, 2021 2 commits
  17. 22 Mar, 2021 1 commit
  18. 20 Mar, 2021 1 commit
  19. 19 Mar, 2021 3 commits
  20. 18 Mar, 2021 5 commits
  21. 17 Mar, 2021 2 commits
  22. 15 Mar, 2021 1 commit
  23. 12 Mar, 2021 2 commits