  08 Mar, 2021 1 commit
    • [fix]: handle inputs with containers in mixed precision (#486) · 2e9a14e7
      Min Xu authored
      * [fix]: handle inputs with containers
      
      - this is an issue surfaced by vissl as well
      - fix seems to be super simple
      - also cleaned up two tests so that multiple such tests can run
        back to back (they don't do that presently)
      
      * cleanup
      
      * fix
      
      * lint
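
      The fix amounts to casting floating-point inputs to fp16 even when they arrive
      nested inside containers rather than as bare tensors. A minimal sketch of the
      idea, not fairscale's actual utility (the helper name is illustrative):
      ```
      import torch

      def cast_fp32_to_fp16(obj):
          # Recursively walk dicts/lists/tuples, casting fp32 tensors to fp16
          # and passing everything else through unchanged.
          if torch.is_tensor(obj) and obj.dtype is torch.float32:
              return obj.half()
          if isinstance(obj, dict):
              return {k: cast_fp32_to_fp16(v) for k, v in obj.items()}
          if isinstance(obj, (list, tuple)):
              return type(obj)(cast_fp32_to_fp16(v) for v in obj)
          return obj
      ```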
  02 Mar, 2021 2 commits
    • Myle Ott authored · d2924670
    • [feat] Add context manager to FSDP for easier child module wrapping (#446) · f3359550
      Sean Naren authored
      This adds a context manager that assists in wrapping child modules with shared defaults.
      Usage:
      ```
      import torch
      from fairscale.nn.misc import enable_wrap, wrap

      # handleful_of_important_params: a dict of the FSDP arguments shared by all wrapped layers
      with enable_wrap(**handleful_of_important_params):
          layer_1 = wrap(torch.nn.Linear(5, 5))
          layer_2 = wrap(torch.nn.Linear(5, 5), flatten_parameters=True)  # per-layer overrides win

      # Outside the context manager, wrap() is a no-op and returns the plain Linear layer
      layer_1 = wrap(torch.nn.Linear(5, 5))
      ```
      Outside the enable_wrap context, wrap() is a no-op, so layers can be annotated once without having to duplicate parameter changes.
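
      One plausible implementation of such a context manager, sketched under the
      assumption of a module-level default store and FullyShardedDataParallel as the
      wrapper class (this is not the PR's actual code):
      ```
      import contextlib

      import torch.nn as nn
      from fairscale.nn.data_parallel import FullyShardedDataParallel

      _wrap_defaults = None  # populated only while inside enable_wrap()

      @contextlib.contextmanager
      def enable_wrap(**defaults):
          global _wrap_defaults
          _wrap_defaults = defaults
          try:
              yield
          finally:
              _wrap_defaults = None

      def wrap(module: nn.Module, **overrides) -> nn.Module:
          if _wrap_defaults is None:
              return module  # outside the context manager: no-op
          # Per-call overrides take precedence over the shared defaults.
          return FullyShardedDataParallel(module, **{**_wrap_defaults, **overrides})
      ```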
  01 Mar, 2021 2 commits
    • [chores]: make CI more efficient and update py39 env a bit (#447) · 5eb6b8c7
      Min Xu authored
      * [chores]: CI py39 on GPU and more efficiency
      
      * add test list files
      
      * fix
      
      * add test list files
      
      * split benchmark run into 2 runs
      
      * fix 1.8 version and balance benchmarks
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * recording tests
      
      * py39 install fix
      
      * test again
      
      * move tests
      
      * reorg tests
      
      * skip tests for torch 1.8 due to an upstream bug
      
      * removed __init__.py from tests since it confuses pytest
      
      * Revert "removed __init__.py from tests since it confuses pytest"
      
      This reverts commit 7e156ba33dfaa5ed052031780613ec0cb57a45b0.
      
      * don't include __init__ in file list
      
      * notes on __init__.py and added missing ones
      
      * fixed mypy in a test file
      
      * balance test runtime
      
      * better pip install
      
      * balance more
      
      * pip fix
      
      * balance
      
      * balance more, all tests should finish within 20m now (see the sharding sketch after this entry)
      
      * minor license update
      
      * trying cu102
      
      * more doc and addressed Ben's comments
      
      * debugging
      
      * debugging
      
      * better capture the errors
      
      * debugging
      
      * fix pyenv command
      
      * add universe repo
      
      * update to cuda 11 for 171
      
      * add a test file, improved the checking script
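
      One common way to balance a recorded test list across parallel CI jobs is
      round-robin sharding. A hedged sketch (SHARD_INDEX, NUM_SHARDS, and
      tests_list.txt are illustrative names, not fairscale's actual CI setup):
      ```
      import os

      def tests_for_this_shard(list_file: str) -> list:
          # Read the recorded test list and keep every NUM_SHARDS-th entry,
          # offset by this job's SHARD_INDEX; round-robin keeps shards balanced.
          with open(list_file) as f:
              tests = [line.strip() for line in f if line.strip()]
          shard = int(os.environ.get("SHARD_INDEX", "0"))
          num_shards = int(os.environ.get("NUM_SHARDS", "1"))
          return tests[shard::num_shards]

      if __name__ == "__main__":
          print("\n".join(tests_for_this_shard("tests_list.txt")))
      ```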
    • [test] FSDP: add the failing test for #421 (#453) · 5ecac15a
      Min Xu authored
      * [test] FSDP: add the failing test for #421
      
      * skip on 1.5
      
      * better skipping
      
      * Update tests/nn/data_parallel/test_fsdp_grad_scaler.py
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
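
      "Better skipping" here means version-gating the test; a sketch of the usual
      pytest pattern (the test name is hypothetical):
      ```
      import pytest
      import torch

      # Parse the major/minor version; handles suffixes like "1.8.0+cu102".
      _TORCH_VERSION = tuple(int(x) for x in torch.__version__.split(".")[:2])

      @pytest.mark.skipif(_TORCH_VERSION < (1, 6), reason="requires torch >= 1.6")
      def test_fsdp_with_grad_scaler():
          ...
      ```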
  23 Feb, 2021 3 commits
    • [test]: add peak mem in checkpoint test (#415) · 4b5b4d3d
      Min Xu authored
      * [test]: add peak mem in checkpoint test
      
      * more debugging
      
      * new test
      
      * more fix
      
      * better collection of debug info in case of future failures
      
      * update the comment
      
      * typo
      
      * comment
      
      * clarify
      
      * better wording
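
      Measuring peak CUDA memory around a test step typically looks like the sketch
      below (a generic helper under assumed names, not the test's actual code):
      ```
      import torch

      def peak_cuda_mem_mb(fn, *args, **kwargs):
          # Reset the allocator's high-water mark, run the workload, and
          # report the peak allocation in MiB.
          torch.cuda.reset_peak_memory_stats()
          fn(*args, **kwargs)
          torch.cuda.synchronize()
          return torch.cuda.max_memory_allocated() / 2**20
      ```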
    • [perf][ShardedDDP] fp16 gradient reduce (#411) · d52d2186
      Benjamin Lefaudeux authored
      * POC, testing against the DDP comm hook when available
      * docs, adding a reference to DDP's compress hook
      * updating changelog, prep for v0.1.8 release
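
      For reference, DDP's fp16 compression hook mentioned above is registered like
      this in recent PyTorch versions (model and rank are assumed to be set up
      elsewhere):
      ```
      from torch.distributed.algorithms.ddp_comm_hooks import default_hooks
      from torch.nn.parallel import DistributedDataParallel as DDP

      ddp_model = DDP(model, device_ids=[rank])
      # Compress gradients to fp16 during allreduce, then decompress.
      ddp_model.register_comm_hook(state=None, hook=default_hooks.fp16_compress_hook)
      ```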
    • [bug]: not all CUDA memory is freed when model is deleted (#412) · e3035933
      Min Xu authored
      * [bug]: not all CUDA memory is freed when model is deleted
      
      * fixed memory leak
      
      - without this, peak memory will be high when more than one model
        is trained (i.e. the first model leaves things behind, pushing up
        the peak memory when the second model runs)
      
      * addressed comments
      
      * fix
      
      * changelog
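
      A sketch of how such a leak can be detected in a test (a generic helper, not
      the actual fix):
      ```
      import gc

      import torch

      def assert_cuda_freed(make_and_train):
          # make_and_train should build, train, and drop its model internally.
          before = torch.cuda.memory_allocated()
          make_and_train()
          gc.collect()              # break reference cycles that keep tensors alive
          torch.cuda.empty_cache()  # return cached blocks to the driver
          assert torch.cuda.memory_allocated() == before
      ```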