1. 15 Nov, 2021 1 commit
    • Allow sharded grad scaler to cpu offload with FSDP (#831) · ba5785f7
      Anupam Bhatnagar authored
      * first commit
      
      * sharded scaler hitting nan assertions
      
      * adding test for sharded grad scaler without cpu offload
      
      * ddp grad scaler and fsdp sharded grad scaler test failing
      
      * removing test_output
      
      * fix no cpu offload test
      
      * changing optimizer from OSS to SGD
      
      * all tests passing, code cleanup pending
      
      * code cleanup
      
      * fix pyproject.toml
      
      * removing .isort.cfg
      
      * running isort linter
      
      * resolving isort issues
      
      * resolving black linter issue
      
      * resolving mypy issues
      
      * fix import statement
      
      * fix mypy error
      
      * modifying import statement
      
      * adding pytorch version requirement
      
      * fixing pytest skip test decorator
      
      * apply version guard for ShardedGradScaler
      
      * removing test_fsdp_grad_scaler
      
      * increasing num_epochs for ShardedGradScaler so that updates are not skipped
      
      * adding support for torch 1.8
      
      * minor edit
      
      * [skip ci] more torch 1.8 changes
      
      * parametrizing the tests
      
      * cleanup code with linters
      
      * [skip ci] update doc string
      
      * [skip ci] addressing some more comments
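      A minimal usage sketch of the feature above (not the PR's test code): ShardedGradScaler driving an FSDP-wrapped model that offloads parameters to CPU. It assumes torch.distributed is already initialized, a CUDA device is available, and the fairscale import paths of this release.

      ```
      import torch

      from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP
      from fairscale.optim.grad_scaler import ShardedGradScaler

      # cpu_offload keeps the full-precision shards on CPU between uses; mixed
      # precision is enabled here because CPU offload is normally paired with it.
      model = FSDP(torch.nn.Linear(8, 8), mixed_precision=True, cpu_offload=True)
      optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
      scaler = ShardedGradScaler()  # sharded-aware drop-in for torch.cuda.amp.GradScaler

      for _ in range(4):
          optimizer.zero_grad()
          with torch.cuda.amp.autocast():
              loss = model(torch.rand(4, 8).cuda()).sum()
          scaler.scale(loss).backward()  # scale the loss to avoid fp16 underflow
          scaler.step(optimizer)         # unscale sharded grads, skip step on inf/nan
          scaler.update()
      ```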
  2. 31 Jul, 2021 1 commit
  3. 11 Jun, 2021 1 commit
    • Use original forward pass directly when in eval mode from within checkpoint wrapper (#709) · 370b8483
      Pete authored
      * add failing test
      
      * add fix
      
      * use 'torch.is_grad_enabled()' instead of 'module.training'
      
      * Revert "add failing test"
      
      This reverts commit 1c34242208f9b2c5fa6c8f181434c2be6d7cdbc0.
      
      * add simple test
      
      * improve test
      
      * add check for fwd_counter
      
      * revert typing/format changes
      
      * move to new test file
      
      * CHANGELOG
      
      * remove old test
      
      * fix import order
      
      * fix test to be compat with torch 1.6.0
      
      * clean up
      
      * comments
      
      * isort 🤦
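      A hedged illustration of the behavior above (not fairscale's actual checkpoint_wrapper code): when gradients are disabled, e.g. during evaluation under torch.no_grad(), activation checkpointing buys nothing, so the wrapper calls the original forward directly and only checkpoints when torch.is_grad_enabled() is True.

      ```
      import torch
      from torch.utils.checkpoint import checkpoint


      class NaiveCheckpointWrapper(torch.nn.Module):  # hypothetical name, for illustration only
          def __init__(self, module: torch.nn.Module):
              super().__init__()
              self.module = module

          def forward(self, *args):
              if not torch.is_grad_enabled():
                  # Eval / no-grad path: run the wrapped module's forward as-is.
                  return self.module(*args)
              # Training path: recompute activations during backward to save memory.
              return checkpoint(self.module, *args)


      wrapped = NaiveCheckpointWrapper(torch.nn.Linear(4, 4))
      with torch.no_grad():
          out = wrapped(torch.randn(2, 4))  # no checkpointing overhead here
      ```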
  4. 07 May, 2021 2 commits
    • [fix]: support pytorch SyncBatchNorm under AMP & checkpointing with FSDP (#659) · 6db68518
      Min Xu authored
      
      
      * [test]: add a more general test case
      
      - also rebalance the tests a bit
      
      * added missing arg
      
      * balance
      
      * better checking
      
      * balance
      
      * make test smaller and faster
      
      * make ddp results cached and enable sync_bn
      
      * clean up
      
      * fix tests
      
      * changelog
      
      * balance
      
      * fix
      
      * addressing comments
      Co-authored-by: Min Xu <min.xu@acm.org>
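      A hedged sketch of the combination this fix targets: PyTorch's SyncBatchNorm inside a checkpointed block, wrapped with FSDP and run under AMP. Import paths are assumed for the fairscale of this era, and torch.distributed plus CUDA are assumed to be set up.

      ```
      import torch

      from fairscale.nn import checkpoint_wrapper
      from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP

      block = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.BatchNorm2d(8), torch.nn.ReLU())
      block = torch.nn.SyncBatchNorm.convert_sync_batchnorm(block).cuda()  # BN -> SyncBatchNorm
      model = FSDP(checkpoint_wrapper(block), mixed_precision=True)

      with torch.cuda.amp.autocast():
          loss = model(torch.randn(2, 3, 16, 16).cuda()).sum()
      loss.backward()
      ```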
    • [feat] experimental.nn.SyncBatchNorm: initial commit (#662) · f0a40046
      msbaines authored
      * [feat] experimental.nn.SyncBatchNorm: initial commit
      
      Fast/simple re-implementation of SyncBatchNorm.
      
      When profiling SSL Vision, I was seeing a majority of cycles spent in
      SyncBatchNorm. With this change, I see a 10% to 20% speedup on the
      model I was profiling.
      
      When running benchmarks/experimental/sync_batchnorm.py on 8 x V100,
      I get a 6x speedup:
      
      <class 'torch.nn.modules.batchnorm.BatchNorm2d'>
      Elapsed time is  0.08709120750427246
      Elapsed time is  0.12632274627685547
      Elapsed time is  0.14095258712768555
      Elapsed time is  0.16529417037963867
      Elapsed time is  0.1419970989227295
      Elapsed time is  0.15166854858398438
      Elapsed time is  0.12000870704650879
      Elapsed time is  0.17534875869750977
      <class 'torch.nn.modules.batchnorm.SyncBatchNorm'>
      Elapsed time is  2.5087168216705322
      Elapsed time is  2.497001886367798
      Elapsed time is  2.5204885005950928
      Elapsed time is  2.526789903640747
      Elapsed time is  2.5080230236053467
      Elapsed time is  2.524489641189575
      Elapsed time is  2.513214588165283
      Elapsed time is  2.5359973907470703
      <class 'fairscale.experimental.nn.sync_batchnorm.SyncBatchNorm'>
      Elapsed time is  0.4126114845275879
      Elapsed time is  0.39051294326782227
      Elapsed time is  0.40685415267944336
      Elapsed time is  0.4159870147705078
      Elapsed time is  0.42383885383605957
      Elapsed time is  0.4080159664154053
      Elapsed time is  0.41202712059020996
      Elapsed time is  0.42400121688842773
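      A hedged usage sketch for the experimental layer benchmarked above (the numbers come from benchmarks/experimental/sync_batchnorm.py). The constructor is assumed to be a drop-in for torch.nn.SyncBatchNorm, and, as with the stock version, an initialized process group and CUDA are assumed.

      ```
      import torch

      from fairscale.experimental.nn import SyncBatchNorm  # import path assumed

      model = torch.nn.Sequential(
          torch.nn.Conv2d(3, 8, kernel_size=3),
          SyncBatchNorm(8),  # used in place of torch.nn.SyncBatchNorm(8)
          torch.nn.ReLU(),
      ).cuda()
      out = model(torch.randn(4, 3, 32, 32).cuda())
      ```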
  5. 30 Apr, 2021 1 commit
  6. 28 Apr, 2021 1 commit
    • [feat] save memory by using bucket buffer only in backward (#633) · a5594032
      Min Xu authored
      
      
      * [feat] save memory by using bucket buffer only in backward
      
      - this fixes bug #627
      - added documentation to clarify the buffer's cost and the speed/memory tradeoff
      - added setup/teardown calls so that the buffer is only allocated during the
        backward pass, freeing memory during the forward pass and optimizer step for
        things like activations
      - added a unit test that asserts the memory usage is in range
      
      Compared with DDP:

        1. the buffer size scales with the number of FSDP instances, not the model size
        2. the buffer is only allocated during the backward pass
        3. the buffer is used for small tensors only, to reduce overhead
        4. the overlap of compute and gradient reduction is very different
      
      * add PR number to changelog
      
      * filled in with memory number on 1.9
      
      * addressed comments
      
      * update comments
      
      * fix for 1.6
      
      * add a todo
      Co-authored-by: Min Xu <min.xu@acm.org>
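      A hedged sketch of the knob behind the notes above: FSDP's bucket_cap_mb sizes the small-tensor reduction buffer, whose cost scales with the number of FSDP instances and which, after this change, is only held during the backward pass. Parameter semantics are assumed from fairscale's FSDP, and torch.distributed is assumed to be initialized.

      ```
      import torch

      from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP

      # Each FSDP instance gets its own bucket, so nested wrapping multiplies the cost;
      # the bucket only batches small gradients, large ones are reduced directly.
      inner = FSDP(torch.nn.Linear(1024, 1024), bucket_cap_mb=25)  # default-sized bucket
      model = FSDP(torch.nn.Sequential(inner, torch.nn.Linear(1024, 10)), bucket_cap_mb=25)

      # Trading speed for memory: a non-positive value is assumed to disable bucketing.
      lean_model = FSDP(torch.nn.Linear(1024, 1024), bucket_cap_mb=0)
      ```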
  7. 26 Apr, 2021 1 commit
  8. 23 Apr, 2021 1 commit
    • [FSDP] relax checking root condition (#620) · d3b86d65
      shuyingsunshine21 authored
      * relax checking root condition
      
      * formatting
      
      * add unittest
      
      * add unittest to ci test list
      
      * isort for import of unittest
      
      * format black .
      
      * move test to list 1
      
      * add skip no cuda
      
      * black and isort
  9. 19 Apr, 2021 1 commit
    • FSDP: fixing training with freezing weights (#614) · 24da3b11
      Min Xu authored
      
      
      * FSDP: fixing training with freezing weights
      
      - an assert is changed to catch this case correctly
      - unit test added (based on Quentin's test code) for this case, comparing
        DDP and FSDP
      
      fixes: #610
      
      * added test file to list 1
      
      * Use better and simpler code as suggested by Myle
      
      * testing both methods of freezing as well
      Co-authored-by: Min Xu <min.xu@acm.org>
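      A hedged sketch of freezing part of an FSDP model, in the spirit of the test above (the test's exact freezing methods may differ). torch.distributed is assumed to be initialized; flatten_parameters is turned off here so frozen and trainable weights are not fused into a single flat parameter.

      ```
      import torch

      from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP

      trunk = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())
      head = torch.nn.Linear(16, 4)

      # Method A: freeze the trunk by turning off gradients before wrapping.
      for p in trunk.parameters():
          p.requires_grad = False

      model = FSDP(torch.nn.Sequential(trunk, head), flatten_parameters=False)

      # Method B: additionally (or instead), hand only trainable parameters to the optimizer.
      optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.1)
      ```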
  10. 04 Mar, 2021 1 commit
    • [feat]: checkpoint and normalization (#457) · 5e64d6a7
      Min Xu authored
      * [feat]: checkpoint and normalization
      
      - added special handling of BN for track_running_stats and checkpointing
      - we test BN/LN and checkpointing
      - we test them with mixed precision
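      A hedged sketch of the case covered here: activation checkpointing around a block containing BatchNorm, where the recomputed forward must not update running_mean/running_var a second time (the special handling lives inside fairscale's checkpoint_wrapper; the import path is assumed).

      ```
      import torch

      from fairscale.nn import checkpoint_wrapper  # import path assumed

      block = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.BatchNorm1d(8), torch.nn.ReLU())
      block = checkpoint_wrapper(block)

      x = torch.randn(4, 8, requires_grad=True)
      block(x).sum().backward()  # running stats should be updated only once per step
      ```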
  11. 02 Mar, 2021 2 commits
    • Myle Ott authored · d2924670
    • [feat] Add context manager to FSDP for easier child module wrapping (#446) · f3359550
      Sean Naren authored
      This adds a context manager that makes it easy to wrap child modules with a shared set of default arguments.
      Usage:
      ```
      import torch

      from fairscale.nn.misc import enable_wrap, wrap

      # `handful_of_important_params` stands in for your shared FSDP keyword arguments.
      with enable_wrap(**handful_of_important_params):
          layer_1 = wrap(torch.nn.Linear(5, 5))
          layer_2 = wrap(torch.nn.Linear(5, 5), flatten_parameters=True)  # override per-layer if you'd like

      # Outside the context manager, wrap() is a no-op and just returns the Linear layer.
      layer_1 = wrap(torch.nn.Linear(5, 5))
      ```
      Outside the enable_wrap context, wrap() is a no-op. This makes it easy to annotate layers for wrapping without duplicating configuration parameters everywhere.
  12. 01 Mar, 2021 1 commit
    • [chores]: make CI more efficient and update py39 env a bit (#447) · 5eb6b8c7
      Min Xu authored
      * [chores]: CI py39 on GPU and more efficiency
      
      * add test list files
      
      * fix
      
      * add test list files
      
      * split benchmark run into 2 runs
      
      * fix 1.8 version and balance benchmarks
      
      * fix
      
      * fix
      
      * fix
      
      * fix
      
      * recording tests
      
      * py39 install fix
      
      * test again
      
      * move tests
      
      * reorg tests
      
      * skip tests for torch 1.8 due to an upstream bug
      
      * removed __init__.py from tests since it confuses pytest
      
      * Revert "removed __init__.py from tests since it confuses pytest"
      
      This reverts commit 7e156ba33dfaa5ed052031780613ec0cb57a45b0.
      
      * don't include __init__ in file list
      
      * notes on __init__.py and added missing ones
      
      * fixed mypy in a test file
      
      * balance test runtime
      
      * better pip install
      
      * balance more
      
      * pip fix
      
      * balance
      
      * balance more, all tests should finish within 20m now
      
      * minor license update
      
      * trying cu102
      
      * more doc and addressed Ben's comments
      
      * debugging
      
      * debugging...