1. 08 Aug, 2022 1 commit
  2. 29 Jun, 2022 1 commit
  3. 12 Jun, 2022 1 commit
  4. 26 May, 2022 1 commit
  5. 02 May, 2022 1 commit
    • [FSDP] ssd_offload fixing backward path (grad_fn) for SsdFlatParameter and SsdFlatParameterView (#974) · 51b53ddb
      Paul Johnson authored
      
      * [FSDP] fixing backward path for SsdFlatParameter and SsdFlatParameterView when overriding .data
      
      * Get ssd_offload unit tests passing
      
      * [FSDP] get all test_fsdp_offload tests passing w/ ssd_offload on
      
      * Update changelog
  6. 26 Apr, 2022 1 commit
  7. 06 Apr, 2022 1 commit
  8. 14 Feb, 2022 1 commit
    • [chore] [cleanup]: pytest, pytorch new versions, fix tests (#933) · fae29959
      Min Xu authored
      
      
      * update pytest versions
      
      * [test] test related changes
      
      - upgrade to newer pytorch versions
      - added a function to make tests more deterministic on A100 GPUs with TF32
      - fixed some tests so that they are correctly skipped on a single GPU system
      
      * more fixes
      
      * formatting overly long lines
      
      * format
      
      * better test without triggering a warning
      
      * fix an optim state bug with newer pytorch
      
      - the adam optimizer seems to return "step" as a singleton tensor now in the nightly build
      - this fixes it, assuming a non-tensor value can still be loaded back by the optimizer (a minimal sketch follows this entry)
      
      * improve oss.py
      
      - using min_loss for regression checking is a bit more reliable
      - also increased the num epochs from 10 to 12
      
      * small oss.py fix
      
      * Update fairscale/nn/data_parallel/fully_sharded_data_parallel.py
      Co-authored-by: Min Xu <min.xu.public@gmail.com>
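      A minimal sketch of the "step" normalization described in the optim-state bullet above, assuming the standard optimizer state_dict layout; normalize_step_values is a hypothetical helper, not fairscale's actual code:

          import torch

          def normalize_step_values(osd):
              # Newer pytorch nightlies store Adam's "step" as a singleton tensor;
              # unwrap it to a plain int so it can still be loaded back by the optimizer.
              for param_state in osd.get("state", {}).values():
                  step = param_state.get("step")
                  if isinstance(step, torch.Tensor):
                      param_state["step"] = int(step.item())
              return osd

          # usage: optimizer.load_state_dict(normalize_step_values(saved_state))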
  9. 28 Jan, 2022 1 commit
  10. 07 Jan, 2022 1 commit
    • [FSDP] Enable FSDP reduce scatter overlap (#897) · 0a526bcb
      tmarkstrum authored
      * enable reduce scatter overlap with other operations
      
      * fixed unit tests and added docstrings for the new parameters for fsdp
      
      * fixed more unit tests
      
      * fixed unit tests
      
      * avoided the pickle error on process_group_reduce_scatter
      
      * removed an unnecessary parameter in unit tests
      
      * remove unnecessary prints
      
      * fixed the docstring
      
      * skipped the test_offload unit test because it was already failing on the main branch
      
      * removed the enable_reduce_scatter_overlap API parameter
      
      * added a docstring for the default value of the process_group_reduce_scatter parameter
      
      * fixed a syntax bug
      
      * fixed a bug which caused a unit test failure
      
      * removed all_gather from the ProcessGroupName enum
      
      * added more comments
      
      * changed the default value of process_group_reduce_scatter from None to ProcessGroupName.reduce_scatter (a usage sketch follows this entry)
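      A hedged usage sketch of the parameter this entry adds; the import path for ProcessGroupName is an assumption, and torch.distributed is assumed to be initialized:

          import torch
          from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP
          from fairscale.utils.parallel import ProcessGroupName  # import path is an assumption

          # With a dedicated reduce-scatter process group (the default per this commit),
          # FSDP can overlap the gradient reduce-scatter with other collectives.
          model = FSDP(
              torch.nn.Linear(1024, 1024).cuda(),
              process_group_reduce_scatter=ProcessGroupName.reduce_scatter,
          )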
  11. 05 Jan, 2022 1 commit
    • Enabling ssd_offload training basic tests. (#887) · c5e471bc
      Paul Johnson authored
      * Enabling ssd_offload training and tests via tests/nn/data_parallel/test_fsdp_offload.py.
      * Removed unused classes: SsdBuffer, SsdTensorHandleView, SsdParameter, SsdTensor
      * Enhance test coverage of test_ssd_offloading_train_flatten_params_wrapper
      * Modifications from PR #887 review comments.
      * Update Changelog
  12. 13 Dec, 2021 1 commit
    • [feat] support eval in mevo (#884) · 56add6d5
      Min Xu authored
      - During eval, we fall back to a plain output projection without fusing (a sketch follows this entry)
      - added a unit test to ensure the output shape is correct
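      An illustrative sketch (not MEVO's actual code) of the eval-time fallback described above: during eval the fused kernel is skipped in favor of a plain output projection, keeping the output shape unchanged:

          import torch
          import torch.nn.functional as F

          class FusedVocabOutput(torch.nn.Module):
              def __init__(self, dim, vocab_size):
                  super().__init__()
                  self.proj = torch.nn.Linear(dim, vocab_size, bias=False)

              def forward(self, x, target=None):
                  if not self.training:
                      # Eval: plain output projection, no fusing.
                      return self.proj(x)
                  # Training path: stand-in for the fused projection+loss kernel.
                  return F.cross_entropy(self.proj(x), target)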
  13. 12 Nov, 2021 1 commit
    • Setup pre-commit github action and apply pre-commit to all files (#849) · 7d7edf6d
      Anupam Bhatnagar authored
      * adding pre-commit files
      
      * applying pre-commit to all files
      
      * adding no-strict-optional argument to mypy in circle ci config
      
      * fix typo
      
      * updating python versions
      
      * [skip ci] remove extra args
      
      * adding python 3.9
      
      * [skip ci] set pre-commit version in requirements-dev.txt
      
      * set CACHE_VERSION
      
      * move linters from circleci to github actions
      
      * update python version
      
      * update python version in benchmarks_2
      
      * moving to python 3.9.7
  14. 08 Nov, 2021 2 commits
  15. 05 Nov, 2021 1 commit
    • [feat] experimental MEVO layer (#840) · 8347c1a2
      Min Xu authored
      
      
      * [feat] MEVO kernel
      
      - initial import from min/softmax and min/testing branches
      - need to rename and further cleanup
      
      * only test with newer pytorch
      
      * renamed and added comments and code cleanup
      
      * rename and reduce test memory
      
      * testing
      
      * minor fixing
      
      * fixing
      
      * more fixes
      
      * changelog
      
      * more 1.7 and 1.8 paper cuts
      
      * remove dead code
      
      * addressed Benjamin's comments
      
      * addressed more comments
      Co-authored-by: Min Xu <min.xu.public@gmail.com>
  16. 01 Nov, 2021 1 commit
    • [feature] Add the low level SSD APIs (#829) · a9fcaa28
      anj-s authored
      * add doc strings
      
      * add lower level SSD APIs and tests
      
      * add the test to the list to be run
      
      * remove unused imports
      
      * more doc string changes
      
      * fix lint errors
  17. 27 Oct, 2021 1 commit
  18. 22 Oct, 2021 1 commit
    • Extend auto shard capabilities to work around torch.fx edge cases. (#817) · 7bdf50a3
      Eugen Hotaj authored
      auto_shard.py currently uses torch.fx to create a symbolic DAG of
      operations and linearizes that DAG into an nn.Sequential so it can later
      be used for model offloading. This works in most cases but runs into
      issues for certain eager mode features, such as dynamic conditionals,
      shape-dependent computation, etc.
      
      This PR extends auto_shard.py to first run a preprocessing step that wraps
      any nn.Module that cannot be traced through (see the sketch after this
      entry). It adds a test for dynamic conditionals and updates existing
      failing test code.
      
      There are some immediate extensions to this approach which are marked as
      TODO in the code.
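      A self-contained illustration of the edge case and the wrapping idea, under the assumption that treating the untraceable module as a torch.fx leaf is representative of what the preprocessing step does; the class names here are hypothetical:

          import torch
          from torch import fx

          class DynamicBranch(torch.nn.Module):
              def forward(self, x):
                  # Data-dependent control flow: fx symbolic tracing cannot follow this.
                  return x.relu() if x.sum() > 0 else x.neg()

          class Model(torch.nn.Module):
              def __init__(self):
                  super().__init__()
                  self.branch = DynamicBranch()
                  self.linear = torch.nn.Linear(4, 4)

              def forward(self, x):
                  return self.linear(self.branch(x))

          class LeafWrappingTracer(fx.Tracer):
              def is_leaf_module(self, m, qualname):
                  # Treat the untraceable submodule as an opaque call_module node.
                  return isinstance(m, DynamicBranch) or super().is_leaf_module(m, qualname)

          graph = LeafWrappingTracer().trace(Model())  # succeeds; plain fx.symbolic_trace would raise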
  19. 21 Oct, 2021 1 commit
    • [chore] Update the PyTorch version that we run CPU tests with (#809) · 11a24161
      anj-s authored
      * update python version for cpu tests
      
      * run CPU tests with updated PyTorch version
      
      * update nightly and test PyTorch versions
      
      * skip failing multiprocess pipe test
      
      * always skip test
      
      * always skip test
      
      * always skip test
      
      * lint error
      
      * skip unsupported versions
      
      * improve skip message
      
      * lint errors
  20. 12 Sep, 2021 1 commit
    • [fix] FSDP intra-backwards gradient accumulation. (#784) · 4fa2ab9b
      Darryl Barnhart authored
      * [fix] FSDP intra-backwards gradient accumulation.
      
      Ensure gradient reduction accumulates into the unsharded gradient tensor
      within a backwards pass. This matters when an FSDP module is called
      multiple times within a forward pass, and reduction is _not_ deferred
      using activation checkpoint forward counters, bucketing or some other
      mechanism (a sketch of this call pattern follows this entry).
      
      Closes #780
      
      * [refactor] Remove forward counters. Comments.
      
      Removed forward counters from the activation checkpointing utility, now
      that FSDP does not require them for correct operation. Added a more
      detailed comment about memory usage behaviour with gradient reduction.
      
      * [refactor] Delete deprecated forward counter usage.
      
      * [refactor] Add state assertion at the end of the pre-backward hook.
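      A sketch of the call pattern the fix targets (a hypothetical setup, assuming an initialized torch.distributed process group): the same FSDP-wrapped submodule runs twice in one forward, so its gradient is reduced twice per backwards pass and the partial reductions must accumulate:

          import torch
          from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP

          class TwoPass(torch.nn.Module):
              def __init__(self):
                  super().__init__()
                  self.shared = FSDP(torch.nn.Linear(16, 16))  # called twice below

              def forward(self, x):
                  return self.shared(self.shared(x))

          model = FSDP(TwoPass())
          model(torch.randn(4, 16)).sum().backward()
          # Pre-fix, the second reduction could overwrite rather than accumulate
          # into the unsharded gradient tensor.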
  21. 28 Jun, 2021 1 commit
  22. 26 Jun, 2021 1 commit
  23. 25 Jun, 2021 2 commits
  24. 22 Jun, 2021 1 commit
    • Update torch to 1.9.0 release (#717) · 1cc4c837
      Pavel Belevich authored
      * Update torch to 1.9.0.dev20210614+cu102
      
      * Update config.yml
      
      * Update config.yml
      
      * Update setup.py
      
      * Update config.yml
      
      * Update config.yml
      
      * Update config.yml
      
      * Update config.yml
  25. 11 Jun, 2021 1 commit
    • [Offload][feature] Add auto shard functionality to remove requirement of nn.Sequential models. (#695) · cbeda830
      anj-s authored
      
      * auto wrap functionality
      
      * lint and doc strings
      
      * fix lint errors
      
      * lint errors and version skips
      
      * remove mypy checking and add conditional import
      
      * another math.prod instance
      
      * another import fix
      
      * address comments
      
      * lint errors
      
      * address comments
      
      * fix lint errors
      
      * add placeholder nodes to tracker list (a usage sketch follows this entry)
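      A hedged sketch of what this change enables: passing a plain (non-Sequential) nn.Module to the experimental offload wrapper. The OffloadModel parameter names are recalled from fairscale's experimental API and should be treated as assumptions:

          import torch
          from fairscale.experimental.nn.offload import OffloadModel

          class Net(torch.nn.Module):  # a plain nn.Module, not an nn.Sequential
              def __init__(self):
                  super().__init__()
                  self.fc1 = torch.nn.Linear(32, 32)
                  self.fc2 = torch.nn.Linear(32, 32)

              def forward(self, x):
                  return self.fc2(torch.relu(self.fc1(x)))

          # With auto shard, the wrapper can trace and slice non-sequential models.
          offloaded = OffloadModel(model=Net(), device=torch.device("cuda"), num_slices=2)
          out = offloaded(torch.randn(8, 32).cuda())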
  26. 27 May, 2021 1 commit
  27. 14 May, 2021 1 commit
  28. 07 May, 2021 1 commit
    • [feat] experimental.nn.SyncBatchNorm: initial commit (#662) · f0a40046
      msbaines authored
      * [feat] experimental.nn.SyncBatchNorm: initial commit
      
      Fast/simple re-implementation of SyncBatchNorm.
      
      When profiling SSL Vision, I was seeing a majority of cycles spent in
      SyncBatchNorm. With this change, I see a 10% to 20% speedup on the
      model I was profiling (a usage sketch follows the timings below).
      
      When running benchmarks/experimental/sync_batchnorm.py on 8 x V100,
      I get a 6x speedup:
      
      <class 'torch.nn.modules.batchnorm.BatchNorm2d'>
      Elapsed time is  0.08709120750427246
      Elapsed time is  0.12632274627685547
      Elapsed time is  0.14095258712768555
      Elapsed time is  0.16529417037963867
      Elapsed time is  0.1419970989227295
      Elapsed time is  0.15166854858398438
      Elapsed time is  0.12000870704650879
      Elapsed time is  0.17534875869750977
      <class 'torch.nn.modules.batchnorm.SyncBatchNorm'>
      Elapsed time is  2.5087168216705322
      Elapsed time is  2.497001886367798
      Elapsed time is  2.5204885005950928
      Elapsed time is  2.526789903640747
      Elapsed time is  2.5080230236053467
      Elapsed time is  2.524489641189575
      Elapsed time is  2.513214588165283
      Elapsed time is  2.5359973907470703
      <class 'fairscale.experimental.nn.sync_batchnorm.SyncBatchNorm'>
      Elapsed time is  0.4126114845275879
      Elapsed time is  0.39051294326782227
      Elapsed time is  0.40685415267944336
      Elapsed time is  0.4159870147705078
      Elapsed time is  0.42383885383605957
      Elapsed time is  0.4080159664154053
      Elapsed time is  0.41202712059020996
      Elapsed time is  0.42400121688842773
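      A usage sketch based on the class path shown in the benchmark output above; the BatchNorm2d-style num_features constructor and an initialized torch.distributed process group are assumptions:

          import torch
          from fairscale.experimental.nn.sync_batchnorm import SyncBatchNorm

          bn = SyncBatchNorm(64).cuda()  # intended as a faster drop-in for torch.nn.SyncBatchNorm
          out = bn(torch.randn(8, 64, 32, 32, device="cuda"))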
  29. 28 Apr, 2021 1 commit
    • adding auto graph generation for distributed pipeline (#615) · bdc0581b
      Mehdi Mirzazadeh authored
      * adding auto graph generation for distributed pipeline
      
      * ignore trace.py for mypy for now, since it needs pytorch 1.8
      
      * fixing tests
      
      * simplifying graph api
      
      * remove unused debug utilities
      
      * use inspect to find argument lists
      
      * use sharded linear layer
      
      * flake8
      
      * comment
      
      * polishing
      
      * polishing
  30. 15 Apr, 2021 1 commit
  31. 13 Apr, 2021 1 commit
  32. 31 Mar, 2021 2 commits
  33. 29 Mar, 2021 1 commit
  34. 28 Mar, 2021 1 commit
  35. 19 Mar, 2021 2 commits
  36. 04 Mar, 2021 1 commit