1. 29 Jan, 2021 2 commits
  2. 28 Jan, 2021 1 commit
    • [test]: test adascale with oss (#328) · fa11d338
      Min Xu authored
      * [test]: test adascale with oss
      
      * minor fix
      
      * add a small comment
      
      * refactor: moved find_tensor_by_shape
      
      * refactor: move test golden data into its own module
      
      * refactor: simplified the train function
      
      * refactor: added comments as suggested
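      For orientation, the pairing this test exercises can be sketched as below. This is a minimal illustration, not the test's code: it assumes a torch.distributed process group is already initialized, since both OSS (which shards optimizer state across ranks) and AdaScale (which reduces gradient statistics) are distributed components.

      ```python
      # Minimal sketch (not the actual test): AdaScale wrapping an OSS optimizer.
      # Assumes torch.distributed.init_process_group() has already been called.
      import torch
      from fairscale.optim import AdaScale
      from fairscale.optim.oss import OSS

      model = torch.nn.Linear(4, 2)
      oss = OSS(model.parameters(), optim=torch.optim.SGD, lr=0.1)  # sharded SGD
      optimizer = AdaScale(oss)  # AdaScale treats OSS as a regular torch optimizer

      loss = model(torch.randn(8, 4)).sum()
      loss.backward()
      optimizer.step()
      optimizer.zero_grad()
      ```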
  3. 27 Jan, 2021 2 commits
  4. 23 Jan, 2021 1 commit
  5. 21 Jan, 2021 3 commits
  6. 20 Jan, 2021 1 commit
  7. 15 Jan, 2021 1 commit
  8. 11 Jan, 2021 1 commit
  9. 08 Jan, 2021 3 commits
  10. 05 Jan, 2021 1 commit
    • [fix] Flaky tests (#283) · 79365ee6
      Benjamin Lefaudeux authored
      * adding the pytest timeout plugin to properly root out hanging tests
      * removing redundant code and using a slightly more reasonable timeout; works on a single CUDA device
      * finding the root bug behind some of the CPU hangs: RPC init
      * propagating all the rpc init test changes to the pipe and model parallel tests
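      For reference, pytest-timeout (the plugin added here) can cap runtimes globally via pytest --timeout=60 on the command line, or per test with a marker; a brief sketch with an illustrative test name:

      ```python
      import pytest

      # Fail cleanly instead of hanging if RPC init deadlocks; 60s is illustrative.
      @pytest.mark.timeout(60)
      def test_rpc_init_does_not_hang():
          ...
      ```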
  11. 04 Jan, 2021 1 commit
    • [feat] sync adascale from internal repo, support add_param_group (#266) · 3932a1f6
      Min Xu authored
      * [feat] sync adascale from internal repo
      
      - tbd
      
      testing: tbd
      
      * updated the argument documentation of __init__
      
      * update documentation around set_num_gradients_to_accumulate
      
      * added checking code for proper API calling places
      
      * renamed internal APIs to mark them as internal
      
      * updated changelog
      
      * added support for add_param_group and its unit test
      
      * added unit test for set_num_gradients_to_accumulate
      
      * added debias_ewma unit test
      
      * fixed test_set_num_gradients_to_accumulate (need zero_grad() call)
      
      * added missing zero_grad() to test_lr_scheduler
      
      * fixed test_add_param_group with respect to optim.zero_grad()
      
      * added test_gradient_value
      
      * added test_scale_not_equal_default for scale != world_size * grad_accum
      
      * added test_unhook()
      
      * removed print statements
      
      * fixed a typo
      
      * addressed Ben's comment
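      A sketch of the add_param_group support, assuming AdaScale forwards the standard torch.optim.Optimizer.add_param_group interface; the modules and values are made up for illustration:

      ```python
      # Illustrative only: extend an AdaScale-wrapped optimizer with a new group.
      import torch
      from fairscale.optim import AdaScale

      trunk = torch.nn.Linear(4, 4)
      head = torch.nn.Linear(4, 2)

      optim = AdaScale(torch.optim.SGD(trunk.parameters(), lr=0.1))
      # Mirrors torch.optim.Optimizer.add_param_group: the head joins with its own lr.
      optim.add_param_group({"params": list(head.parameters()), "lr": 0.01})
      ```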
  12. 02 Jan, 2021 1 commit
  13. 30 Dec, 2020 1 commit
  14. 29 Dec, 2020 2 commits
  15. 28 Dec, 2020 1 commit
  16. 22 Dec, 2020 1 commit
    • [OSS] Balance the trainable params only (#262) · c386e937
      Benjamin Lefaudeux authored
      * fix, one-liner
      
      * adjust so that frozen trunks still get spread, even if this should have little consequence
      
      * removing dead code, hopeful unit test fix
      
      * now with some linting
      
      * adding a proper unit test case
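      The scenario this fix targets, with made-up module names: a model whose trunk is frozen, sharded with OSS; after the fix, only trainable parameters drive how evenly the shards are balanced across ranks.

      ```python
      # Illustrative sketch: frozen trunk, trainable head, optimizer state sharded
      # by OSS. The fix makes OSS balance its per-rank partitions using trainable
      # parameters only. Assumes an initialized torch.distributed process group.
      import torch
      from fairscale.optim.oss import OSS

      trunk = torch.nn.Linear(8, 8)
      head = torch.nn.Linear(8, 2)
      for p in trunk.parameters():
          p.requires_grad = False  # frozen: contributes no optimizer state

      params = list(trunk.parameters()) + list(head.parameters())
      oss = OSS(params, optim=torch.optim.SGD, lr=0.1)
      ```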
  17. 19 Dec, 2020 1 commit
  18. 16 Dec, 2020 1 commit
    • [feat]: AdaScale work with lr_scheduler and tests, examples (#229) · d65cd838
      Min Xu authored
      * [doc]: AdaScale example and notes
      
      * formatted notes correctly as suggested by Benjamin
      
      * added feature and unit test to make sure lr_scheduler works
      
      * update the example with lr_scheduler
      
      * fixed doc with "make html"
      
      * addressed Mike's suggestions
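      A sketch of the lr_scheduler interaction this enables, assuming AdaScale can stand in for a regular torch.optim.Optimizer when a scheduler is attached; the numbers are illustrative:

      ```python
      import torch
      from fairscale.optim import AdaScale

      model = torch.nn.Linear(4, 2)
      optim = AdaScale(torch.optim.SGD(model.parameters(), lr=0.1))
      # A stock PyTorch scheduler attached to the AdaScale-wrapped optimizer.
      sched = torch.optim.lr_scheduler.LambdaLR(optim, lr_lambda=lambda e: 0.95 ** e)

      for epoch in range(3):
          loss = model(torch.randn(8, 4)).sum()
          loss.backward()
          optim.step()      # AdaScale applies its gain on top of the scheduled lr
          optim.zero_grad()
          sched.step()
      ```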
  19. 14 Dec, 2020 1 commit
  20. 10 Dec, 2020 1 commit
  21. 06 Dec, 2020 1 commit
  22. 04 Dec, 2020 1 commit
  23. 03 Dec, 2020 1 commit
    • [feat] AdaScale: Gradient Accumulation and Add PyTest unit tests (#202) · ce5860ea
      Min Xu authored
      * added AdaScale to README
      
      * [adascale] added gradient accumulation
      
      - added gradient accumulation
      - tested with full CIFAR trainings with different values of accumulation
      and verified that the full accuracy is obtained
      - also removed the patch optimize flag until we need it
      
      * [adascale] adding pytest
      
      - added basic and ddp tests and grad_accum
      - closes #195
      
      * added changelog
      
      * added ddp grad_accum test
      
      * moved ddp and non-ddp tests into separate files
      
      * added checkpoint test
      
      * more doc
      
      * addressed Mike's comments
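      The gradient accumulation flow added here, in a hedged sketch; the num_gradients_to_accumulate name is an assumption based on the method names in the #266 entry above, and the batch math is illustrative:

      ```python
      # Illustrative: tell AdaScale that two backward passes feed each optimizer
      # step, so its gain estimate matches the effective (accumulated) batch size.
      import torch
      from fairscale.optim import AdaScale

      model = torch.nn.Linear(4, 2)
      optim = AdaScale(torch.optim.SGD(model.parameters(), lr=0.1),
                       num_gradients_to_accumulate=2)

      for i in range(4):
          loss = model(torch.randn(8, 4)).sum()
          loss.backward()
          if (i + 1) % 2 == 0:  # step once per two accumulated gradients
              optim.step()
              optim.zero_grad()
      ```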
  24. 01 Dec, 2020 2 commits
  25. 21 Nov, 2020 1 commit
    • [feat] ShardedDataParallel with autoreduce (#157) · ad933b34
      Benjamin Lefaudeux authored
      * rewrite using autograd and Variable execution queue to make the reduce automatic
      * share buckets with OSS to remove duplication
      * some speed is likely still left on the table, since the speedup vs. bucketing does not match expectations; could be a follow-up
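      For orientation, the usual pairing this introduces, sketched under the assumption of an initialized process group: OSS shards the optimizer state, and ShardedDataParallel uses autograd hooks to reduce each gradient to its owning rank as backward proceeds.

      ```python
      # Hedged sketch, not verbatim API docs: assumes torch.distributed is set up.
      import torch
      from fairscale.nn.data_parallel import ShardedDataParallel
      from fairscale.optim.oss import OSS

      model = torch.nn.Linear(4, 2)
      oss = OSS(model.parameters(), optim=torch.optim.SGD, lr=0.1)
      ddp = ShardedDataParallel(model, oss)  # shares buckets with OSS per the notes

      loss = ddp(torch.randn(8, 4)).sum()
      loss.backward()  # gradients are reduced to shard owners automatically
      oss.step()
      ```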
  26. 18 Nov, 2020 1 commit
  27. 16 Nov, 2020 1 commit
  28. 11 Nov, 2020 2 commits
  29. 10 Nov, 2020 1 commit
    • Single-process control via PipeRPCWrapper (#156) · 5d4f50fb
      Tom Birch authored
      Adds support for:
      * Reused layers (e.g. for weight sharing)
      * Lazily-constructed layers
      * Single-process control via PipeRPCWrapper
      * PipelineStyle.AsyncSchedule, which lays the foundation for asynchronous pipeline work by introducing an event loop for each rank/worker to process either activations or gradients as they arrive
      
      Also added examples for multi-process and PipeRPCWrapper
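      A rough sketch of the single-process control flow described above. The constructor arguments are assumptions modeled on fairscale's Pipe-style API (a Sequential plus a per-stage balance), not a verified signature; treat every name here as hypothetical except PipeRPCWrapper itself.

      ```python
      # Assumption-heavy sketch: one controlling rank owns the whole pipeline and
      # drives the remote stages over torch.distributed.rpc, rather than every
      # process running its own copy of the training loop.
      import torch
      import torch.nn as nn
      from fairscale.nn import PipeRPCWrapper  # import path assumed

      model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 2))
      pipe = PipeRPCWrapper(model, balance=[2, 1])  # stage 0: 2 layers, stage 1: 1

      out = pipe(torch.randn(8, 4))  # only the controlling rank calls forward
      out.sum().backward()
      ```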
  30. 06 Nov, 2020 1 commit
  31. 30 Oct, 2020 1 commit