1. 17 Sep, 2020 6 commits
  2. 16 Sep, 2020 2 commits
  3. 15 Sep, 2020 2 commits
  4. 14 Sep, 2020 1 commit
  5. 12 Sep, 2020 1 commit
  6. 11 Sep, 2020 1 commit
  7. 10 Sep, 2020 3 commits
  8. 09 Sep, 2020 7 commits
  9. 08 Sep, 2020 1 commit
    • [feat] OSS: Sync all attributes (#67) · 5a268b25
      Benjamin Lefaudeux authored
      Make sure that all attributes (not just the learning rate) stay in sync between OSS.param_groups and the actual wrapped optimizer. Some frameworks allow any attribute to be altered on a schedule, which can be useful depending on the optimizer, so the keys need to be supported generically (not just "lr"). Failing to sync these attributes is a worst-case scenario, since the adjustments are silently not propagated; this change fixes that.
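The sync described above can be sketched in plain Python. This is a hypothetical, simplified helper (names are illustrative, not fairscale's actual implementation): every hyperparameter key is copied from the wrapper's param_groups into the wrapped optimizer's groups, skipping only the `params` tensors themselves.

```python
def sync_param_groups(source_groups, dest_groups):
    # Copy every scheduled attribute ("lr", "momentum", "betas", ...)
    # from the wrapper's param_groups into the wrapped optimizer's
    # groups, leaving the "params" entry untouched.
    for src, dst in zip(source_groups, dest_groups):
        for key, value in src.items():
            if key != "params":
                dst[key] = value

# A scheduler may have updated both lr and momentum on the wrapper side:
wrapper_groups = [{"params": ["p0", "p1"], "lr": 0.01, "momentum": 0.95}]
inner_groups = [{"params": ["p0"], "lr": 0.1, "momentum": 0.9}]
sync_param_groups(wrapper_groups, inner_groups)
```

Because the copy is generic over keys, any attribute a framework schedules (not just `"lr"`) reaches the wrapped optimizer.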
  10. 04 Sep, 2020 1 commit
  11. 03 Sep, 2020 3 commits
    • [feat] Add a memory usage regression test to the OSS benchmark (#62) · ee38e1e0
      Benjamin Lefaudeux authored
      * Aligning the optimizer state dict with what PyTorch expects
      
      * Adding a check on the dict keys, ensure that `state` and `param_groups` are there
      
      * after installing the specific isort and black versions, a one-liner to please the linter
      
      * Adding some measurement of the memory consumption while training + checkpointing
      
      * mandatory lintfix commit
      
      * reset the memory-use counter at the beginning of training, in case two runs happen in a row
      
      * move reset stats call, hotfix
      
      * move the optimizer to rmsprop, more stateful and still used in CV
      
      * trying to figure out a SIGSEGV in CircleCI
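The measurement above depends on resetting the peak counter between runs so back-to-back benchmarks do not pollute each other. A minimal CPU-side sketch of the same idea, using Python's `tracemalloc` as a stand-in for CUDA memory statistics (the actual benchmark uses torch's GPU counters):

```python
import tracemalloc

tracemalloc.start()
tracemalloc.reset_peak()  # the "reset stats" hotfix: clear any earlier peak
buf = bytearray(1_000_000)  # stand-in for one training run's allocations
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

# `peak` now reflects only this run, so a regression test can compare it
# against a stored reference value.
```

Without the reset, a second run in the same process would inherit the first run's peak and the regression check would be meaningless.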
    • Add grad scaler (#48) · b6a5e634
      Jun Ru Anderson authored
      
      
      Add GradScaler to Fairscale, subclassing PyTorch's GradScaler, and use it in the pipe benchmark. Gradient scaling is not strictly needed there, but the benchmark serves as a good example for larger models that do require gradient scaling in order to converge.
      Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
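The mechanics behind gradient scaling can be sketched without torch. This is a hypothetical, simplified scaler, not FairScale's actual GradScaler subclass (real scalers also skip steps on overflow and only grow the scale after a run of overflow-free steps):

```python
class LossScalerSketch:
    """Scale the loss up before backward so small FP16 gradients do not
    underflow to zero; unscale before the optimizer step; back off the
    scale when an overflow (inf/nan gradient) is detected."""

    def __init__(self, init_scale=2.0 ** 16, backoff=0.5, growth=2.0):
        self.scale = init_scale
        self.backoff = backoff
        self.growth = growth

    def scale_loss(self, loss):
        return loss * self.scale

    def unscale_grad(self, grad):
        return grad / self.scale

    def update(self, found_inf):
        # shrink the scale after an overflow, grow it otherwise
        self.scale *= self.backoff if found_inf else self.growth

scaler = LossScalerSketch(init_scale=4.0)
```

Scaling is a no-op mathematically (the gradients are divided by the same factor before the step), which is why it is safe to use even where, as in the pipe benchmark, it is not strictly required.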
    • [fix] OSS pytorch-compliant state dict (#61) · 1d1d15ea
      Benjamin Lefaudeux authored
      * Aligning the optimizer state dict with what PyTorch expects
      
      * Adding a check on the dict keys, ensure that `state` and `param_groups` are there
      
      * after installing the specific isort and black versions, a one-liner to please the linter
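The key check described above can be expressed as a small sketch (hypothetical helper name; the format itself is PyTorch's standard optimizer state dict, which exposes exactly two top-level keys):

```python
def is_pytorch_compliant(state_dict):
    # A PyTorch optimizer.state_dict() has exactly two top-level keys:
    # "state" maps parameter ids to per-parameter buffers, and
    # "param_groups" lists the hyperparameters of each group.
    return set(state_dict.keys()) == {"state", "param_groups"}

sd = {
    "state": {0: {"momentum_buffer": None}},
    "param_groups": [{"lr": 0.1, "params": [0]}],
}
```

Aligning OSS with this layout is what lets its state dict round-trip through code written against plain PyTorch optimizers.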
  12. 28 Aug, 2020 4 commits
    • [chore] create v0.0.2 (#59) · 4488e17c
      msbaines authored
    • [test] specify chunks for pipe/transformer benchmark (#52) · d1d74413
      Jun Ru Anderson authored
      
      
      * specify chunks for pipe/transformer benchmark
      
      Set chunks equal to len(balance) for the pipe/transformer benchmark. The words-per-second and memory-usage checks will be updated in the next commit (must be tested on CircleCI to find appropriate values)
      
      * change benchmark words per second and memory usage
      
      Did six runs for words per second, with results: 9144.40, 9163.91, 9993.01, 9082.82, 9155.09, 9000.67.
      Peak allocated bytes per device (which do not change between runs) were 193206272, 645632, 562688, 92688384 for devices 0, 1, 2 and 3, respectively
      
      * increase batch size
      
      The batch size was small enough that GPU compute was not the bottleneck, which slowed training and in particular made using more chunks slower. Increasing the batch size therefore increased training speed
      
      * update benchmark numbers
      
      Ran six times, with words per second of 36917.44, 36797.65, 37006.03, 36872.84, 37129.31, 37003.31, and peak allocated bytes of 4061909504, 4050944, 10427392, 2031824896 for devices 0, 1, 2 and 3, respectively.
      Co-authored-by: Jun Ru Anderson <andersonic@fb.com>
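Setting chunks equal to len(balance) gives one micro-batch per pipeline stage. A minimal sketch of the micro-batching this parameter controls (a hypothetical splitter, not torch pipe's internals):

```python
def split_into_chunks(batch, chunks):
    # A pipelined model splits each mini-batch into `chunks`
    # micro-batches so that successive stages can work concurrently
    # instead of waiting for the whole batch.
    size = len(batch) // chunks
    return [batch[i * size:(i + 1) * size] for i in range(chunks)]

balance = [6, 6, 6, 6]   # hypothetical layer split over 4 devices
chunks = len(balance)    # the commit's choice: one chunk per stage
micro_batches = split_into_chunks(list(range(8)), chunks)
```

More chunks improve pipeline utilization but shrink each micro-batch, which is why the too-small batch size above made higher chunk counts slower.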
    • [fix] optim/oss: work correctly with LRScheduler (#58) · ab32cb7d
      msbaines authored
      * [fix] optim/oss: work correctly with LRScheduler
      
      Sync lr before every step and before consolidate.
    • Min Xu's avatar
      [fix] fix eval for oss_ddp (#55) · 8c8eb8e8
      Min Xu authored
      - added train(mode) method to be aware of eval mode
      8c8eb8e8
  13. 27 Aug, 2020 4 commits
  14. 22 Aug, 2020 1 commit
  15. 21 Aug, 2020 2 commits
    • Benjamin Lefaudeux's avatar
      [feat] Simple macro OSS benchmark (#47) · 46c3776b
      Benjamin Lefaudeux authored
      
      
      * initial commit, dummy training loop, pure pytorch but not DDP
      
      * probably slightly broken, but rough DDP benchmark run
      
      * adding the torchvision requirement for testing
      
      * brainfart
      
      * reduce the loss, do something slightly distributed
      
      * Some cleanup, distributing the training on two GPUs
      
      * some cleanup + adding a vanilla run, still not good to go
      
      * less silly defaults, gtg for a start I think
      
      * smaller batch to fit the smaller gpus used in the circleci rigs
      
      * Adding some options for the benchmark, and regression testing
      
      * [test] set torch seed for Adam tests (#49)
      
      Set the torch seed for tests. xfail mixed precision and memory-efficient mixed-precision state_dict tests due to their states being cast to FP16 and back to FP32 during load_state_dict.
      Co-authored-by: default avatarJun Ru Anderson <andersonic@fb.com>
      
      * linting, I really need to automate this isort insanity
      Co-authored-by: default avatarJun Ru Anderson <33384298+andersonic@users.noreply.github.com>
      Co-authored-by: default avatarJun Ru Anderson <andersonic@fb.com>
      46c3776b
    • Jun Ru Anderson's avatar
      [test] set torch seed for Adam tests (#49) · 0e8c2a96
      Jun Ru Anderson authored
      
      
      Set the torch seed for tests. xfail mixed precision and memory-efficient mixed-precision state_dict tests due to their states being cast to FP16 and back to FP32 during load_state_dict.
      Co-authored-by: default avatarJun Ru Anderson <andersonic@fb.com>
      0e8c2a96
  16. 20 Aug, 2020 1 commit