1. 15 Jan, 2019 1 commit
  2. 14 Jan, 2019 2 commits
  3. 10 Jan, 2019 1 commit
  4. 09 Jan, 2019 2 commits
  5. 07 Jan, 2019 1 commit
  6. 05 Jan, 2019 3 commits
  7. 28 Dec, 2018 3 commits
  8. 26 Dec, 2018 2 commits
  9. 24 Dec, 2018 2 commits
    • Improve memory efficiency of FP16 optimization (#404) · 03a57dec
      Myle Ott authored
      Summary:
      Previously when training with --fp16, we stored a copy of the model parameters in FP32 for optimization, which consumed a lot of memory. An alternative is to just do the conversions to FP32 on the fly, which allows the caching allocator to reuse/save some memory.
      
      This reduces peak memory usage by ~20% with a negligible reduction in training speed (~2% slower) when training a big transformer on 8 GPUs on wmt en-de with --update-freq=16.
      
      This does not affect convergence, i.e., models will train exactly as they did before.
      Pull Request resolved: https://github.com/pytorch/fairseq/pull/404
      
      Differential Revision: D13394376
      
      Pulled By: myleott
      
      fbshipit-source-id: 2b9f808548df4782110513c9cfc9f7c6159bcbbf
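The approach described in the commit above can be sketched roughly as follows: instead of keeping a persistent FP32 master copy of every parameter, build temporary FP32 tensors at update time and copy the result back into the FP16 weights, so the temporaries are freed immediately and the caching allocator can reuse the memory. This is a minimal illustration (plain SGD, no momentum), not fairseq's actual FP16 optimizer; `fp16_sgd_step` and its arguments are hypothetical names.

```python
import torch

def fp16_sgd_step(params, lr, loss_scale):
    # Minimal sketch: convert FP16 grads/params to FP32 on the fly for the
    # update instead of maintaining a persistent FP32 copy of every parameter.
    for p in params:
        if p.grad is None:
            continue
        grad32 = p.grad.detach().float().div_(loss_scale)  # unscale in FP32
        param32 = p.detach().float()                        # temporary FP32 view
        param32.add_(grad32, alpha=-lr)                     # plain SGD update
        p.data.copy_(param32.half())                        # write back as FP16
        # grad32/param32 go out of scope here, so their memory can be reused
```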
    • Add BufferedIterator (#419) · 0f833526
      Myle Ott authored
      Summary:
      This improves performance for datasets that load data lazily. Enabled by default since it shouldn't compromise performance for non-lazy datasets.
      Pull Request resolved: https://github.com/pytorch/fairseq/pull/419
      
      Differential Revision: D13546585
      
      Pulled By: myleott
      
      fbshipit-source-id: f6152e2047291b0d68cd7506cd772b0caafe95be
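A minimal sketch of the buffered-iterator idea from the commit above: a background thread prefetches items from the wrapped (possibly lazily loading) iterable into a bounded queue, so the consumer rarely blocks on I/O. This is illustrative only and is not fairseq's actual BufferedIterator.

```python
import queue
import threading

class BufferedIterator:
    """Sketch: prefetch items from a (possibly lazy) iterable on a background
    thread so the consuming loop rarely waits on disk I/O."""

    _END = object()  # sentinel marking exhaustion of the source iterable

    def __init__(self, iterable, buffer_size=100):
        self._queue = queue.Queue(maxsize=buffer_size)
        self._thread = threading.Thread(
            target=self._fill, args=(iter(iterable),), daemon=True)
        self._thread.start()

    def _fill(self, it):
        for item in it:
            self._queue.put(item)      # blocks when the buffer is full
        self._queue.put(self._END)

    def __iter__(self):
        return self

    def __next__(self):
        item = self._queue.get()
        if item is self._END:
            raise StopIteration
        return item
```

Usage would look like `for batch in BufferedIterator(lazy_batches): ...`, which should be harmless for non-lazy datasets as well.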
  10. 18 Dec, 2018 1 commit
    • data per gpu change · 9ca82a0e
      Haoran Li authored
      Summary: Avoid loading the entire dataset on each GPU, to reduce the memory footprint
      
      Reviewed By: rutyrinott
      
      Differential Revision: D13163548
      
      fbshipit-source-id: 4ba717c8021ba5723d02225bae5782e2c3a18640
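One way to read the summary above is that each GPU should only materialize the shard of the data it will actually use, rather than a full copy of the dataset. A hedged sketch of that kind of index sharding by distributed rank; `shard_indices` is a hypothetical helper, not the function added in this diff.

```python
def shard_indices(num_examples, rank, world_size):
    # Illustrative: each worker keeps only every world_size-th example index,
    # so no single GPU has to hold the full dataset in memory.
    return list(range(rank, num_examples, world_size))

# e.g. with 4 GPUs, rank 1 would load examples 1, 5, 9, ...
# indices = shard_indices(num_examples=1000, rank=1, world_size=4)
```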
  11. 11 Dec, 2018 1 commit
  12. 08 Dec, 2018 1 commit
  13. 07 Dec, 2018 2 commits
    • Add --fp16-scale-tolerance (#397) · 03ef3ab8
      Myle Ott authored
      Summary:
      Let's only decrease the loss scale if a large enough percentage of batches overflow.
      Pull Request resolved: https://github.com/pytorch/fairseq/pull/397
      
      Differential Revision: D13355159
      
      Pulled By: myleott
      
      fbshipit-source-id: e17dde73d34a639519b4348c013fdd19d2b314e6
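A rough sketch of the tolerance behavior this flag describes: rather than backing off the loss scale on every overflowing batch, track the fraction of batches that overflowed since the last rescale and only decrease the scale once that fraction reaches the tolerance. The class below is illustrative; names and default values are assumptions, not fairseq's exact implementation.

```python
class DynamicLossScaler:
    """Sketch of a dynamic loss scaler with an overflow-rate tolerance."""

    def __init__(self, init_scale=128.0, scale_factor=2.0,
                 scale_window=2000, tolerance=0.05):
        self.loss_scale = init_scale
        self.scale_factor = scale_factor
        self.scale_window = scale_window      # steps between scale increases
        self.tolerance = tolerance            # max tolerated overflow fraction
        self._iter = 0
        self._last_overflow_iter = -1
        self._last_rescale_iter = -1
        self._overflows_since_rescale = 0

    def update_scale(self, overflow):
        if overflow:
            self._last_overflow_iter = self._iter
            self._overflows_since_rescale += 1
            pct_overflow = (self._overflows_since_rescale /
                            float(self._iter - self._last_rescale_iter))
            if pct_overflow >= self.tolerance:
                # enough batches overflowed: back off the loss scale
                self.loss_scale /= self.scale_factor
                self._last_rescale_iter = self._iter
                self._overflows_since_rescale = 0
        elif (self._iter - self._last_overflow_iter) % self.scale_window == 0:
            # a long stretch without overflow: try a larger scale again
            self.loss_scale *= self.scale_factor
            self._last_rescale_iter = self._iter
        self._iter += 1
```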
    • Take a dummy train step under OOM to keep multiprocessing in sync · 6c006a34
      Halil Akin authored
      Summary: This is not a guaranteed solution (processes may still get out of sync if the OOM happens after an all_gather/all_reduce has already been done), but it should still make multiprocessing training more robust in practice, since we usually OOM early enough.
      
      Reviewed By: myleott
      
      Differential Revision: D13086018
      
      fbshipit-source-id: feb1b01c2eb8818797cfdabc0faac8056ba1b4ee
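A hedged sketch of the dummy-step idea: when a worker hits a CUDA OOM, it clears its gradients and runs a backward pass on a small dummy batch whose loss is multiplied by zero, so it still participates in the same gradient all_reduce as the other workers without contributing anything. `loss_fn` and `dummy_batch` are illustrative names, not the exact fairseq API.

```python
import torch

def safe_train_step(model, loss_fn, batch, dummy_batch):
    # Illustrative: if this rank OOMs, fall back to a tiny dummy batch so it
    # still executes the same number of backward passes (and hence the same
    # gradient all_reduce calls) as every other rank.
    try:
        loss = loss_fn(model, batch)
        loss.backward()
        return loss.item()
    except RuntimeError as e:
        if 'out of memory' not in str(e):
            raise
        print('| WARNING: ran out of memory, running dummy batch to stay in sync')
        model.zero_grad()
        torch.cuda.empty_cache()
        loss = loss_fn(model, dummy_batch) * 0.0  # zero gradient contribution
        loss.backward()
        return 0.0
```

As the summary notes, this only helps when the OOM occurs before that step's collective calls; if a rank OOMs after an all_gather/all_reduce has already run, the workers can still get out of sync.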
  14. 06 Dec, 2018 4 commits
  15. 04 Dec, 2018 1 commit
  16. 30 Nov, 2018 1 commit
  17. 29 Nov, 2018 2 commits
  18. 27 Nov, 2018 2 commits
  19. 26 Nov, 2018 2 commits
  20. 19 Nov, 2018 1 commit
    • Protect against failures in case of OOMs · a442244d
      Halil Akin authored
      Summary: Fixing some distributed failures that happen when OOMs are observed.
      
      Reviewed By: myleott
      
      Differential Revision: D13121054
      
      fbshipit-source-id: f71a0a695332acbaa1797e89887b8b7c7ddaa727
  21. 18 Nov, 2018 2 commits
  22. 17 Nov, 2018 1 commit
  23. 16 Nov, 2018 1 commit
    • make dictionary optional · a4e34985
      Haoran Li authored
      Reviewed By: jingfeidu
      
      Differential Revision: D13104360
      
      fbshipit-source-id: 9636f5ee2721818f98b33af559fa24292534a72f
  24. 14 Nov, 2018 1 commit