"vscode:/vscode.git/clone" did not exist on "39f6abe947f7fcb008c7a1f823f815f47d6a76bf"
  1. 07 Jan, 2019 1 commit
  2. 05 Jan, 2019 3 commits
  3. 28 Dec, 2018 3 commits
  4. 26 Dec, 2018 2 commits
  5. 24 Dec, 2018 2 commits
    • Improve memory efficiency of FP16 optimization (#404) · 03a57dec
      Myle Ott authored
      Summary:
      Previously when training with --fp16, we stored a copy of the model parameters in FP32 for optimization, which consumed a lot of memory. An alternative is to just do the conversions to FP32 on the fly, which allows the caching allocator to reuse/save some memory.
      
      This reduces peak memory usage by ~20% with a negligible reduction in training speed (~2% slower) when training a big transformer on 8 GPUs on wmt en-de with --update-freq=16.
      
      This does not affect convergence, i.e., models will train exactly as they did before.
      Pull Request resolved: https://github.com/pytorch/fairseq/pull/404
      
      Differential Revision: D13394376
      
      Pulled By: myleott
      
      fbshipit-source-id: 2b9f808548df4782110513c9cfc9f7c6159bcbbf
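
      A minimal sketch of the on-the-fly conversion described above (illustrative, not fairseq's actual FP16 optimizer; a plain SGD update and PyTorch are assumed):

      import torch

      def fp16_sgd_step(params, lr, loss_scale):
          # Update FP16 params without a persistent FP32 master copy:
          # materialize FP32 tensors only for the duration of the update,
          # so the caching allocator can reuse that memory across parameters.
          for p in params:
              if p.grad is None:
                  continue
              p32 = p.data.float()                        # temporary FP32 copy of the param
              g32 = p.grad.data.float().div_(loss_scale)  # unscale the gradient in FP32
              p32.add_(g32, alpha=-lr)                    # SGD update in FP32
              p.data.copy_(p32)                           # cast the result back to FP16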
    • Add BufferedIterator (#419) · 0f833526
      Myle Ott authored
      Summary:
      This improves performance for datasets that load data lazily. Enabled by default since it shouldn't compromise performance for non-lazy datasets.
      Pull Request resolved: https://github.com/pytorch/fairseq/pull/419
      
      Differential Revision: D13546585
      
      Pulled By: myleott
      
      fbshipit-source-id: f6152e2047291b0d68cd7506cd772b0caafe95be
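
      A minimal sketch of the buffering idea (illustrative, not the actual fairseq class): a background thread prefetches items from a possibly-lazy iterable into a bounded queue, so the consumer rarely blocks on data loading.

      import queue
      import threading

      _SENTINEL = object()

      class BufferedIterator:
          def __init__(self, iterable, buffer_size=8):
              self._queue = queue.Queue(maxsize=buffer_size)
              self._thread = threading.Thread(
                  target=self._fill, args=(iter(iterable),), daemon=True)
              self._thread.start()

          def _fill(self, it):
              for item in it:
                  self._queue.put(item)    # blocks while the buffer is full
              self._queue.put(_SENTINEL)   # signal exhaustion

          def __iter__(self):
              return self

          def __next__(self):
              item = self._queue.get()
              if item is _SENTINEL:
                  raise StopIteration
              return item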
  6. 18 Dec, 2018 1 commit
    • data per gpu change · 9ca82a0e
      Haoran Li authored
      Summary: Avoid loading the entire dataset on each GPU, to reduce memory footprint.
      
      Reviewed By: rutyrinott
      
      Differential Revision: D13163548
      
      fbshipit-source-id: 4ba717c8021ba5723d02225bae5782e2c3a18640
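
      A minimal sketch of the idea (names are illustrative): each rank keeps only the slice of example indices it will actually train on, instead of materializing the whole dataset.

      def shard_indices(num_examples, rank, world_size):
          # Rank r touches examples r, r + world_size, r + 2 * world_size, ...
          # and never loads the rest into memory.
          return range(rank, num_examples, world_size)

      # e.g. with 4 GPUs, rank 1 loads examples 1, 5, 9, ...
      assert list(shard_indices(10, rank=1, world_size=4)) == [1, 5, 9]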
  7. 11 Dec, 2018 1 commit
  8. 08 Dec, 2018 1 commit
  9. 07 Dec, 2018 2 commits
    • Add --fp16-scale-tolerance (#397) · 03ef3ab8
      Myle Ott authored
      Summary:
      Let's only decrease the loss scale if a large enough percentage of batches overflow.
      Pull Request resolved: https://github.com/pytorch/fairseq/pull/397
      
      Differential Revision: D13355159
      
      Pulled By: myleott
      
      fbshipit-source-id: e17dde73d34a639519b4348c013fdd19d2b314e6
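
      A minimal sketch of tolerance-based loss scaling (illustrative names and defaults): instead of halving the scale on every overflow, back off only once the fraction of overflowing updates since the last rescale exceeds the tolerance.

      class ToleranceLossScaler:
          def __init__(self, init_scale=128.0, scale_window=2000, tolerance=0.05):
              self.scale = init_scale
              self.scale_window = scale_window    # stable updates before growing the scale
              self.tolerance = tolerance          # max tolerated fraction of overflows
              self._updates = 0
              self._overflows = 0

          def update(self, overflowed):
              self._updates += 1
              if overflowed:
                  self._overflows += 1
                  if self._overflows / self._updates > self.tolerance:
                      self.scale /= 2.0           # too many overflows: back off
                      self._updates = self._overflows = 0
              elif self._updates >= self.scale_window:
                  self.scale *= 2.0               # long stable stretch: grow the scale
                  self._updates = self._overflows = 0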
    • Take a dummy train step under OOM to keep multiprocessing in sync · 6c006a34
      Halil Akin authored
      Summary: This is not a guaranteed solution (processes may still get out of sync if an OOM happens after an all_gather/all_reduce has already been done), but it should still make multiprocessing training more robust in practice, since it seems we usually OOM early enough.
      
      Reviewed By: myleott
      
      Differential Revision: D13086018
      
      fbshipit-source-id: feb1b01c2eb8818797cfdabc0faac8056ba1b4ee
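
      A minimal sketch of the recovery path (illustrative; assumes the model returns a scalar loss): a rank that OOMs still runs a zero-weight forward/backward on a small pre-built dummy batch, so every rank performs the same number of backward passes and the collective calls stay aligned.

      import torch

      def train_step(model, batch, dummy_batch, optimizer):
          try:
              loss = model(**batch)
          except RuntimeError as e:
              if 'out of memory' not in str(e):
                  raise
              torch.cuda.empty_cache()            # free whatever we can
              optimizer.zero_grad()
              loss = model(**dummy_batch) * 0.0   # dummy step: gradients are all zero
          loss.backward()                         # keeps backward counts in sync
          optimizer.step()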
  10. 06 Dec, 2018 4 commits
  11. 04 Dec, 2018 1 commit
  12. 30 Nov, 2018 1 commit
  13. 29 Nov, 2018 2 commits
  14. 27 Nov, 2018 2 commits
  15. 26 Nov, 2018 2 commits
  16. 19 Nov, 2018 1 commit
    • Protect against failures in case of OOMs · a442244d
      Halil Akin authored
      Summary: Fixing some distributed failures that happen when OOMs are observed.
      
      Reviewed By: myleott
      
      Differential Revision: D13121054
      
      fbshipit-source-id: f71a0a695332acbaa1797e89887b8b7c7ddaa727
  17. 18 Nov, 2018 2 commits
  18. 17 Nov, 2018 1 commit
  19. 16 Nov, 2018 1 commit
    • make dictionary optional · a4e34985
      Haoran Li authored
      Reviewed By: jingfeidu
      
      Differential Revision: D13104360
      
      fbshipit-source-id: 9636f5ee2721818f98b33af559fa24292534a72f
  20. 14 Nov, 2018 1 commit
  21. 13 Nov, 2018 1 commit
  22. 10 Nov, 2018 1 commit
    • pipeline for LM training · 880e7cd4
      Ruty Rinott authored
      Summary:
      Step 2 of the pipeline for LM training. It assumes tokenized text data as input, splits it into train/validation/test, and runs binarization
      (step a_ii in https://fb.quip.com/kazzAxvZHBj9)
      
      Reviewed By: borguz
      
      Differential Revision: D10454705
      
      fbshipit-source-id: 74e8679041f5507c4e404c1b719547c2ae9ed983
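
      A minimal sketch of the split-and-binarize step (fractions and names are illustrative):

      def split_corpus(lines, valid_frac=0.01, test_frac=0.01):
          # Split tokenized lines into train/validation/test portions.
          n_valid = int(len(lines) * valid_frac)
          n_test = int(len(lines) * test_frac)
          valid = lines[:n_valid]
          test = lines[n_valid:n_valid + n_test]
          train = lines[n_valid + n_test:]
          return train, valid, test

      def binarize(lines, vocab):
          # "Binarization": map each token to its integer id.
          return [[vocab[tok] for tok in line.split()] for line in lines]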
  23. 08 Nov, 2018 1 commit
    • Fix error when training multilingual_translation task with multi-GPU · 189fcabf
      Peng-Jen Chen authored
      Summary:
      D10052908 introduced the multilingual_translation task, but it raises an exception when training with multiple GPUs: P60202593
      
      With Myle's help, we found that the dummy batch data type was improperly handled, which caused optimizer.backward() to be executed a different number of times across GPUs.
      
      Reviewed By: xianxl
      
      Differential Revision: D12964263
      
      fbshipit-source-id: 4991039030bf373f0c484e131acc4736487be4d8
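
      A minimal sketch of a correctly-typed dummy batch (illustrative helper): torch.zeros_like preserves both shape and dtype, so a rank can run a forward/backward on the dummy and join the gradient all-reduce the same number of times as every other rank.

      import torch

      def make_dummy_batch(sample):
          # Mirror the structure of a real sample, preserving tensor dtypes.
          if torch.is_tensor(sample):
              return torch.zeros_like(sample)
          if isinstance(sample, dict):
              return {k: make_dummy_batch(v) for k, v in sample.items()}
          if isinstance(sample, (list, tuple)):
              return type(sample)(make_dummy_batch(v) for v in sample)
          return sample    # leave scalars and strings untouched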
  24. 07 Nov, 2018 2 commits
    • Merge internal changes · 8eb232ce
      Myle Ott authored
      Summary: Pull Request resolved: https://github.com/pytorch/fairseq/pull/352
      
      Differential Revision: D12956930
      
      Pulled By: myleott
      
      fbshipit-source-id: 39334a79544bac570feb04be9103269d7c1563f9
    • Support BPE end of word marker suffix in fairseq noising module · 2b13f3c0
      Liezl Puzon authored
      Summary:
      There are 2 ways to implement BPE:
      1. use a continuation marker suffix to indicate that there is at least one more subtoken left in the word
      2. use an end-of-word marker suffix to indicate that there are no more subtokens left in the word
      
      This adds logic to account for either kind of BPE marker suffix, along with a corresponding test. I also refactored the test setup to reduce the number of boolean args when setting up test data.
      
      Reviewed By: xianxl
      
      Differential Revision: D12919428
      
      fbshipit-source-id: 405e9f346dce6e736c1305288721dfc7b63e872a
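
      A minimal sketch of the two conventions (tokens and helper name are illustrative):

      # 1. continuation marker: suffix on every subtoken except the last
      #    "hello" -> ["hel@@", "lo"]
      # 2. end-of-word marker: suffix only on the last subtoken
      #    "hello" -> ["hel", "lo</w>"]

      def is_word_final(subtoken, marker, style):
          # Decide whether this subtoken closes a word under either convention.
          if style == "continuation":
              return not subtoken.endswith(marker)   # no marker => the word is done
          return subtoken.endswith(marker)           # end-of-word marker present

      assert is_word_final("lo", "@@", "continuation")
      assert is_word_final("lo</w>", "</w>", "end_of_word")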
  25. 02 Nov, 2018 1 commit