  12. 02 May, 2019 2 commits
    • Make learned positional embedding optional · 39264559
      Peng-Jen Chen authored
      Summary:
      - Add a learned-positional-embedding binary flag to the masked LM model.
      - Add a base arch config for the masked LM model that sets all binary parameters to False. Otherwise some of the binary flags (e.g. `encoder_learned_pos`) would always be overridden by the config in `xlm_architecture`.
      
      Reviewed By: liezl200
      
      Differential Revision: D15054487
      
      fbshipit-source-id: d78827f352b9160a89c9dc4f45b9fce15a2f234d
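The override described in the summary comes from how fairseq-style architecture functions fill in defaults with `getattr`. A minimal, self-contained sketch of the idea (the names `Args`, `base_lm_architecture`, and the single flag shown are illustrative, not the exact code from this commit):

```python
# Illustrative sketch of fairseq-style architecture configs. Each
# architecture function fills in defaults with getattr, so it only
# sets a flag the user has not already set on the command line.

class Args:
    """Stand-in for the parsed argparse namespace."""
    pass

def base_lm_architecture(args):
    # Base config: every binary flag defaults to False.
    args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False)

def xlm_architecture(args):
    # XLM preset: learned positional embeddings on by default.
    args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True)

# Selecting the base arch gives non-learned positions by default...
a = Args()
base_lm_architecture(a)
assert a.encoder_learned_pos is False

# ...whereas without a base arch, xlm_architecture's True default
# always wins whenever the user does not pass the flag explicitly.
b = Args()
xlm_architecture(b)
assert b.encoder_learned_pos is True
```

Because `getattr` only supplies a default when the attribute is absent, adding a base architecture whose defaults are all False gives users a preset where the flag stays off.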
    • Move distributed_init into DistributedFairseqModel (#687) · 34726d56
      Myle Ott authored
      Summary:
      This should make rendezvous happen as lazily as possible.
      Pull Request resolved: https://github.com/pytorch/fairseq/pull/687
      
      Differential Revision: D15151145
      
      Pulled By: myleott
      
      fbshipit-source-id: d70816a85414c5d509a6b12e2b339b4736db2c88