  1. 18 Jun, 2025 1 commit
  2. 26 Nov, 2019 1 commit
  3. 10 Nov, 2019 1 commit
  4. 09 Nov, 2019 1 commit
  5. 05 Nov, 2019 1 commit
    • XLM-R code and model release (#900) · e23e5eaa
      ngoyal2707 authored
      Summary:
      TODO:
      1) Need to update the bibtex entry
      2) Need to upload the models, spm_vocab and dict.txt to a public S3 location.
      
      For Future:
      
      1) I will probably add instructions for fine-tuning on XNLI, NER, POS tagging, etc., but there is currently no timeline for that.
      Pull Request resolved: https://github.com/fairinternal/fairseq-py/pull/900
      
      Reviewed By: myleott
      
      Differential Revision: D18333076
      
      Pulled By: myleott
      
      fbshipit-source-id: 3f3d3716fcc41c78d2dd4525f60b519abbd0459c
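      For reference, a minimal sketch of loading the released XLM-R model through torch.hub and extracting features. It assumes the `xlmr.large` checkpoint has been published as described in the TODO above; the hub name and interface may differ in the final release.

```python
# Minimal sketch: load XLM-R via torch.hub and extract features.
# Assumes the 'xlmr.large' checkpoint is publicly available (see TODO above).
import torch

xlmr = torch.hub.load('pytorch/fairseq', 'xlmr.large')
xlmr.eval()  # disable dropout for deterministic feature extraction

tokens = xlmr.encode('Hello world!')      # SentencePiece -> subword ids
features = xlmr.extract_features(tokens)  # (batch, seq_len, hidden)
print(features.shape)
```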
  6. 05 Oct, 2019 1 commit
  7. 30 Sep, 2019 1 commit
  8. 27 Sep, 2019 1 commit
    • Levenshtein Transformer paper code · 86857a58
      Changhan Wang authored
      Summary:
      Code for our NeurIPS paper [Levenshtein Transformer](https://arxiv.org/abs/1905.11006)
      * Added Levenshtein Transformer model, task and criterion classes
      * Added iterative NAT Transformer, Insertion Transformer and CMLM Transformer model classes as baselines
      * Added an option for prepending BOS to the dictionary class and translation task class
      
      Reviewed By: myleott
      
      Differential Revision: D17297372
      
      fbshipit-source-id: 54eca60831ae95dc721c2c34e882e1810ee575c7
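      As a rough illustration of the iterative refinement idea behind the Levenshtein Transformer (a toy sketch, not the code added by this commit), the loop below alternates deletion and insertion passes over a hypothesis until it stops changing; `predict_deletions` and `predict_insertions` are hypothetical stand-ins for the model's two edit policies.

```python
# Toy sketch of Levenshtein-Transformer-style refinement: alternate deletion
# and insertion passes until convergence. The predict_* callables are
# hypothetical placeholders, not fairseq APIs.
from typing import Callable, List

def refine(tokens: List[str],
           predict_deletions: Callable[[List[str]], List[bool]],
           predict_insertions: Callable[[List[str]], List[List[str]]],
           max_iters: int = 10) -> List[str]:
    for _ in range(max_iters):
        prev = list(tokens)

        # Deletion pass: keep only tokens the deletion policy marks as True.
        keep = predict_deletions(tokens)
        tokens = [t for t, k in zip(tokens, keep) if k]

        # Insertion pass: the insertion policy returns, for each of the
        # len(tokens) + 1 slots, a (possibly empty) list of new tokens.
        slots = predict_insertions(tokens)
        merged: List[str] = []
        for i, t in enumerate(tokens):
            merged.extend(slots[i])
            merged.append(t)
        merged.extend(slots[len(tokens)])
        tokens = merged

        if tokens == prev:  # converged: no edits changed the hypothesis
            break
    return tokens
```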
  9. 14 Aug, 2019 1 commit
  10. 13 Aug, 2019 1 commit
  11. 09 Aug, 2019 1 commit
  12. 30 Jul, 2019 1 commit
  13. 29 Jul, 2019 2 commits
  14. 20 Jun, 2019 1 commit
  15. 11 Jun, 2019 1 commit
  16. 30 May, 2019 1 commit
    • Clarify mixed precision training support (#766) · d5f76d74
      Khoa Ho authored
      Summary:
      Change the wording to avoid confusion. Mixed precision ensures both higher arithmetic throughput and numerical stability; it is not synonymous with pure half-precision/FP16 training. Also mention tensor cores, since older-generation GPUs without tensor cores don't support true mixed-precision training.
      Pull Request resolved: https://github.com/pytorch/fairseq/pull/766
      
      Differential Revision: D15559565
      
      Pulled By: myleott
      
      fbshipit-source-id: c71e720772657bb3e8ad330b58bf69e23beb614e
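      As background for the distinction drawn above, here is a generic PyTorch mixed-precision training loop using `torch.cuda.amp` (an illustration of the concept only; fairseq's `--fp16` path keeps its own FP32 master weights rather than using this API): weights stay in FP32 while matmul-heavy ops run in FP16 on tensor cores, and dynamic loss scaling guards against gradient underflow.

```python
# Generic mixed-precision loop with torch.cuda.amp, shown only to illustrate
# the concept; this is not fairseq's --fp16 implementation.
import torch

model = torch.nn.Linear(1024, 1024).cuda()   # parameters stay in FP32
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()         # dynamic loss scaling

for _ in range(10):
    x = torch.randn(32, 1024, device='cuda')
    optimizer.zero_grad()
    # Eligible ops (e.g. matmuls) run in FP16 on tensor cores inside autocast.
    with torch.cuda.amp.autocast():
        loss = model(x).pow(2).mean()
    scaler.scale(loss).backward()  # scale loss so FP16 grads don't underflow
    scaler.step(optimizer)         # unscales grads; skips step on inf/nan
    scaler.update()
```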
  17. 29 Apr, 2019 1 commit
  18. 15 Mar, 2019 1 commit
    • 0.6.1 -> 0.6.2 (#577) · e6422528
      Myle Ott authored
      Summary:
      Changelog:
      - 998ba4f: Add language models from Baevski & Auli (2018)
      - 4294c4f6: Add mixture of experts code from Shen et al. (2019)
      - 00493490: Add example for multilingual training
      - 48d9afbe: Speed improvements, including fused operators from apex
      - 44d27e64: Add Tensorboard support
      - d17fa851: Add Adadelta optimizer
      - 9e1c880f: Add `FairseqEncoderModel`
      - b65c579b: Add `FairseqTask.inference_step` to modularize generate.py
      - 2ad1178e: Add back `--curriculum`
      - Misc bug fixes and other features
      
      Pull Request resolved: https://github.com/pytorch/fairseq/pull/577
      
      Differential Revision: D14481233
      
      Pulled By: myleott
      
      fbshipit-source-id: 4ff8625ef1c0b24273fc65df7c5658e3c932e8b7
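      Of the items above, `FairseqTask.inference_step` is the main API change; a hedged sketch of the pattern it enables follows. The `args`, `task`, `models`, and `sample` objects are assumed to come from the usual fairseq argument parsing and checkpoint loading code, and argument names are approximate for this version.

```python
# Sketch of the generation pattern FairseqTask.inference_step enables:
# the task owns decoding, so generate.py just loops over batches.
# `args`, `task`, `models`, and `sample` are assumed to have been built by
# the standard fairseq setup code (not shown here).
import torch

generator = task.build_generator(args)   # e.g. a SequenceGenerator
with torch.no_grad():
    hypos = task.inference_step(generator, models, sample)

for sent_hypos in hypos:                 # one list of hypotheses per sentence
    best = sent_hypos[0]                 # highest-scoring hypothesis
    print(task.target_dictionary.string(best['tokens']))
```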
  19. 14 Mar, 2019 1 commit
  20. 23 Feb, 2019 1 commit
  21. 22 Feb, 2019 1 commit
  22. 09 Feb, 2019 1 commit
    • Add fairseq to PyPI (#495) · fbd4cef9
      Myle Ott authored
      Summary:
      - fairseq can now be installed via pip: `pip install fairseq`
      - command-line tools are globally accessible: `fairseq-preprocess`, `fairseq-train`, `fairseq-generate`, etc.
      Pull Request resolved: https://github.com/pytorch/fairseq/pull/495
      
      Differential Revision: D14017761
      
      Pulled By: myleott
      
      fbshipit-source-id: 10c9f6634a3056074eac2f33324b4f1f404d4235
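      A quick sanity check (illustrative only, not part of this commit) that the pip install put the console scripts listed above on the PATH:

```python
# Verify that the fairseq console scripts installed by pip are on PATH.
import shutil

for tool in ("fairseq-preprocess", "fairseq-train", "fairseq-generate"):
    path = shutil.which(tool)
    print(f"{tool}: {path or 'not found on PATH'}")
```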
  23. 25 Jan, 2019 1 commit
  24. 14 Jan, 2019 1 commit
    • Fixes (#442) · d9284ee7
      Huihui Fan authored
      Summary:
      Minor fixes:
      1) Add the fairseq logo
      2) Encoder padding for fconv self-attention
      3) Legacy DDP change
      Pull Request resolved: https://github.com/pytorch/fairseq/pull/442
      
      Differential Revision: D13651715
      
      Pulled By: myleott
      
      fbshipit-source-id: ac93c80f1dbffdfe03fbd4b8a8ea527aecb576a7
  25. 05 Jan, 2019 1 commit
  26. 02 Oct, 2018 1 commit
  27. 24 Sep, 2018 1 commit
  28. 18 Sep, 2018 2 commits
  29. 03 Sep, 2018 1 commit
  30. 27 Jul, 2018 1 commit
  31. 02 Jul, 2018 1 commit
  32. 16 Jun, 2018 1 commit
  33. 15 Jun, 2018 4 commits
  34. 01 May, 2018 1 commit
  35. 01 Mar, 2018 1 commit