1. 24 Mar, 2020 1 commit
  2. 02 Mar, 2020 1 commit
  3. 21 Feb, 2020 1 commit
  4. 04 Feb, 2020 1 commit
    • pass langs parameter to certain XLM models (#2734) · d1ab1fab
      Yuval Pinter authored
      * pass langs parameter to certain XLM models
      
      Adding an argument that specifies the language the SQuAD dataset is in so language-sensitive XLMs (e.g. `xlm-mlm-tlm-xnli15-1024`) don't default to language `0`.
      Resolves issue #1799.
      
      * fixing from `make style`
      
      * fixing style (again)
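      Not the actual `run_squad.py` change, but a minimal sketch of the idea behind it: look the
      dataset's language up in the tokenizer's `lang2id` mapping and repeat it for every token,
      rather than leaving the language id at the default `0`. The checkpoint name comes from the
      commit message; the English toy question/context pair and the `XLMForQuestionAnsweringSimple`
      head are illustrative assumptions, not code from the commit.

          import torch
          from transformers import XLMTokenizer, XLMForQuestionAnsweringSimple

          # Language-sensitive checkpoint named in the commit message.
          model_name = "xlm-mlm-tlm-xnli15-1024"
          tokenizer = XLMTokenizer.from_pretrained(model_name)
          model = XLMForQuestionAnsweringSimple.from_pretrained(model_name)

          # Toy English QA pair (illustrative only).
          question, context = "Who wrote Faust?", "Faust was written by Goethe."
          inputs = tokenizer.encode_plus(question, context, return_tensors="pt")

          # Build a `langs` tensor: the dataset's language id repeated for each
          # token, instead of the implicit default of language 0.
          lang_id = tokenizer.lang2id["en"]
          inputs["langs"] = torch.full_like(inputs["input_ids"], lang_id)

          outputs = model(**inputs)  # start/end logits use the right language embeddings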
  5. 28 Jan, 2020 1 commit
  6. 17 Jan, 2020 1 commit
  7. 16 Jan, 2020 1 commit
  8. 08 Jan, 2020 2 commits
  9. 06 Jan, 2020 2 commits
  10. 22 Dec, 2019 5 commits
  11. 21 Dec, 2019 3 commits
    • Reformat source code with black. · fa84ae26
      Aymeric Augustin authored
      This is the result of:
      
          $ black --line-length 119 examples templates transformers utils hubconf.py setup.py
      
      There are a lot of fairly long lines in the project. As a consequence, I'm
      picking the longest widely accepted line length: 119 characters.
      
      This is also Thomas' preference, because it allows for explicit variable
      names, which make the code easier to understand.
    • fix merge · b03872aa
      thomwolf authored
    • fix merge · 8a2be93b
      thomwolf authored
  12. 19 Dec, 2019 1 commit
  13. 16 Dec, 2019 1 commit
  14. 14 Dec, 2019 1 commit
  15. 13 Dec, 2019 2 commits
  16. 12 Dec, 2019 1 commit
  17. 11 Dec, 2019 1 commit
  18. 10 Dec, 2019 2 commits
  19. 09 Dec, 2019 1 commit
  20. 05 Dec, 2019 2 commits
  21. 04 Dec, 2019 3 commits
  22. 03 Dec, 2019 2 commits
    • Working evaluation · de276de1
      LysandreJik authored
    • Always use SequentialSampler during evaluation · 96e83506
      Ethan Perez authored
      When evaluating, shouldn't we always use the SequentialSampler instead of DistributedSampler? Evaluation only runs on 1 GPU no matter what, so if you use the DistributedSampler with N GPUs, I think you'll only evaluate on 1/N of the evaluation set. That's at least what I'm finding when I run an older/modified version of this repo.
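      A minimal sketch of the sampler choice described above, not the exact example-script code:
      `DistributedSampler` shards the dataset across processes, so reusing it for a single-process
      evaluation silently drops part of the eval set, while `SequentialSampler` walks the whole
      dataset in order. The toy `TensorDataset` and batch size are illustrative assumptions.

          import torch
          from torch.utils.data import DataLoader, SequentialSampler, TensorDataset

          def build_eval_dataloader(eval_dataset, batch_size=8):
              # Always evaluate sequentially on a single process, even if training
              # sharded batches with torch.utils.data.distributed.DistributedSampler.
              eval_sampler = SequentialSampler(eval_dataset)
              return DataLoader(eval_dataset, sampler=eval_sampler, batch_size=batch_size)

          # Toy dataset just to show the call (illustrative only).
          eval_dataset = TensorDataset(torch.arange(100).float().unsqueeze(1))
          eval_dataloader = build_eval_dataloader(eval_dataset)
          for (batch,) in eval_dataloader:
              pass  # run the model's forward pass here; each example is seen exactly once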
  23. 28 Nov, 2019 1 commit
  24. 26 Nov, 2019 1 commit
  25. 22 Nov, 2019 2 commits