1. 04 Mar, 2021 · 2 commits
2. 27 Feb, 2021 · 1 commit
3. 25 Feb, 2021 · 1 commit
4. 08 Feb, 2021 · 1 commit
5. 05 Feb, 2021 · 1 commit
6. 28 Jan, 2021 · 1 commit
7. 27 Jan, 2021 · 1 commit
8. 26 Jan, 2021 · 2 commits
9. 25 Jan, 2021 · 1 commit
10. 13 Jan, 2021 · 2 commits
11. 06 Jan, 2021 · 1 commit
12. 05 Jan, 2021 · 1 commit
13. 19 Dec, 2020 · 1 commit
14. 17 Nov, 2020 · 1 commit
15. 12 Nov, 2020 · 1 commit
16. 30 Oct, 2020 · 1 commit
17. 28 Oct, 2020 · 1 commit
18. 22 Oct, 2020 · 1 commit
19. 21 Sep, 2020 · 2 commits
20. 07 Sep, 2020 · 1 commit
21. 02 Jun, 2020 · 1 commit
22. 21 May, 2020 · 1 commit
23. 19 May, 2020 · 1 commit
24. 08 May, 2020 · 1 commit
25. 07 May, 2020 · 2 commits
    • BIG Reorganize examples (#4213) · 0ae96ff8
      Julien Chaumond authored
      * Created using Colaboratory
      
      * [examples] reorganize files
      
      * remove run_tpu_glue.py as superseded by TPU support in Trainer
      
      * Bugfix: int, not tuple
      
      * move files around
    • Tpu trainer (#4146) · ebf80e2e
      Lysandre Debut authored
      * wip
      
      * wip
      
      * a last wip
      
      * Better logging when using TPUs
      
      * Correct argument name
      
      * Tests
      
      * fix
      
      * Metrics in evaluation
      
      * Update src/transformers/training_args.py
      
      * [tpu] Use launcher script instead
      
      * [tpu] lots of tweaks
      
      * Fix formatting
      Co-authored-by: Julien Chaumond <chaumond@gmail.com>
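
      A note on "[tpu] Use launcher script instead": TPU training runs one
      process per core, so the script is started through a spawning launcher
      rather than executed directly (in the transformers examples this
      launcher is xla_spawn.py). A minimal sketch of the underlying pattern,
      assuming torch_xla is installed and a TPU is attached; main() here is a
      hypothetical stand-in for the training script's real entry point:

          import torch_xla.distributed.xla_multiprocessing as xmp

          def main(index):
              # Placeholder for the training script's real work; each spawned
              # process receives its ordinal (0..nprocs-1) and a real run
              # would build the model and start training here.
              print(f"process {index} started")

          if __name__ == "__main__":
              # Fork one training process per TPU core, e.g. 8 on a v3-8;
              # xmp.spawn invokes main(index) in each child process.
              xmp.spawn(main, args=(), nprocs=8)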
26. 01 May, 2020 · 1 commit
27. 24 Apr, 2020 · 1 commit
28. 22 Apr, 2020 · 1 commit
    • Trainer (#3800) · dd9d483d
      Julien Chaumond authored
      * doc
      
      * [tests] Add sample files for a regression task
      
      * [HUGE] Trainer
      
      * Feedback from @sshleifer
      
      * Feedback from @thomwolf + logging tweak
      
      * [file_utils] when downloading concurrently, get_from_cache will use the cached file for subsequent processes
      
      * [glue] Use default max_seq_length of 128 like before
      
      * [glue] move DataTrainingArguments around
      
      * [ner] Change interface of InputExample, and align run_{tf,pl}
      
      * Re-align the pl scripts a little bit
      
      * ner
      
      * [ner] Add integration test
      
      * Fix language_modeling with API tweak
      
      * [ci] Tweak loss target
      
      * Don't break console output
      
      * amp.initialize: model must be on right device before
      
      * [multiple-choice] update for Trainer
      
      * Re-align to 827d6d6e
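
      The Trainer introduced here replaces the hand-written training loops in
      the example scripts. A minimal sketch of the resulting API, as it looks
      in the released library (exact signatures at this commit may differ;
      the toy dataset is a hypothetical stand-in for the GLUE processors,
      padded to the default max_seq_length of 128 kept by this commit):

          import torch
          from torch.utils.data import Dataset
          from transformers import (
              AutoModelForSequenceClassification,
              AutoTokenizer,
              Trainer,
              TrainingArguments,
          )

          class ToyDataset(Dataset):
              # Two hand-made examples; real runs feed task-specific datasets.
              def __init__(self, tokenizer):
                  enc = tokenizer(
                      ["a positive example", "a negative example"],
                      padding="max_length",
                      truncation=True,
                      max_length=128,
                  )
                  self.items = [
                      {
                          "input_ids": torch.tensor(enc["input_ids"][i]),
                          "attention_mask": torch.tensor(enc["attention_mask"][i]),
                          "labels": torch.tensor(i % 2),
                      }
                      for i in range(2)
                  ]

              def __len__(self):
                  return len(self.items)

              def __getitem__(self, i):
                  return self.items[i]

          tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
          model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

          trainer = Trainer(
              model=model,
              args=TrainingArguments(output_dir="out", num_train_epochs=1),
              train_dataset=ToyDataset(tokenizer),
              eval_dataset=ToyDataset(tokenizer),
          )
          trainer.train()
          print(trainer.evaluate())  # returns a dict of eval metrics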
29. 16 Apr, 2020 · 1 commit
30. 10 Apr, 2020 · 2 commits
31. 01 Apr, 2020 · 1 commit
32. 24 Mar, 2020 · 1 commit
33. 19 Mar, 2020 · 1 commit
34. 03 Mar, 2020 · 1 commit