"tests/models/gpt2/test_tokenization_gpt2.py" did not exist on "a3c5883f2c9a12360cee0734dfb262f92b912b24"
  1. 21 Apr, 2021 1 commit
  2. 13 Apr, 2021 1 commit
  3. 12 Apr, 2021 1 commit
  4. 06 Apr, 2021 2 commits
  5. 23 Mar, 2021 1 commit
  6. 16 Mar, 2021 2 commits
  7. 15 Mar, 2021 1 commit
  8. 08 Mar, 2021 1 commit
  9. 27 Feb, 2021 1 commit
  10. 11 Feb, 2021 1 commit
    • Update run_xnli.py to use Datasets library (#9829) · 8dcfaea0
      Qbiwan authored
      * remove xnli_compute_metrics; add load_dataset, load_metric, set_seed, metric.compute, load_metric
      
      * fix
      
      * fix
      
      * fix
      
      * push
      
      * fix
      
      * everything works
      
      * fix init
      
      * fix
      
      * special treatment for sepconv1d
      
      * style
      
      * 🙏🏽
      
      * add doc and cleanup
      
      * fix doc
      
      * fix doc again
      
      * fix doc again
      
      * Apply suggestions from code review
      
      * make style
      
      * Proposal that should work
      
      * Remove needless code
      
      * Fix test
      
      * Apply suggestions from code review
      
      * remove xnli_compute_metrics; add load_dataset, load_metric, set_seed, metric.compute, load_metric
      
      * amend README
      
      * removed data_args.task_name and replaced it with task_name = "xnli"; use the split function to load the train and validation datasets separately; remove __post_init__; remove the --task_name flag from the README.
      
      * removed the task_to_keys dict; use the string "xnli" instead of the variable task_name; change preprocess_function to use examples["premise"] and examples["hypothesis"] directly; remove sentence1_key and sentence2_key; change the compute_metrics function to handle only the accuracy metric; add a condition for when train_language is None in datasets.load_dataset()
      
      * removed `torch.distributed.barrier()` and `import torch` as `from_pretrained` is able to do the work; amend README
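      A minimal sketch of the Datasets-based flow this commit describes (not
      the PR's actual run_xnli.py; the tokenizer checkpoint, languages, and
      max_length below are illustrative assumptions):
      
          from datasets import load_dataset, load_metric
          from transformers import AutoTokenizer
          
          # Train and validation are loaded as separate splits, each with its
          # own XNLI language config, as the commit message notes.
          train_dataset = load_dataset("xnli", "en", split="train")
          eval_dataset = load_dataset("xnli", "de", split="validation")
          
          # load_metric replaces the removed xnli_compute_metrics helper;
          # for XNLI the metric is plain accuracy.
          metric = load_metric("xnli")
          
          # Tokenizer choice is an assumption, not taken from the PR.
          tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
          
          def preprocess_function(examples):
              # premise/hypothesis are used directly, with no task_to_keys
              # indirection and no sentence1_key/sentence2_key variables.
              return tokenizer(examples["premise"], examples["hypothesis"],
                               truncation=True, max_length=128)
          
          def compute_metrics(predictions, labels):
              return metric.compute(predictions=predictions, references=labels)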
  11. 05 Feb, 2021 1 commit
  12. 17 Nov, 2020 1 commit
  13. 12 Nov, 2020 1 commit
  14. 15 Sep, 2020 1 commit
  15. 26 Aug, 2020 1 commit
  16. 28 Jun, 2020 1 commit
  17. 02 Jun, 2020 1 commit
    • Kill model archive maps (#4636) · d4c2cb40
      Julien Chaumond authored
      * Kill model archive maps
      
      * Fixup
      
      * Also kill model_archive_map for MaskedBertPreTrainedModel
      
      * Unhook config_archive_map
      
      * Tokenizers: align with model id changes
      
      * make style && make quality
      
      * Fix CI
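      In user-facing terms, this change means from_pretrained resolves
      checkpoints by model id on the model hub rather than consulting the
      removed hard-coded archive-map dicts; a minimal sketch of the resulting
      behavior (not the PR's diff):
      
          from transformers import GPT2Tokenizer
          
          # Resolution goes through the "gpt2" model id; no archive-map
          # dictionary lookup is involved after this change.
          tokenizer = GPT2Tokenizer.from_pretrained("gpt2")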
  18. 07 May, 2020 1 commit
    • BIG Reorganize examples (#4213) · 0ae96ff8
      Julien Chaumond authored
      * Created using Colaboratory
      
      * [examples] reorganize files
      
      * remove run_tpu_glue.py as superseded by TPU support in Trainer
      
      * Bugfix: int, not tuple
      
      * move files around
  19. 20 Apr, 2020 1 commit
  20. 10 Apr, 2020 1 commit
  21. 01 Apr, 2020 1 commit
  22. 02 Mar, 2020 1 commit
  23. 14 Feb, 2020 1 commit
  24. 28 Jan, 2020 1 commit
  25. 06 Jan, 2020 2 commits
  26. 22 Dec, 2019 5 commits
  27. 21 Dec, 2019 1 commit
    • Reformat source code with black. · fa84ae26
      Aymeric Augustin authored
      This is the result of:
      
          $ black --line-length 119 examples templates transformers utils hubconf.py setup.py
      
      There are a lot of fairly long lines in the project. As a consequence,
      I'm picking the longest widely accepted line length, 119 characters.
      
      This is also Thomas' preference, because it allows for explicit variable
      names, which make the code easier to understand.
  28. 11 Dec, 2019 1 commit
  29. 03 Dec, 2019 1 commit
  30. 27 Nov, 2019 4 commits