  1. 15 Dec, 2021 2 commits
  2. 09 Dec, 2021 2 commits
  3. 28 Oct, 2021 2 commits
  4. 27 Sep, 2021 2 commits
  5. 31 Aug, 2021 2 commits
  6. 22 Jul, 2021 2 commits
  7. 28 Jun, 2021 1 commit
  8. 25 Jun, 2021 2 commits
  9. 23 Jun, 2021 2 commits
  10. 17 Jun, 2021 2 commits
  11. 14 Jun, 2021 1 commit
  12. 10 Jun, 2021 1 commit
  13. 25 May, 2021 1 commit
  14. 12 May, 2021 2 commits
  15. 29 Apr, 2021 1 commit
  16. 26 Apr, 2021 1 commit
  17. 21 Apr, 2021 1 commit
  18. 13 Apr, 2021 1 commit
  19. 12 Apr, 2021 1 commit
  20. 06 Apr, 2021 2 commits
  21. 23 Mar, 2021 1 commit
  22. 16 Mar, 2021 2 commits
  23. 15 Mar, 2021 1 commit
  24. 08 Mar, 2021 1 commit
  25. 27 Feb, 2021 1 commit
  26. 11 Feb, 2021 1 commit
    • Update run_xnli.py to use Datasets library (#9829) · 8dcfaea0
      Qbiwan authored
      * remove xnli_compute_metrics, add load_dataset, load_metric, set_seed,metric.compute,load_metric
      
      * fix
      
      * fix
      
      * fix
      
      * push
      
      * fix
      
      * everything works
      
      * fix init
      
      * fix
      
      * special treatment for sepconv1d
      
      * style
      
      * 🙏🏽
      
      * add doc and cleanup

      * fix doc
      
      * fix doc again
      
      * fix doc again
      
      * Apply suggestions from code review
      
      * make style
      
      * Proposal that should work
      
      * Remove needless code
      
      * Fix test
      
      * Apply suggestions from code review
      
      * remove xnli_compute_metrics, add load_dataset, load_metric, set_seed,metric.compute,load_metric
      
      * amend README
      
      * removed data_args.task_name and replaced with task_name = "xnli"; use split function to load train and validation dataset separately; remove __post_init__; remove flag --task_name from README.
      
      * removed dict task_to_keys, use str "xnli" instead of variable task_name, change preprocess_function to use examples["premise"], examples["hypothesis"] directly, remove sentence1_key and sentence2_key, change compute_metrics function to cater only to accuracy metric, add condition for train_language is None when using datasets.load_dataset()
      
      * removed `torch.distributed.barrier()` and `import torch` as `from_pretrained` is able to do the work; amend README
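The bullets above amount to swapping the script's bespoke XNLI plumbing for the Datasets library: premise/hypothesis columns are fed to the tokenizer directly, and the removed `xnli_compute_metrics` helper is reduced to plain accuracy. A minimal sketch of that shape, assuming hypothetical argument names (the actual signatures in `run_xnli.py` may differ); the `load_dataset` calls are shown only as comments since they need a network fetch:

```python
from typing import Dict, List

# Data-loading flow described in the commit message (sketch, not verified
# against run_xnli.py):
#
#   from datasets import load_dataset
#   train_ds = load_dataset("xnli", train_language, split="train")
#   eval_ds = load_dataset("xnli", language, split="validation")

def preprocess_function(examples, tokenizer):
    # Use the premise/hypothesis columns directly, with no task_to_keys
    # indirection, as the commit message describes.
    return tokenizer(examples["premise"], examples["hypothesis"], truncation=True)

def compute_metrics(predictions: List[int], labels: List[int]) -> Dict[str, float]:
    # Accuracy only, standing in for the removed xnli_compute_metrics helper.
    correct = sum(int(p == label) for p, label in zip(predictions, labels))
    return {"accuracy": correct / len(labels)}
```

For example, `compute_metrics([0, 1, 2], [0, 1, 1])` returns `{"accuracy": 2/3}`, matching two of three predictions.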
  27. 05 Feb, 2021 1 commit
  28. 17 Nov, 2020 1 commit