1. 22 Feb, 2021 2 commits
  2. 19 Feb, 2021 2 commits
  3. 18 Feb, 2021 2 commits
  4. 17 Feb, 2021 1 commit
  5. 16 Feb, 2021 1 commit
  6. 15 Feb, 2021 2 commits
  7. 12 Feb, 2021 1 commit
  8. 11 Feb, 2021 2 commits
• [DeepSpeed in notebooks] Jupyter + Colab (#10130) · b54cb0bd
      Stas Bekman authored
      * init devices/setup explicitly
      
      * docs + test
      
      * simplify
      
      * cleanup
      
      * cleanup
      
      * cleanup
      
      * correct the required dist setup
      
      * derive local_rank from env LOCAL_RANK
      b54cb0bd
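The commit above is about running the DeepSpeed integration from a Jupyter or Colab notebook, where no distributed launcher exports the usual environment variables. Below is a minimal sketch of that kind of notebook setup, assuming a single GPU, the deepspeed package installed, and an existing DeepSpeed config file named ds_config.json; the file name and the commented-out Trainer lines are illustrative, not taken from the commit.

```python
import os

# DeepSpeed needs a distributed environment even for a single process,
# so the notebook emulates what the launcher would normally export.
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "9994"   # any free port
os.environ["RANK"] = "0"
os.environ["LOCAL_RANK"] = "0"       # per the commit message, local_rank is derived from this
os.environ["WORLD_SIZE"] = "1"

from transformers import TrainingArguments

# Passing a DeepSpeed config file enables the integration inside Trainer.
training_args = TrainingArguments(
    output_dir="output",
    deepspeed="ds_config.json",  # assumption: a valid DeepSpeed config lives here
)
# trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
# trainer.train()
```

With these variables set, the rest of the training loop proceeds as in a regular script; only the launcher's job of exporting the distributed setup has to be done by hand in the notebook.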
• Update run_xnli.py to use Datasets library (#9829) · 8dcfaea0
      Qbiwan authored
* remove xnli_compute_metrics; add load_dataset, load_metric, set_seed, and metric.compute
      
      * fix
      
      * fix
      
      * fix
      
      * push
      
      * fix
      
      * everything works
      
      * fix init
      
      * fix
      
      * special treatment for sepconv1d
      
      * style
      
* 🙏🏽
      
      * add doc and cleanup
      
      * fix doc
      
      * fix doc again
      
      * fix doc again
      
      * Apply suggestions from code review
      
      * make style
      
      * Proposal that should work
      
      * Remove needless code
      
      * Fix test
      
      * Apply suggestions from code review
      
      * amend README
      
* removed data_args.task_name and replaced it with task_name = "xnli"; use split to load the train and validation datasets separately; removed __post_init__; removed the --task_name flag from the README.
      
* removed the task_to_keys dict and use the string "xnli" instead of the task_name variable; changed preprocess_function to use examples["premise"] and examples["hypothesis"] directly, removing sentence1_key and sentence2_key; changed compute_metrics to cover only the accuracy metric; added a condition for train_language being None when calling datasets.load_dataset()
      
* removed `torch.distributed.barrier()` and `import torch`, since `from_pretrained` already handles this; amended the README
      8dcfaea0
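The commit above replaces the old XNLI-specific utilities in run_xnli.py with the Datasets library. Below is a hedged sketch of the loading and metric pattern the message describes, not the script itself; the language codes, model name, and truncation settings are assumptions for illustration.

```python
from datasets import load_dataset, load_metric
from transformers import AutoTokenizer

train_language = "de"   # in the script these would come from command-line arguments
eval_language = "de"

# Load the train and validation splits separately instead of going through
# a task_to_keys mapping keyed on task_name.
train_dataset = load_dataset("xnli", train_language, split="train")
eval_dataset = load_dataset("xnli", eval_language, split="validation")

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

def preprocess_function(examples):
    # Use the premise/hypothesis columns directly, no sentence1_key/sentence2_key.
    return tokenizer(examples["premise"], examples["hypothesis"], truncation=True)

train_dataset = train_dataset.map(preprocess_function, batched=True)
eval_dataset = eval_dataset.map(preprocess_function, batched=True)

# Accuracy-only metric in place of the removed xnli_compute_metrics helper.
metric = load_metric("xnli")

def compute_metrics(predictions, references):
    return metric.compute(predictions=predictions, references=references)
```

Per the last bullet of the commit message, no explicit `torch.distributed.barrier()` guard is needed around `from_pretrained` in this flow.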
  9. 10 Feb, 2021 2 commits
  10. 09 Feb, 2021 2 commits
  11. 08 Feb, 2021 6 commits
  12. 05 Feb, 2021 2 commits
  13. 03 Feb, 2021 2 commits
  14. 02 Feb, 2021 1 commit
  15. 01 Feb, 2021 3 commits
  16. 29 Jan, 2021 1 commit
  17. 28 Jan, 2021 1 commit
  18. 27 Jan, 2021 1 commit
  19. 26 Jan, 2021 3 commits
  20. 25 Jan, 2021 1 commit
  21. 23 Jan, 2021 1 commit
  22. 22 Jan, 2021 1 commit