1. 24 Sep, 2021 1 commit
  2. 07 Sep, 2021 1 commit
  3. 06 Sep, 2021 1 commit
  4. 02 Sep, 2021 1 commit
    • Add PyTorch image classification example (#13134) · 76c4d8bf
      Nathan Raw authored
      * add pytorch image classification example
      
      * 🔥 remove utils.py
      
      * 💄 fix flake8 style issues
      
      * 🔥 remove unnecessary line
      
      * limit dataset sizes
      
      * 📌 update reqs
      
      * 🎨 restructure - use datasets lib
      
      * 🎨 import transforms directly
      
      * 📝 add comments
      
      * 💄 style
      
      * 🔥 remove flag
      
      * 📌 update requirement warning
      
      * 📝 add vision README.md
      
      * 📝 update README.md
      
      * 📝 update README.md
      
      * 🎨 add image-classification tag to model card
      
      * 🚚 rename vision → image-classification
      
      * 📝 update image-classification README.md
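
      A minimal sketch of the pattern this commit's example converges on (load an image dataset with the `datasets` library and apply torchvision transforms directly, on a size-limited split); the dataset name, split size, and transform values below are assumptions, not the example's actual settings.

      ```python
      # Sketch only: `datasets` + torchvision transforms for image classification.
      # Dataset name, split size, and normalization values are illustrative assumptions.
      from datasets import load_dataset
      from torchvision.transforms import CenterCrop, Compose, Normalize, Resize, ToTensor

      dataset = load_dataset("cifar10", split="train[:1000]")  # limit dataset size

      transform = Compose([
          Resize(224),
          CenterCrop(224),
          ToTensor(),
          Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
      ])

      def preprocess(batch):
          # cifar10 stores PIL images under the "img" key
          batch["pixel_values"] = [transform(image.convert("RGB")) for image in batch["img"]]
          return batch

      dataset.set_transform(preprocess)  # applied lazily, on access
      ```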
  5. 08 Jun, 2021 1 commit
  6. 05 May, 2021 1 commit
  7. 04 May, 2021 1 commit
    • Reproducible checkpoint (#11582) · 6b241e0e
      Sylvain Gugger authored
      * Set generator in dataloader
      
      * Use generator in all random samplers
      
      * Checkpoint all RNG states
      
      * Final version
      
      * Quality
      
      * Test
      
      * Address review comments
      
      * Quality
      
      * Remove debug util
      
      * Add python and numpy RNGs
      
      * Split states in different files in distributed
      
      * Quality
      
      * local_rank for TPUs
      
      * Only use generator when accepted
      
      * Add test
      
      * Set seed to avoid flakiness
      
      * Make test less flaky
      
      * Quality
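
      A rough sketch of the RNG checkpointing idea in the bullets above (save and restore the python, numpy, and torch generator states together so a resumed run replays the same randomness); the helper names and file name are hypothetical, not the Trainer's actual code.

      ```python
      # Hypothetical helpers illustrating "checkpoint all RNG states"; not the Trainer's code.
      import random

      import numpy as np
      import torch

      def save_rng_states(path="rng_state.pth"):
          states = {
              "python": random.getstate(),
              "numpy": np.random.get_state(),
              "torch_cpu": torch.random.get_rng_state(),
          }
          if torch.cuda.is_available():
              states["torch_cuda"] = torch.cuda.get_rng_state_all()
          torch.save(states, path)

      def load_rng_states(path="rng_state.pth"):
          states = torch.load(path)
          random.setstate(states["python"])
          np.random.set_state(states["numpy"])
          torch.random.set_rng_state(states["torch_cpu"])
          if torch.cuda.is_available() and "torch_cuda" in states:
              torch.cuda.set_rng_state_all(states["torch_cuda"])
      ```

      The "set generator in dataloader" bullets are the complementary half: the random samplers draw from a dedicated, seeded `torch.Generator`, so their state can be captured and restored independently of the global one.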
  8. 21 Apr, 2021 1 commit
  9. 15 Mar, 2021 1 commit
  10. 08 Mar, 2021 2 commits
  11. 19 Jan, 2021 1 commit
  12. 14 Jan, 2021 1 commit
    • Switch metrics in run_ner to datasets (#9567) · 46ed56cf
      Sylvain Gugger authored
      * Switch metrics in run_ner to datasets
      
      * Add flag to return all metrics
      
      * Upstream (and rename) sortish_sampler
      
      * Revert "Upstream (and rename) sortish_sampler"
      
      This reverts commit e07d0dcf650c2bae36da011dd76c77a8bb4feb0d.
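
      A small sketch of what "switch metrics to datasets" means for run_ner (load the seqeval metric through the `datasets` library and optionally keep the per-entity breakdown); the toy labels below are made up, not run_ner output.

      ```python
      # Sketch: token-classification metrics via the `datasets` metric loader (seqeval).
      from datasets import load_metric

      metric = load_metric("seqeval")
      predictions = [["O", "B-PER", "I-PER", "O"], ["B-LOC", "O"]]
      references  = [["O", "B-PER", "I-PER", "O"], ["B-LOC", "O"]]
      results = metric.compute(predictions=predictions, references=references)

      # Without a "return all metrics" flag, keep only the overall scores;
      # with it, the per-entity dicts (PER, LOC, ...) are reported as well.
      overall = {k: v for k, v in results.items() if not isinstance(v, dict)}
      print(overall)  # overall_precision, overall_recall, overall_f1, overall_accuracy
      ```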
  13. 18 Dec, 2020 2 commits
  14. 11 Dec, 2020 1 commit
  15. 08 Dec, 2020 1 commit
  16. 10 Nov, 2020 1 commit
  17. 09 Nov, 2020 2 commits
  18. 29 Oct, 2020 2 commits
  19. 28 Oct, 2020 1 commit
  20. 22 Oct, 2020 1 commit
  21. 11 Oct, 2020 1 commit
  22. 14 Sep, 2020 2 commits
  23. 31 Aug, 2020 2 commits
  24. 25 Aug, 2020 1 commit
    • Allow tests in examples to use cuda or fp16, if they are available (#5512) · 4db2fa77
      Joel Hanson authored
      * Allow tests in examples to use cuda or fp16, if they are available

      The tests in examples didn't use cuda or fp16 even when they were available.
      - The text classification example (`run_glue.py`) didn't use fp16 even when it was available,
        although the device was already picked based on availability (cuda/cpu).
      - The language-modeling example (`run_language_modeling.py`) had a `--no_cuda` argument,
        which made the test run without cuda. fp16 is not enabled for this example because it hits
        an assertion error on perplexity (the value comes out higher).
      - cuda and fp16 are not enabled for the question-answering example (`run_squad.py`) because
        they change the f1 score.
      - The text-generation example (`run_generation.py`) will use cuda or fp16 whenever they are available.
      
      Resolves some of: #5057
      
      * Unwanted import of is_apex_available was removed
      
      * Changed the examples test file to pass --fp16 only if cuda and apex are available
      - run_glue.py: removed the check for cuda and fp16.
      - run_generation.py: removed the check for cuda and fp16 and removed the unwanted flag creation.
      
      * Incorrectly sorted imports fixed
      
      * The model needs to be converted to half precision
      
      * Formatted single line if condition statement to multiline
      
      * torch_device also needs to be checked before running the tests on examples
      - The example tests that use cuda should also depend on the USE_CUDA flag, like the rest of
        the test suite. Even if we decide to set USE_CUDA to True by default, setting USE_CUDA to
        False should result in the examples not using CUDA.
      
      * Format some of the code in test_examples file
      
      * The improper import of is_apex_available was sorted
      
      * Formatted the code to keep the style standards
      
      * Fixed the trailing comma in a list that was causing a flake8 issue
      
      * Import sort was fixed
      
      * Removed the clean_test_dir function as it's not used right now
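
      A hedged sketch of the conditional this commit adds to the example tests: only append --fp16 when a GPU and apex are actually present, and fall back to --no_cuda otherwise. The model name and base arguments are placeholders, not the real test's values.

      ```python
      # Illustrative only: build an example test's argv so --fp16 is passed
      # only when CUDA and apex are available; otherwise force CPU with --no_cuda.
      import torch

      def is_apex_available():
          try:
              import apex  # noqa: F401
              return True
          except ImportError:
              return False

      testargs = [
          "run_glue.py",
          "--model_name_or_path", "distilbert-base-cased",  # placeholder model
          "--do_train",
          "--do_eval",
      ]
      if torch.cuda.is_available():
          if is_apex_available():
              testargs.append("--fp16")
      else:
          testargs.append("--no_cuda")
      ```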
  25. 24 Aug, 2020 1 commit
  26. 17 Aug, 2020 1 commit
  27. 14 Aug, 2020 1 commit
  28. 13 Aug, 2020 1 commit
  29. 11 Aug, 2020 1 commit
    • add pl_glue example test (#6034) · f6c0680d
      Stas Bekman authored
      * add pl_glue example test
      
      * for now just test that it runs, next validate results of eval or predict?
      
      * complete the run_pl_glue test to validate the actual outcome
      
      * worked on my machine, CI gets less accuracy - trying higher epochs
      
      * match run_pl.sh hparams
      
      * more epochs?
      
      * trying higher lr
      
      * for now just test that the script runs to a completion
      
      * correct the comment
      
      * if cuda is available, add --fp16 --gpus=1 to cover more bases
      
      * style
  30. 08 Aug, 2020 1 commit
  31. 16 Jun, 2020 1 commit
  32. 27 May, 2020 1 commit
  33. 13 May, 2020 1 commit
  34. 07 May, 2020 1 commit
    • BIG Reorganize examples (#4213) · 0ae96ff8
      Julien Chaumond authored
      * Created using Colaboratory
      
      * [examples] reorganize files
      
      * remove run_tpu_glue.py as superseded by TPU support in Trainer
      
      * Bugfix: int, not tuple
      
      * move files around