1. 17 Oct, 2021 1 commit
  2. 14 Oct, 2021 3 commits
  3. 11 Oct, 2021 1 commit
    • [Speech Examples] Add pytorch speech pretraining (#13877) · d45fc7da
      Patrick von Platen authored
      * adapt wav2vec2
      
      * add example
      
      * add files
      
      * adapt
      
      * remove bogus file
      
      * Apply suggestions from code review
      
      * adapt files more
      
      * upload changes
      
      * del old files
      
      * up
      
      * up
      
      * up
      
      * up
      
      * up
      
      * correct gradient checkpointing
      
      * add readme
      
      * finish
      
      * finish
      
      * up
      
      * more fixes
      
      * up
      
      * up
      
      * add demo run to readme
      
      * up
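Gradient checkpointing, fixed in this commit for the wav2vec2 pretraining example, trades compute for memory by recomputing a block's activations during the backward pass instead of storing them. A minimal pure-PyTorch sketch of the idea (this is not the example's actual code; `Block` and `Encoder` are hypothetical names):

```python
import torch
from torch.utils.checkpoint import checkpoint


class Block(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin = torch.nn.Linear(dim, dim)

    def forward(self, x):
        return torch.relu(self.lin(x))


class Encoder(torch.nn.Module):
    """Stack of blocks; with checkpointing on, each block's activations
    are recomputed in backward rather than kept in memory."""

    def __init__(self, dim, depth, use_checkpointing=True):
        super().__init__()
        self.blocks = torch.nn.ModuleList(Block(dim) for _ in range(depth))
        self.use_checkpointing = use_checkpointing

    def forward(self, x):
        for blk in self.blocks:
            if self.use_checkpointing and self.training:
                # non-reentrant variant; recomputes blk(x) during backward
                x = checkpoint(blk, x, use_reentrant=False)
            else:
                x = blk(x)
        return x


enc = Encoder(8, 4).train()
out = enc(torch.randn(2, 8, requires_grad=True))
out.sum().backward()  # gradients flow through the recomputed blocks
```

In the actual transformers example, checkpointing is toggled on the model itself rather than hand-wired like this.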
  4. 01 Oct, 2021 1 commit
  5. 24 Sep, 2021 1 commit
  6. 07 Sep, 2021 1 commit
  7. 06 Sep, 2021 1 commit
  8. 02 Sep, 2021 1 commit
    • Add PyTorch image classification example (#13134) · 76c4d8bf
      Nathan Raw authored
      * add pytorch image classification example
      
      * 🔥 remove utils.py
      
      * 💄 fix flake8 style issues
      
      * 🔥 remove unnecessary line
      
      * limit dataset sizes
      
      * 📌 update reqs
      
      * 🎨 restructure - use datasets lib
      
      * 🎨 import transforms directly
      
      * 📝 add comments
      
      * 💄 style
      
      * 🔥 remove flag
      
      * 📌 update requirement warning
      
      * 📝 add vision README.md
      
      * 📝 update README.md
      
      * 📝 update README.md
      
      * 🎨 add image-classification tag to model card
      
      * 🚚 rename vision → image-classification
      
      * 📝 update image-classification README.md
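The image-classification example limits dataset sizes and feeds batches through a standard training loop. A toy sketch of that shape, with random tensors standing in for images (the flag name `max_train_samples` mirrors the example's CLI, but the code here is illustrative only):

```python
import torch
from torch.utils.data import DataLoader, Subset, TensorDataset

# Hypothetical stand-in for an image dataset: 64 RGB 32x32 "images"
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))
dataset = TensorDataset(images, labels)

# Limit the dataset size, as the example does with --max_train_samples
max_train_samples = 32
dataset = Subset(dataset, range(max_train_samples))

# Trivial classifier in place of a pretrained vision model
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for batch_images, batch_labels in DataLoader(dataset, batch_size=8):
    loss = torch.nn.functional.cross_entropy(model(batch_images), batch_labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The real example instead loads data with the datasets library and applies torchvision-style transforms on the fly.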
  9. 08 Jun, 2021 1 commit
  10. 05 May, 2021 1 commit
  11. 04 May, 2021 1 commit
    • Reproducible checkpoint (#11582) · 6b241e0e
      Sylvain Gugger authored
      * Set generator in dataloader
      
      * Use generator in all random samplers
      
      * Checkpoint all RNG states
      
      * Final version
      
      * Quality
      
      * Test
      
      * Address review comments
      
      * Quality
      
      * Remove debug util
      
      * Add python and numpy RNGs
      
      * Split states in different files in distributed
      
      * Quality
      
      * local_rank for TPUs
      
      * Only use generator when accepted
      
      * Add test
      
      * Set seed to avoid flakiness
      
      * Make test less flaky
      
      * Quality
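The core idea of this commit is to checkpoint every RNG state so that training resumed from a checkpoint is bit-for-bit reproducible. A stdlib-only sketch of saving and restoring one RNG (the actual Trainer also stores the numpy, torch, and per-device CUDA states, split into separate files in distributed runs; the helper names below are hypothetical):

```python
import random


def save_rng_state():
    """Capture the current RNG state alongside a model checkpoint."""
    return {"python": random.getstate()}


def load_rng_state(state):
    """Restore the RNG so post-resume draws match an uninterrupted run."""
    random.setstate(state["python"])


random.seed(42)
_ = [random.random() for _ in range(3)]   # some training already happened

state = save_rng_state()                   # checkpoint
run_a = [random.random() for _ in range(3)]  # uninterrupted continuation

load_rng_state(state)                      # resume from checkpoint
run_b = [random.random() for _ in range(3)]  # resumed continuation

assert run_a == run_b  # identical draws after resuming
```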
  12. 21 Apr, 2021 1 commit
  13. 15 Mar, 2021 1 commit
  14. 08 Mar, 2021 2 commits
  15. 19 Jan, 2021 1 commit
  16. 14 Jan, 2021 1 commit
    • Switch metrics in run_ner to datasets (#9567) · 46ed56cf
      Sylvain Gugger authored
      * Switch metrics in run_ner to datasets
      
      * Add flag to return all metrics
      
      * Upstream (and rename) sortish_sampler
      
      * Revert "Upstream (and rename) sortish_sampler"
      
      This reverts commit e07d0dcf650c2bae36da011dd76c77a8bb4feb0d.
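After this commit, run_ner computes NER metrics through the datasets library's seqeval metric, with a flag to return per-entity-type scores rather than only the overall ones. As a rough illustration of what "all metrics" means, here is a simplified token-level stand-in (real seqeval scores entities, not tokens; `per_type_f1` is a hypothetical name):

```python
from collections import defaultdict


def per_type_f1(true_seqs, pred_seqs):
    """Token-level precision/recall/F1 per label, ignoring the "O" tag.
    A simplified stand-in for seqeval's entity-level metrics."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for true, pred in zip(true_seqs, pred_seqs):
        for t, p in zip(true, pred):
            if t == p and t != "O":
                tp[t] += 1
            else:
                if p != "O":
                    fp[p] += 1
                if t != "O":
                    fn[t] += 1
    metrics = {}
    for label in set(tp) | set(fp) | set(fn):
        prec = tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0
        rec = tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        metrics[label] = {"precision": prec, "recall": rec, "f1": f1}
    return metrics
```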
  17. 18 Dec, 2020 2 commits
  18. 11 Dec, 2020 1 commit
  19. 08 Dec, 2020 1 commit
  20. 10 Nov, 2020 1 commit
  21. 09 Nov, 2020 2 commits
  22. 29 Oct, 2020 2 commits
  23. 28 Oct, 2020 1 commit
  24. 22 Oct, 2020 1 commit
  25. 11 Oct, 2020 1 commit
  26. 14 Sep, 2020 2 commits
  27. 31 Aug, 2020 2 commits
  28. 25 Aug, 2020 1 commit
    • Allow tests in examples to use cuda or fp16, if they are available (#5512) · 4db2fa77
      Joel Hanson authored
      * Allow tests in examples to use cuda or fp16, if they are available
      
      The tests in examples didn't use cuda or fp16 even when they were available.
      - The text classification example (`run_glue.py`) didn't use fp16 even when it was available,
        though the device was chosen based on availability (cuda/cpu).
      - The language-modeling example (`run_language_modeling.py`) had a `--no_cuda` argument,
        which made the test run without cuda. This example has an issue when running with fp16,
        so fp16 is not enabled (an assertion error on perplexity due to its higher value).
      - cuda and fp16 are not enabled for the question-answering example (`run_squad.py`), as it shows a
        difference in the f1 score.
      - The text-generation example (`run_generation.py`) will use cuda or fp16 whenever they are available.
      
      Resolves some of: #5057
      
      * Unwanted import of is_apex_available was removed
      
      * Made changes to the test examples file to pass --fp16 only if cuda and apex are available
      - run_glue.py: Removed the check for cuda and fp16.
      - run_generation.py: Removed the check for cuda and fp16, and removed unwanted flag creation.
      
      * Incorrectly sorted imports fixed
      
      * The model needs to be converted to half precision
      
      * Formatted single line if condition statement to multiline
      
      * The torch_device also needed to be checked before running the tests on examples
      - The tests in examples which use cuda should also depend on the USE_CUDA flag,
        similarly to the rest of the test suite. Even if we decide to set USE_CUDA to
        True by default, setting USE_CUDA to False should result in the examples not using CUDA
      
      * Format some of the code in test_examples file
      
      * The improper import of is_apex_available was sorted
      
      * Formatted the code to keep the style standards
      
      * The trailing comma at the end of a list that caused a flake8 issue was fixed
      
      * Import sort was fixed
      
      * Removed the clean_test_dir function as it's not used right now
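The gating described in this commit, honoring a USE_CUDA override and only passing `--fp16` when both cuda and apex are usable, can be sketched as follows (a minimal sketch assuming PyTorch; `cuda_enabled` and `testing_args` are hypothetical helper names, and the real test suite's flag handling may differ):

```python
import os

import torch


def cuda_enabled():
    """Respect an explicit USE_CUDA=0/false override, as the commit
    describes for the rest of the test suite."""
    flag = os.environ.get("USE_CUDA", "").lower()
    if flag in ("0", "false", "no"):
        return False
    return torch.cuda.is_available()


def testing_args():
    """Append --fp16 only when CUDA is usable and apex is importable."""
    args = []
    if cuda_enabled():
        try:
            from apex import amp  # noqa: F401  (apex is optional)
            args.append("--fp16")
        except ImportError:
            pass
    return args
```

On a CPU-only machine, or with USE_CUDA=0 set, `testing_args()` returns an empty list and the examples run without CUDA or mixed precision.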
  29. 24 Aug, 2020 1 commit
  30. 17 Aug, 2020 1 commit
  31. 14 Aug, 2020 1 commit
  32. 13 Aug, 2020 1 commit