1. 28 Aug, 2020 2 commits
    • prepare_seq2seq_batch makes labels; decoder_input_ids are made later. (#6654) · 9336086a
      Sam Shleifer authored
      * broken test
      
      * batch parity
      
      * tests pass
      
      * boom boom
      
      * boom boom
      
      * split out bart tokenizer tests
      
      * fix tests
      
      * boom boom
      
      * Fixed dataset bug
      
      * Fix marian
      
      * Undo extra
      
      * Get marian working
      
      * Fix t5 tok tests
      
      * Test passing
      
      * Cleanup
      
      * better assert msg
      
      * require torch
      
      * Fix mbart tests
      
      * undo extra decoder_attn_mask change
      
      * Fix import
      
      * pegasus tokenizer can ignore src_lang kwargs
      
      * unused kwarg test cov
      
      * boom boom
      
      * add todo for pegasus issue
      
      * cover one word translation edge case
      
      * Cleanup
      
      * doc
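      A minimal usage sketch (not part of the commit) of the behaviour #6654 describes: `prepare_seq2seq_batch` returns `labels`, and `decoder_input_ids` are only built later, e.g. by shifting the labels inside the model. The checkpoint name and exact keyword defaults are illustrative and assume a transformers version from around this period.

      ```python
      from transformers import MarianMTModel, MarianTokenizer

      tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
      model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")

      # After this change the batch carries `labels`; no decoder_input_ids yet.
      batch = tok.prepare_seq2seq_batch(
          src_texts=["I love machine translation."],
          tgt_texts=["Ich liebe maschinelle Übersetzung."],
          return_tensors="pt",
      )
      print(sorted(batch.keys()))  # expected: attention_mask, input_ids, labels

      # decoder_input_ids are derived later, inside the model, from the labels.
      loss = model(**batch)[0]
      ```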
    • PL: --adafactor option (#6776) · fb78a90d
      Sam Shleifer authored
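      A hedged sketch of what a `--adafactor` switch like the one in #6776 typically toggles: using the Adafactor optimizer from `transformers.optimization` instead of AdamW. The argument wiring, hyperparameters, and helper name below are illustrative, not the script's actual code.

      ```python
      import argparse

      from transformers.optimization import Adafactor, AdamW

      parser = argparse.ArgumentParser()
      parser.add_argument("--adafactor", action="store_true")
      parser.add_argument("--learning_rate", type=float, default=3e-5)
      args = parser.parse_args(["--adafactor"])

      def build_optimizer(model, args):
          # Hypothetical helper: pick the optimizer based on the flag.
          if args.adafactor:
              # relative_step=False so the externally supplied learning rate is used.
              return Adafactor(
                  model.parameters(),
                  lr=args.learning_rate,
                  scale_parameter=False,
                  relative_step=False,
              )
          return AdamW(model.parameters(), lr=args.learning_rate)
      ```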
  2. 27 Aug, 2020 3 commits
  3. 26 Aug, 2020 2 commits
  4. 25 Aug, 2020 2 commits
    • Allow tests in examples to use cuda or fp16 if they are available (#5512) · 4db2fa77
      Joel Hanson authored
      * Allow tests in examples to use cuda or fp16 if they are available
      
      The tests in examples didn't use cuda or fp16 even when they were available.
      - The text classification example (`run_glue.py`) didn't use fp16 even when it was available, although
        the device was chosen based on availability (cuda/cpu).
      - The language-modeling example (`run_language_modeling.py`) had a `--no_cuda` argument,
        which made the test run without cuda. This example has an issue when running with fp16,
        so fp16 is not enabled (it raised an assertion error on perplexity because the value was higher).
      - cuda and fp16 are not enabled for the question-answering example (`run_squad.py`) because it produces a
        different f1 score.
      - The text-generation example (`run_generation.py`) will use cuda or fp16 whenever they are available.
      
      Resolves some of: #5057
      
      * Unwanted import of is_apex_available was removed
      
      * Made changes to the examples test file to pass --fp16 only if cuda and apex are available
      - run_glue.py: Removed the check for cuda and fp16.
      - run_generation.py: Removed the check for cuda and fp16 and removed the unwanted flag creation.
      
      * Incorrectly sorted imports fixed
      
      * The model needs to be converted to half precision
      
      * Formatted the single-line if condition statement across multiple lines
      
      * The torch_device also needs to be checked before running the tests on examples
      - The tests in examples that use cuda should also depend on the USE_CUDA flag,
        like the rest of the test suite. Even if we decide to set USE_CUDA to
        True by default, setting USE_CUDA to False should result in the examples not using CUDA.
      
      * Formatted some of the code in the test_examples file
      
      * The improper import of is_apex_available was sorted
      
      * Formatted the code to keep the style standards
      
      * Fixed the trailing comma at the end of a list that was causing a flake8 issue
      
      * Import sort was fixed
      
      * Removed the clean_test_dir function as it's not used right now
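      A minimal sketch (assumptions, not the PR's exact code) of the gating described above: `--fp16` is appended to an example's test args only when the device is cuda and apex is importable, and the device itself honours a USE_CUDA-style switch rather than `torch.cuda.is_available()` alone. The helper and test args are illustrative.

      ```python
      import os

      import torch

      def is_apex_available():
          # Illustrative stand-in for the helper the commit messages mention.
          try:
              import apex  # noqa: F401
              return True
          except ImportError:
              return False

      # Honour an explicit USE_CUDA switch rather than cuda availability alone.
      USE_CUDA = os.environ.get("USE_CUDA", "0") == "1"
      torch_device = "cuda" if (USE_CUDA and torch.cuda.is_available()) else "cpu"

      testargs = ["run_glue.py", "--model_name_or_path", "bert-base-cased", "--do_train"]
      if torch_device == "cuda" and is_apex_available():
          testargs.append("--fp16")
      ```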
    • [s2s] round bleu, rouge to 4 digits (#6704) · 0344428f
      Sam Shleifer authored
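      A toy illustration of the change named in the title: round the computed corpus metrics to 4 digits before reporting them. The score values below are made-up data.

      ```python
      scores = {"bleu": 27.123456789, "rouge2": 19.87654321}
      rounded = {k: round(v, 4) for k, v in scores.items()}
      print(rounded)  # {'bleu': 27.1235, 'rouge2': 19.8765}
      ```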
  5. 24 Aug, 2020 2 commits
  6. 18 Aug, 2020 1 commit
  7. 17 Aug, 2020 4 commits
  8. 16 Aug, 2020 1 commit
  9. 14 Aug, 2020 1 commit
  10. 13 Aug, 2020 3 commits
  11. 12 Aug, 2020 3 commits
  12. 11 Aug, 2020 9 commits
  13. 10 Aug, 2020 2 commits
  14. 09 Aug, 2020 1 commit
  15. 08 Aug, 2020 3 commits
  16. 07 Aug, 2020 1 commit