1. 18 Nov, 2020 21 commits
  2. 17 Nov, 2020 14 commits
  3. 16 Nov, 2020 5 commits
    • Fix mixed precision issue for GPT2 (#8572) · 90150733
      Julien Plu authored
      * Fix mixed precision issue for GPT2
      
      * Forgot one cast
      
      * oops
      
      * Forgotten casts
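The commit above adds missing casts so float32 constants are not mixed with float16 activations. A minimal stand-in sketch of that pattern (the `Tensor`, `cast`, and `scaled_scores` names are illustrative, not the actual TF GPT2 code):

```python
import math

# In mixed precision, shape-derived constants (e.g. the attention scale)
# default to float32 and must be cast to the compute dtype before being
# combined with half-precision activations.

class Tensor:
    """Toy tensor carrying data plus a dtype tag."""
    def __init__(self, data, dtype):
        self.data = data
        self.dtype = dtype

def cast(t, dtype):
    return Tensor(t.data, dtype)

def scaled_scores(scores, head_dim):
    # The scale is created in float32 (as shape-derived constants are),
    # then cast to the scores' dtype so float16 * float32 never happens.
    scale = Tensor(1.0 / math.sqrt(head_dim), "float32")
    scale = cast(scale, scores.dtype)
    return Tensor([s * scale.data for s in scores.data], scores.dtype)

scores = Tensor([4.0, 8.0], "float16")
out = scaled_scores(scores, 64)
print(out.dtype)  # float16: dtype preserved after scaling
```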
    • Switch `return_dict` to `True` by default. (#8530) · 1073a2bd
      Sylvain Gugger authored
      * Use the CI to identify failing tests
      
      * Remove from all examples and tests
      
      * More default switch
      
      * Fixes
      
      * More test fixes
      
      * More fixes
      
      * Last fixes hopefully
      
      * Run on the real suite
      
      * Fix slow tests
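With `return_dict=True` as the default, model outputs become objects with named fields instead of bare tuples, while positional indexing keeps working for older code. A minimal stand-in for that output type (this toy `ModelOutput` is illustrative, not the library's actual class):

```python
from collections import OrderedDict

class ModelOutput(OrderedDict):
    """Dict-like output that also supports attribute and tuple-style access."""
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __getitem__(self, key):
        # Integer keys fall back to positional (tuple-style) access.
        if isinstance(key, int):
            return list(self.values())[key]
        return super().__getitem__(key)

out = ModelOutput(loss=0.25, logits=[[1.0, 2.0]])
print(out.loss)  # named access: 0.25
print(out[1])    # tuple-style access still works: [[1.0, 2.0]]
```

This dual interface is why the default could be flipped without breaking code that unpacked outputs positionally.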
    • Update version to v4.0.0-dev (#8568) · 0d0a0785
      Sylvain Gugger authored
    • Fix GPT2DoubleHeadsModel to work with model.generate() (#6601) · afb50c66
      LSinev authored
      * Fix passing token_type_ids during GPT2DoubleHeadsModel.generate() if used, and for GPT2LMHeadModel too
      
      * Update tests to check token_type_ids usage in GPT2 models
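The idea behind this fix, sketched with plain lists: when generation feeds only the last token (cached past states), `token_type_ids` must be sliced alongside `input_ids` rather than silently dropped. The function below is an illustrative stand-in, not the actual model method:

```python
def prepare_inputs_for_generation(input_ids, token_type_ids=None, past=None):
    """Build the inputs for one generation step."""
    if past is not None:
        # With cached past states only the last token is fed to the model,
        # so its token_type_id must be carried along with it.
        input_ids = input_ids[-1:]
        if token_type_ids is not None:
            token_type_ids = token_type_ids[-1:]
    return {"input_ids": input_ids, "token_type_ids": token_type_ids}

# First step: full prompt with segment ids; later steps: last token only.
step1 = prepare_inputs_for_generation([10, 11, 12], [0, 0, 1])
step2 = prepare_inputs_for_generation([10, 11, 12, 13], [0, 0, 1, 1], past="cache")
print(step1["token_type_ids"])  # [0, 0, 1]
print(step2["token_type_ids"])  # [1]
```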
    • Adding the prepare_seq2seq_batch function to ProphetNet (#8515) · 04d8136b
      Yusuke Mori authored
      * Simply insert T5Tokenizer's prepare_seq2seq_batch
      
      * Update/add some imports
      
      * Fix RuntimeError caused by '.view'
      
      * Move .view-related error avoidance from seq2seq_trainer into prophetnet
      
      * Update test_tokenization_prophetnet.py
      
      * Format the test code with black
      
      * Re-format the test code
      
      * Update test_tokenization_prophetnet.py
      
      * Add importing require_torch in the test code
      
      * Add importing BatchEncoding in the test code
      
      * Re-format the test code on Colab
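The `prepare_seq2seq_batch` pattern this commit adds encodes source texts as model inputs and target texts as labels in one batch dict. A self-contained sketch of that shape, using a toy whitespace "tokenizer" and vocabulary that are assumptions, not ProphetNet's real vocabulary:

```python
VOCAB = {"<pad>": 0, "hello": 1, "world": 2, "bonjour": 3, "monde": 4}

def encode(text, max_length):
    """Toy encoder: whitespace-split, map to ids, truncate, then pad."""
    ids = [VOCAB[tok] for tok in text.split()][:max_length]
    mask = [1] * len(ids)
    while len(ids) < max_length:
        ids.append(VOCAB["<pad>"])
        mask.append(0)
    return ids, mask

def prepare_seq2seq_batch(src_texts, tgt_texts, max_length=4):
    """Encode sources as inputs and targets as labels in one batch."""
    batch = {"input_ids": [], "attention_mask": [], "labels": []}
    for src, tgt in zip(src_texts, tgt_texts):
        ids, mask = encode(src, max_length)
        labels, _ = encode(tgt, max_length)
        batch["input_ids"].append(ids)
        batch["attention_mask"].append(mask)
        batch["labels"].append(labels)
    return batch

batch = prepare_seq2seq_batch(["hello world"], ["bonjour monde"])
print(batch["input_ids"])  # [[1, 2, 0, 0]]
print(batch["labels"])     # [[3, 4, 0, 0]]
```

Bundling inputs and labels this way is what lets a seq2seq trainer consume one dict per batch instead of tokenizing source and target separately.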