1. 26 Aug, 2020 1 commit
  2. 17 Aug, 2020 1 commit
  3. 11 Aug, 2020 2 commits
    • Stas Bekman's avatar
      add pl_glue example test (#6034) · f6c0680d
      Stas Bekman authored
      * add pl_glue example test
      
      * for now just test that it runs, next validate results of eval or predict?
      
      * complete the run_pl_glue test to validate the actual outcome
      
      * works on my machine, but CI gets lower accuracy - trying more epochs
      
      * match run_pl.sh hparams
      
      * more epochs?
      
      * trying higher lr
      
      * for now just test that the script runs to completion
      
      * correct the comment
      
      * if CUDA is available, add --fp16 --gpus=1 to cover more bases (sketched after this commit)
      
      * style
      f6c0680d
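      A minimal sketch (not this commit's actual test code) of the CUDA-conditional flags mentioned above; the base arguments and task are illustrative:

      ```python
      import torch


      def build_pl_glue_args(output_dir):
          # Illustrative base arguments for the run_pl_glue example script.
          args = [
              "--model_name_or_path=bert-base-cased",
              "--task=mrpc",
              "--do_train",
              "--do_predict",
              f"--output_dir={output_dir}",
              "--num_train_epochs=1",
          ]
          # Only exercise mixed precision and the single-GPU path when CUDA is
          # available, so the same test still passes on CPU-only CI machines.
          if torch.cuda.is_available():
              args.extend(["--fp16", "--gpus=1"])
          return args
      ```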
  4. 09 Aug, 2020 1 commit
  5. 07 Aug, 2020 1 commit
  6. 06 Aug, 2020 2 commits
  7. 23 Jun, 2020 1 commit
  8. 07 May, 2020 1 commit
    • Julien Chaumond's avatar
      BIG Reorganize examples (#4213) · 0ae96ff8
      Julien Chaumond authored
      * Created using Colaboratory
      
      * [examples] reorganize files
      
      * remove run_tpu_glue.py as superseded by TPU support in Trainer
      
      * Bugfix: int, not tuple
      
      * move files around
      0ae96ff8
  9. 06 May, 2020 1 commit
  10. 02 May, 2020 1 commit
  11. 22 Apr, 2020 1 commit
    • Julien Chaumond's avatar
      Trainer (#3800) · dd9d483d
      Julien Chaumond authored
      * doc
      
      * [tests] Add sample files for a regression task
      
      * [HUGE] Trainer
      
      * Feedback from @sshleifer
      
      * Feedback from @thomwolf + logging tweak
      
      * [file_utils] when downloading concurrently, get_from_cache will use the cached file for subsequent processes
      
      * [glue] Use default max_seq_length of 128 like before
      
      * [glue] move DataTrainingArguments around
      
      * [ner] Change interface of InputExample, and align run_{tf,pl}
      
      * Re-align the pl scripts a little bit
      
      * ner
      
      * [ner] Add integration test
      
      * Fix language_modeling with API tweak
      
      * [ci] Tweak loss target
      
      * Don't break console output
      
      * amp.initialize: the model must be on the right device beforehand (sketched after this commit)
      
      * [multiple-choice] update for Trainer
      
      * Re-align to 827d6d6e
      dd9d483d
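      A minimal sketch of the device-ordering constraint noted above for NVIDIA apex mixed precision; the model, optimizer, and opt level here are illustrative:

      ```python
      import torch
      from torch import nn
      from apex import amp  # NVIDIA apex, an optional dependency of the examples

      device = torch.device("cuda")

      model = nn.Linear(128, 2)  # stand-in for a real transformer model
      model.to(device)           # must happen *before* amp.initialize

      optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

      # amp.initialize patches the model and optimizer for mixed precision; if the
      # model were still on CPU here, the patched parameters would end up on the
      # wrong device.
      model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
      ```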
  12. 10 Apr, 2020 1 commit
  13. 01 Apr, 2020 1 commit
  14. 30 Mar, 2020 1 commit
    • Ethan Perez's avatar
      [Bug fix] Using loaded checkpoint with --do_predict (instead of… (#3437) · e5c393dc
      Ethan Perez authored
      * Using loaded checkpoint with --do_predict
      
      Without this fix, I get near-random validation performance for a trained model, and the validation performance differs between runs. I think this happens because the `model` variable isn't set to the loaded checkpoint, so evaluation uses a randomly initialized model. Looking at the model activations, they differ each time I run evaluation (but they don't with this fix). (A minimal reload sketch follows after this commit.)
      
      * Update checkpoint loading
      
      * Fixing model loading
      e5c393dc
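      A minimal sketch of the fix described in this commit, assuming the PyTorch Lightning setup the examples use; the function name, checkpoint glob, and `model_class` argument are illustrative:

      ```python
      import glob
      import os

      from pytorch_lightning import Trainer


      def predict_from_checkpoint(output_dir, model_class):
          """Reload the newest saved checkpoint before running prediction.

          Without re-assigning ``model`` from the checkpoint, prediction runs on
          randomly initialized weights and validation scores look near-random.
          ``model_class`` is whatever LightningModule subclass the example trains.
          """
          checkpoints = sorted(glob.glob(os.path.join(output_dir, "*.ckpt")))
          model = model_class.load_from_checkpoint(checkpoints[-1])
          trainer = Trainer()
          return trainer.test(model)
      ```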
  15. 17 Mar, 2020 1 commit
    • Nathan Raw's avatar
      [WIP] Lightning glue example (#3290) · 930c9412
      Nathan Raw authored
      *  Alter base pl transformer to use automodels
      
      * 🐛 Add batch size env variable to function call
      
      * 💄 Apply black code style from Makefile
      
      * 🚚 Move lightning base out of ner directory
      
      *  Add lightning glue example
      
      * 💄 self
      
      * move _feature_file to base class
      
      *  Move eval logging to a custom callback (sketched after this commit)
      
      * 💄 Apply black code style
      
      * 🐛 Add parent to pythonpath, remove copy command
      
      * 🐛 Add missing max_length kwarg
      930c9412
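      A minimal sketch of the kind of custom callback the eval logging was moved into; the metric handling shown here is illustrative rather than the commit's exact code:

      ```python
      import logging

      from pytorch_lightning.callbacks import Callback

      logger = logging.getLogger(__name__)


      class LoggingCallback(Callback):
          """Log validation/test metrics from a callback instead of the module."""

          def on_validation_end(self, trainer, pl_module):
              logger.info("***** Validation results *****")
              for key, value in sorted(trainer.callback_metrics.items()):
                  logger.info("%s = %s", key, str(value))

          def on_test_end(self, trainer, pl_module):
              logger.info("***** Test results *****")
              for key, value in sorted(trainer.callback_metrics.items()):
                  logger.info("%s = %s", key, str(value))
      ```

      The callback would then be passed to the Lightning `Trainer` via its `callbacks` argument, keeping the LightningModule itself free of logging code.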