"...gpu/git@developer.sourcefind.cn:gaoqiong/migraphx.git" did not exist on "b6a9b597f05c0214368797bd922cc7136785fccb"
  1. 28 Oct, 2020 1 commit
  2. 24 Aug, 2020 1 commit
  3. 17 Aug, 2020 1 commit
  4. 13 Aug, 2020 1 commit
  5. 09 Aug, 2020 1 commit
  6. 15 Jun, 2020 1 commit
  7. 07 May, 2020 1 commit
    • BIG Reorganize examples (#4213) · 0ae96ff8
      Julien Chaumond authored
      * Created using Colaboratory
      
      * [examples] reorganize files
      
      * remove run_tpu_glue.py as superseded by TPU support in Trainer
      
      * Bugfix: int, not tuple
      
      * move files around
  8. 06 May, 2020 1 commit
  9. 02 May, 2020 1 commit
  10. 22 Apr, 2020 1 commit
    • Trainer (#3800) · dd9d483d
      Julien Chaumond authored
      * doc
      
      * [tests] Add sample files for a regression task
      
      * [HUGE] Trainer
      
      * Feedback from @sshleifer
      
      * Feedback from @thomwolf + logging tweak
      
      * [file_utils] when downloading concurrently, get_from_cache will use the cached file for subsequent processes
      
      * [glue] Use default max_seq_length of 128 like before
      
      * [glue] move DataTrainingArguments around
      
      * [ner] Change interface of InputExample, and align run_{tf,pl}
      
      * Re-align the pl scripts a little bit
      
      * ner
      
      * [ner] Add integration test
      
      * Fix language_modeling with API tweak
      
      * [ci] Tweak loss target
      
      * Don't break console output
      
      * amp.initialize: model must be on right device before
      
      * [multiple-choice] update for Trainer
      
      * Re-align to 827d6d6e
  11. 01 Apr, 2020 1 commit
  12. 30 Mar, 2020 1 commit
    • [Bug fix] Using loaded checkpoint with --do_predict (instead of… (#3437) · e5c393dc
      Ethan Perez authored
      * Using loaded checkpoint with --do_predict
      
      Without this fix, I'm getting near-random validation performance for a trained model, and the validation performance differs per validation run. I think this happens since the `model` variable isn't set with the loaded checkpoint, so I'm using a randomly initialized model. Looking at the model activations, they differ each time I run evaluation (but they don't with this fix).
      
      * Update checkpoint loading
      
      * Fixing model loading
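The bug described above can be illustrated without any framework code. In this minimal, hypothetical sketch (`TinyModel` and its methods are stand-ins, not the actual transformers API), the "buggy" flow builds a fresh model and never applies the checkpoint, so prediction runs on random initial weights; the "fixed" flow loads the checkpoint first:

```python
import random

class TinyModel:
    """Toy stand-in for a model with one scalar weight."""
    def __init__(self, seed=None):
        rng = random.Random(seed)
        self.weight = rng.random()        # "random initialization"

    def load_state_dict(self, state):
        self.weight = state["weight"]     # restore trained weights

    def predict(self, x):
        return x * self.weight

checkpoint = {"weight": 0.5}              # pretend training saved this

# Buggy flow: the model is re-created for --do_predict but the
# checkpoint is never loaded, so outputs differ on every run.
buggy = TinyModel()

# Fixed flow (what the commit does): load the checkpoint before predicting.
fixed = TinyModel()
fixed.load_state_dict(checkpoint)
print(fixed.predict(10))  # → 5.0
```

This mirrors the symptom in the commit message: activations from the buggy path change between evaluation runs, while the fixed path is deterministic because it always starts from the saved weights.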
  13. 17 Mar, 2020 1 commit
    • [WIP] Lightning glue example (#3290) · 930c9412
      Nathan Raw authored
      *  Alter base pl transformer to use automodels
      
      * 🐛 Add batch size env variable to function call
      
      * 💄 Apply black code style from Makefile
      
      * 🚚 Move lightning base out of ner directory
      
      *  Add lightning glue example
      
      * 💄 self
      
      * move _feature_file to base class
      
      *  Move eval logging to custom callback
      
      * 💄 Apply black code style
      
      * 🐛 Add parent to pythonpath, remove copy command
      
      * 🐛 Add missing max_length kwarg
  14. 10 Mar, 2020 1 commit
    • NER - pl example (#3180) · 5ca356a4
      Shubham Agarwal authored
      * 1. seqeval required by ner pl example. install from examples/requirements. 2. unrecognized arguments: save_steps
      
      * pl checkpoint callback filenotfound error: make directory and pass
      
      * #3159 pl checkpoint path difference
      
      * 1. Updated Readme for pl 2. pl script now also correctly displays logs 3. pass gpu ids compared to number of gpus
      
      * Updated results in readme
      
      * 1. updated readme 2. removing deprecated pl methods 3. finalizing scripts
      
      * comment length check
      
      * using deprecated validation_end for stable results
      
      * style related changes
  15. 27 Feb, 2020 1 commit
  16. 20 Feb, 2020 1 commit