- 26 Aug, 2020 1 commit
  Lysandre authored
- 17 Aug, 2020 1 commit
  Sam Shleifer authored
- 11 Aug, 2020 2 commits
  Stas Bekman authored
  Stas Bekman authored
    * add pl_glue example test
    * for now just test that it runs; next, validate the results of eval or predict?
    * complete the run_pl_glue test to validate the actual outcome
    * worked on my machine, but CI gets lower accuracy - trying more epochs
    * match run_pl.sh hparams
    * more epochs?
    * trying a higher lr
    * for now just test that the script runs to completion
    * correct the comment
    * if CUDA is available, add --fp16 --gpus=1 to cover more bases (sketched below)
    * style
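The CUDA bullet conditionally widens what the test exercises. A minimal sketch of that pattern, assuming a torch-based availability check (the base argument list is invented for illustration, not taken from the test):

```python
import torch

# Hypothetical base CLI arguments for the run_pl_glue test.
testargs = ["run_pl_glue.py", "--do_train", "--do_predict"]

# When a GPU is present, also exercise mixed precision on one device,
# as the commit message describes.
if torch.cuda.is_available():
    testargs += ["--fp16", "--gpus=1"]
```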
- 09 Aug, 2020 1 commit
  Sam Shleifer authored
- 07 Aug, 2020 1 commit
  Stas Bekman authored
- 06 Aug, 2020 2 commits
  Bhashithe Abeysinghe authored
  Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
  xujiaze13 authored
- 23 Jun, 2020 1 commit
  Sam Shleifer authored
- 07 May, 2020 1 commit
  Julien Chaumond authored
    * Created using Colaboratory
    * [examples] reorganize files
    * remove run_tpu_glue.py as superseded by TPU support in Trainer
    * Bugfix: int, not tuple
    * move files around
- 06 May, 2020 1 commit
  Simone Primarosa authored
- 02 May, 2020 1 commit
  William Falcon authored
- 22 Apr, 2020 1 commit
  Julien Chaumond authored
    * doc
    * [tests] Add sample files for a regression task
    * [HUGE] Trainer (see the sketch after this list)
    * Feedback from @sshleifer
    * Feedback from @thomwolf + logging tweak
    * [file_utils] when downloading concurrently, get_from_cache will use the cached file for subsequent processes
    * [glue] Use the default max_seq_length of 128, as before
    * [glue] move DataTrainingArguments around
    * [ner] Change the interface of InputExample and align run_{tf,pl}
    * Re-align the pl scripts a little
    * ner
    * [ner] Add integration test
    * Fix language_modeling with an API tweak
    * [ci] Tweak loss target
    * Don't break console output
    * amp.initialize: the model must be on the right device first
    * [multiple-choice] update for Trainer
    * Re-align to 827d6d6e
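This commit introduced the Trainer abstraction. A minimal sketch of how it is typically wired up for GLUE-style fine-tuning, assuming current argument names (the checkpoint and the dataset variables are placeholders, not code from the commit):

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Illustrative checkpoint; any sequence-classification model works.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

training_args = TrainingArguments(
    output_dir="./glue_output",  # where checkpoints and logs go
    num_train_epochs=3,
    per_device_train_batch_size=32,
)

# train_dataset / eval_dataset stand in for already-tokenized GLUE splits.
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
trainer.evaluate()
```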
- 10 Apr, 2020 1 commit
  Julien Chaumond authored
    * Big cleanup of `glue_convert_examples_to_features`
    * Use batch_encode_plus (sketched below)
    * Cleaner wrapping of glue_convert_examples_to_features for TF @lysandrejik
    * Cleanup syntax, thanks to @mfuntowicz
    * Raise an explicit error in case of user error
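The cleanup moves GLUE feature conversion onto the tokenizer's batch API. A rough sketch of the idea (max_length=128 mirrors the GLUE default mentioned above; the sentence pairs are made up):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Encode a whole batch of (sentence_a, sentence_b) pairs in one call
# instead of looping over examples one by one.
encodings = tokenizer.batch_encode_plus(
    [("The cat sat.", "A cat was sitting."),
     ("It rained.", "The weather was dry.")],
    max_length=128,
    padding="max_length",
    truncation=True,
)
print(encodings["input_ids"][0][:10])
```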
- 01 Apr, 2020 1 commit
  Julien Chaumond authored
    * Start cleaning examples
    * Fixup
- 30 Mar, 2020 1 commit
  Ethan Perez authored
    * Using loaded checkpoint with --do_predict. Without this fix, I'm getting near-random validation performance for a trained model, and the validation performance differs per validation run. I think this happens because the `model` variable isn't set to the loaded checkpoint, so a randomly initialized model is used instead. Looking at the model activations, they differ on each evaluation run (but they don't with this fix). See the sketch below.
    * Update checkpoint loading
    * Fixing model loading
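The gist of the fix is to point `model` at the trained weights before predicting. A sketch of the PyTorch Lightning pattern involved (the module class and checkpoint path are placeholders, not the actual example code):

```python
# Hypothetical LightningModule from the example scripts; every
# LightningModule subclass exposes load_from_checkpoint as a classmethod.
from lightning_base import BaseTransformer

# Restore the trained weights instead of predicting with the randomly
# initialized model the bug left in place.
model = BaseTransformer.load_from_checkpoint("checkpoints/epoch=2.ckpt")
model.eval()
```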
- 17 Mar, 2020 1 commit
  Nathan Raw authored
    * ✨ Alter base pl transformer to use automodels (sketched below)
    * 🐛 Add batch size env variable to function call
    * 💄 Apply black code style from Makefile
    * 🚚 Move lightning base out of ner directory
    * ✨ Add lightning glue example
    * 💄 self
    * move _feature_file to base class
    * ✨ Move eval logging to custom callback
    * 💄 Apply black code style
    * 🐛 Add parent to pythonpath, remove copy command
    * 🐛 Add missing max_length kwarg
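The first bullet switches the Lightning base transformer to the Auto classes so one module can wrap any architecture, and the second reads the batch size from the environment. A rough sketch of both (the checkpoint name and the env variable name are assumptions):

```python
import os

from transformers import (
    AutoConfig,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

# Any checkpoint name works; the Auto classes resolve the architecture.
model_name = "bert-base-uncased"

config = AutoConfig.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, config=config)

# Read the batch size from an environment variable (name is an assumption).
batch_size = int(os.environ.get("TRAIN_BATCH_SIZE", "32"))
```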