- 11 Aug, 2020 1 commit
Stas Bekman authored
* add pl_glue example test
* for now just test that it runs; next, validate results of eval or predict?
* complete the run_pl_glue test to validate the actual outcome
* worked on my machine, CI gets lower accuracy - trying higher epochs
* match run_pl.sh hparams
* more epochs?
* trying higher lr
* for now just test that the script runs to completion
* correct the comment
* if cuda is available, add --fp16 --gpus=1 to cover more bases
* style
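A minimal sketch of the CUDA-conditional flags this commit describes, appending `--fp16 --gpus=1` only when a GPU is present. The test scaffolding and the `main()` entry point are assumptions; only the conditional flags come from the commit:

```python
import sys
from unittest.mock import patch

import torch

def test_run_pl_glue(tmp_path):
    # Hypothetical harness; only the conditional --fp16/--gpus flags
    # reflect the commit above.
    argv = [
        "run_pl_glue.py",
        "--output_dir", str(tmp_path),
        "--do_train",
    ]
    if torch.cuda.is_available():
        # Cover more bases on GPU runners: mixed precision on a single device.
        argv += ["--fp16", "--gpus=1"]

    with patch.object(sys, "argv", argv):
        import run_pl_glue  # assumed to expose a main() entry point
        run_pl_glue.main()  # for now, just check it runs to completion
```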
- 09 Aug, 2020 1 commit
Sam Shleifer authored
- 07 Aug, 2020 2 commits
Stas Bekman authored

Stas Bekman authored
- 06 Aug, 2020 2 commits
Bhashithe Abeysinghe authored
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
xujiaze13 authored
- 29 Jul, 2020 1 commit
Julien Plu authored
* Fully rework training/prediction loops
* fix method name
* Fix variable name
* Fix property name
* Fix scope
* Fix method name
* Fix tuple index
* Fix tuple index
* Fix indentation
* Fix variable name
* fix eval before log
* Add drop_remainder for test dataset
* Fix step number + fix logging datetime
* fix eval loss value
* use global step instead of step + fix logging at step 0
* Fix logging datetime
* Fix global_step usage
* Fix breaking loop + logging datetime
* Fix step in prediction loop
* Fix step breaking
* Fix train/test loops
* Force TF at least 2.2 for the trainer
* Use assert_cardinality to facilitate the dataset size computation
* Log steps per epoch
* Make tfds compliant with TPU
* Make tfds compliant with TPU
* Use TF dataset enumerate instead of the Python one
* revert previous commit
* Fix data_dir
* Apply style
* rebase on master
* Address Sylvain's comments
* Address Sylvain's and Lysandre's comments
* Trigger CI
* Remove unused import
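Two of these items are concrete tf.data idioms: `assert_cardinality` (added in TF 2.2, hence the version requirement) makes the dataset size computable, and the dataset's own `enumerate` keeps the step counter in tensor space, which matters on TPU. A small sketch under those assumptions, with a stand-in dataset:

```python
import tensorflow as tf

# Stand-in dataset; in the trainer this would be the tokenized training set.
num_examples = 8
dataset = tf.data.Dataset.from_tensor_slices(tf.range(num_examples)).batch(4)

# assert_cardinality lets downstream code compute steps per epoch cheaply.
dataset = dataset.apply(tf.data.experimental.assert_cardinality(2))
steps_per_epoch = tf.data.experimental.cardinality(dataset).numpy()

# Use the TF-level enumerate so the step counter stays inside the graph.
for step, batch in dataset.enumerate():
    print(int(step), batch.numpy())
```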
- 01 Jul, 2020 1 commit
Sylvain Gugger authored
* Cleanup and unify Trainer/TFTrainer
* Forgot to adapt TFTrainingArgs
* In tf scripts n_gpu -> n_replicas
* Update src/transformers/training_args.py
* Address review comments
* Formatting
* Fix typo

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
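On the TF side, the `n_gpu -> n_replicas` rename reflects that the replica count comes from the distribution strategy rather than a raw GPU count. A sketch of the idea; the surrounding class is a hypothetical stand-in, while `num_replicas_in_sync` is the real TF attribute:

```python
import tensorflow as tf

class TFTrainingArgumentsSketch:
    """Hypothetical stand-in illustrating the renamed property."""

    def __init__(self):
        # MirroredStrategy covers the single-host, multi-GPU case.
        self.strategy = tf.distribute.MirroredStrategy()

    @property
    def n_replicas(self) -> int:
        # One replica per device participating in synchronous training.
        return self.strategy.num_replicas_in_sync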
- 28 Jun, 2020 1 commit
Sam Shleifer authored
* all save_pretrained methods mkdir if not os.path.exists
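The change itself is a one-line guard; a sketch of the pattern, with the rest of the method elided:

```python
import os

def save_pretrained(self, save_directory):
    # Create the target directory instead of failing when it is missing.
    if not os.path.exists(save_directory):
        os.makedirs(save_directory)
    # ... then write config and weights into save_directory as before ...
```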
- 24 Jun, 2020 1 commit
Sylvain Gugger authored
- 23 Jun, 2020 1 commit
Sam Shleifer authored
- 04 Jun, 2020 1 commit
Jason Phang authored
- 02 Jun, 2020 2 commits
Jin Young Sohn authored
* GLUE task cleanup
* Enable writing cache to cache_dir in case the dataset lives in a read-only filesystem.
* Differentiate match vs mismatch for MNLI metrics.
* Style
* Fix pytype
* Fix type
* Use cache_dir in MNLI mismatched eval dataset
* Small tweaks

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
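A sketch of the cache-placement logic this describes: prefer an explicit, writable `cache_dir` and fall back to the dataset's own directory only when none is given. The helper and argument names below are illustrative, not the actual API:

```python
import os
from typing import Optional

def cached_features_file(data_dir: str, split: str, cache_dir: Optional[str] = None) -> str:
    # Illustrative helper: data_dir may live on a read-only filesystem,
    # so only use it for the cache when no writable cache_dir is supplied.
    base = cache_dir if cache_dir is not None else data_dir
    return os.path.join(base, f"cached_{split}_features")
```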
Julien Chaumond authored
* Kill model archive maps
* Fixup
* Also kill model_archive_map for MaskedBertPreTrainedModel
* Unhook config_archive_map
* Tokenizers: align with model id changes
* make style && make quality
* Fix CI
- 27 May, 2020 1 commit
Lysandre Debut authored
* per_device instead of per_gpu; error thrown when argument unknown
* [docs] Restore examples.md symlink
* Correct absolute links so that the symlink to the doc works correctly
* Update src/transformers/hf_argparser.py
* Warning + reorder
* Docs
* Style
* not for squad

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
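The "error thrown when argument unknown" part can be sketched with stdlib argparse; the real change lives in src/transformers/hf_argparser.py, and this is just the shape of the behavior, with illustrative flag names:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--per_device_train_batch_size", type=int, default=8)

# parse_known_args keeps unrecognized flags instead of exiting immediately...
args, remaining = parser.parse_known_args(["--per_gpu_train_batch_size", "8"])
if remaining:
    # ...so deprecated or misspelled flags can be surfaced loudly.
    raise ValueError(f"Some specified arguments are not used: {remaining}")
```

Failing fast here is the point: the old `per_gpu_*` flags would otherwise be silently ignored after the rename.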
- 21 May, 2020 1 commit
Zhangyx authored
Adds predict stage for GLUE tasks, and generates result files that can be submitted to gluebenchmark.com (#4463)
* Adds predict stage for GLUE tasks, and generates result files that can be submitted to the gluebenchmark.com website.
* Use Split enum + always output the label name

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
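A sketch of what "always output the label name" amounts to when writing the per-task result file: a TSV with an index column and the label string, which is the submission format the GLUE leaderboard expects. The helper below is illustrative, not the script's actual code:

```python
import os

def write_glue_predictions(output_dir, task_name, predictions, label_list):
    # predictions: sequence of predicted class indices for the test split.
    path = os.path.join(output_dir, f"{task_name}.tsv")
    with open(path, "w") as writer:
        writer.write("index\tprediction\n")
        for index, pred in enumerate(predictions):
            # Always output the label name, not the raw class index.
            writer.write(f"{index}\t{label_list[pred]}\n")
```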
- 19 May, 2020 1 commit
Julien Chaumond authored
* Distributed eval: SequentialDistributedSampler + gather all results
* For consistency, only write to disk from world_master. Close https://github.com/huggingface/transformers/issues/4272
* Working distributed eval
* Hook into scripts
* Fix #3721 again
* TPU.mesh_reduce: stay in tensor space. Thanks @jysohn23
* Just a small comment
* whitespace
* torch.hub: pip install packaging
* Add test scenarios
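The gather step can be sketched with plain `torch.distributed` primitives: each rank evaluates a contiguous shard (the SequentialDistributedSampler part), the shards are all-gathered, and only the world master writes results to disk. This assumes an initialized process group and equal shard sizes; the real code pads and truncates:

```python
import torch
import torch.distributed as dist

def gather_eval_logits(local_logits: torch.Tensor) -> torch.Tensor:
    # Assumes dist.init_process_group() has already run and every rank's
    # shard has the same length (real code pads, gathers, then truncates).
    buffers = [torch.empty_like(local_logits) for _ in range(dist.get_world_size())]
    dist.all_gather(buffers, local_logits)
    # Shards came from a sequential (not shuffled) sampler, so simple
    # concatenation restores the original dataset order.
    return torch.cat(buffers, dim=0)
```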
- 08 May, 2020 1 commit
Julien Chaumond authored
* [TPU] Doc, fix xla_spawn.py, only preprocess dataset once
* Update examples/README.md
* [xla_spawn] Add `_mp_fn` to other Trainer scripts
* [TPU] Fix: eval dataloader was None
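`_mp_fn` is the small hook xla_spawn.py looks for when it forks one process per TPU core via torch_xla's `xmp.spawn`; in each example script it reduces to a shim around the existing `main()` (assumed to be defined in the same script):

```python
def _mp_fn(index):
    # Entry point for xla_spawn.py (TPU multiprocessing): `index` is the
    # local process index; arguments still come from sys.argv via main().
    main()
```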
- 07 May, 2020 2 commits
Julien Chaumond authored

Julien Chaumond authored
* Created using Colaboratory
* [examples] reorganize files
* remove run_tpu_glue.py as superseded by TPU support in Trainer
* Bugfix: int, not tuple
* move files around