- 07 Sep, 2020 1 commit
Lysandre authored
- 02 Jun, 2020 1 commit
Jin Young Sohn authored
* GLUE task cleanup
* Enable writing the cache to cache_dir, in case the dataset lives on a read-only filesystem
* Differentiate match vs. mismatch for the MNLI metrics
* Style
* Fix pytype
* Fix type
* Use cache_dir in the MNLI mismatched eval dataset
* Small tweaks

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
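A hedged sketch of the match/mismatch split and cache_dir plumbing described above, assuming the `trainer`, `tokenizer`, and `data_args` objects from the run_glue.py setup of that era (the exact attribute carrying cache_dir is an assumption):

```python
import dataclasses

from transformers import GlueDataset

# Evaluate MNLI on both dev sets and keep their metrics separate;
# cache_dir lets cached features be written somewhere writable even
# when the dataset directory itself is read-only.
eval_datasets = [GlueDataset(data_args, tokenizer=tokenizer, mode="dev", cache_dir=data_args.cache_dir)]
if data_args.task_name == "mnli":
    mnli_mm_args = dataclasses.replace(data_args, task_name="mnli-mm")  # mismatched dev set
    eval_datasets.append(GlueDataset(mnli_mm_args, tokenizer=tokenizer, mode="dev", cache_dir=data_args.cache_dir))

for eval_dataset in eval_datasets:
    metrics = trainer.evaluate(eval_dataset=eval_dataset)  # e.g. mnli acc vs. mnli-mm acc
```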
- 21 May, 2020 1 commit
Zhangyx authored
Adds a predict stage for GLUE tasks and generates result files that can be submitted to gluebenchmark.com (#4463)
* Add a predict stage for GLUE tasks, generating result files that can be submitted to the gluebenchmark.com website
* Use the Split enum + always output the label name

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
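A minimal sketch of that predict stage, assuming `trainer` and a GlueDataset built in test mode; gluebenchmark.com expects a TSV per task with index and prediction columns (the file name here is illustrative):

```python
import numpy as np

predictions = trainer.predict(test_dataset=test_dataset).predictions
predicted_ids = np.argmax(predictions, axis=1)  # classification; STS-B would keep the raw floats

with open("test_results.tsv", "w") as writer:
    writer.write("index\tprediction\n")
    for index, item in enumerate(predicted_ids):
        # Submit the label name, not the integer id.
        writer.write(f"{index}\t{test_dataset.get_labels()[item]}\n")
```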
- 19 May, 2020 1 commit
Julien Chaumond authored
* Distributed eval: SequentialDistributedSampler + gather all results
* For consistency, only write to disk from world_master. Closes https://github.com/huggingface/transformers/issues/4272
* Working distributed eval
* Hook into scripts
* Fix #3721 again
* TPU.mesh_reduce: stay in tensor space. Thanks @jysohn23
* Just a small comment
* Whitespace
* torch.hub: pip install packaging
* Add test scenarios
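A sketch of the gather step behind "SequentialDistributedSampler + gather all results": each process evaluates a contiguous, equally padded shard, then the shards are all-gathered and the padding is truncated.

```python
import torch
import torch.distributed as dist

def distributed_concat(tensor: torch.Tensor, num_total_examples: int) -> torch.Tensor:
    # Shards were padded by the sampler to the same length,
    # so all_gather is well-defined across processes.
    output_tensors = [tensor.clone() for _ in range(dist.get_world_size())]
    dist.all_gather(output_tensors, tensor)
    concat = torch.cat(output_tensors, dim=0)
    # Drop the duplicated tail examples added as padding.
    return concat[:num_total_examples]
```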
- 08 May, 2020 1 commit
Julien Chaumond authored
* [TPU] Doc, fix xla_spawn.py, only preprocess the dataset once
* Update examples/README.md
* [xla_spawn] Add `_mp_fn` to the other Trainer scripts
* [TPU] Fix: eval dataloader was None
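The `_mp_fn` hook added here is tiny; a sketch, assuming the script defines a `main()` like the other example scripts:

```python
def _mp_fn(index):
    # Entry point for examples/xla_spawn.py: torch_xla spawns one
    # process per TPU core and calls this with the process index.
    main()
```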
- 07 May, 2020 2 commits
Julien Chaumond authored
* Created using Colaboratory
* [examples] Reorganize files
* Remove run_tpu_glue.py, superseded by TPU support in Trainer
* Bugfix: int, not tuple
* Move files around
Lysandre Debut authored
* WIP
* WIP
* A last WIP
* Better logging when using TPUs
* Correct argument name
* Tests
* Fix
* Metrics in evaluation
* Update src/transformers/training_args.py
* [tpu] Use a launcher script instead
* [tpu] Lots of tweaks
* Fix formatting

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
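A condensed sketch of what a TPU launcher script along these lines does (assumes torch_xla is installed; argument parsing trimmed, function name illustrative):

```python
import importlib.util
import sys

import torch_xla.distributed.xla_multiprocessing as xmp

def spawn_trainer_script(script_path: str, num_cores: int, script_args: list) -> None:
    # Import the training script as a module so its _mp_fn can be spawned per core.
    spec = importlib.util.spec_from_file_location("train_script", script_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    # Forward the remaining CLI arguments to the script's own parser.
    sys.argv = [script_path] + script_args
    xmp.spawn(module._mp_fn, args=(), nprocs=num_cores)
```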
- 01 May, 2020 1 commit
Julien Chaumond authored
[qol] example scripts: parse args from .args file or JSON
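A sketch of this quality-of-life pattern using HfArgumentParser's JSON path; `parse_json_file` and `parse_args_into_dataclasses` are real methods, and the single-dataclass wiring is simplified:

```python
import os
import sys

from transformers import HfArgumentParser, TrainingArguments

parser = HfArgumentParser(TrainingArguments)

if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
    # A single JSON file can hold all arguments instead of a long command line.
    (training_args,) = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
    (training_args,) = parser.parse_args_into_dataclasses()
```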
- 24 Apr, 2020 1 commit
Julien Chaumond authored
Close #3921
- 22 Apr, 2020 1 commit
Julien Chaumond authored
* Doc
* [tests] Add sample files for a regression task
* [HUGE] Trainer
* Feedback from @sshleifer
* Feedback from @thomwolf + logging tweak
* [file_utils] When downloading concurrently, get_from_cache will use the cached file for subsequent processes
* [glue] Use the default max_seq_length of 128, as before
* [glue] Move DataTrainingArguments around
* [ner] Change the interface of InputExample, and align run_{tf,pl}
* Re-align the pl scripts a little bit
* ner
* [ner] Add an integration test
* Fix language_modeling with an API tweak
* [ci] Tweak the loss target
* Don't break console output
* amp.initialize: the model must be on the right device beforehand
* [multiple-choice] Update for Trainer
* Re-align to 827d6d6e
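The Trainer wiring this brings to the GLUE example, as a hedged sketch (assumes `model`, the datasets, and `compute_metrics` are already built; values are illustrative):

```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(output_dir="./results", num_train_epochs=3)

trainer = Trainer(
    model=model,                      # a *ForSequenceClassification model
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,  # maps an EvalPrediction to a dict of metrics
)
trainer.train()
results = trainer.evaluate()
```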
- 16 Apr, 2020 1 commit
Davide Fiocco authored
- 10 Apr, 2020 2 commits
Julien Chaumond authored
* [examples] Generate argparsers from type hints on dataclasses
* [HfArgumentParser] Way simpler API
* Restore run_language_modeling.py for an easier diff
* [HfArgumentParser] Final tweaks from code review
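A minimal sketch of the dataclass-to-argparse idea (the field name is illustrative):

```python
from dataclasses import dataclass, field

from transformers import HfArgumentParser, TrainingArguments

@dataclass
class ModelArguments:
    # Every typed field becomes a CLI flag, e.g. --model_name_or_path.
    model_name_or_path: str = field(metadata={"help": "Model identifier or local path"})

parser = HfArgumentParser((ModelArguments, TrainingArguments))
model_args, training_args = parser.parse_args_into_dataclasses()
```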
Julien Chaumond authored
* Big cleanup of `glue_convert_examples_to_features`
* Use batch_encode_plus
* Cleaner wrapping of glue_convert_examples_to_features for TF @lysandrejik
* Clean up syntax, thanks to @mfuntowicz
* Raise an explicit error in case of user error
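The batch_encode_plus call at the heart of that cleanup looked roughly like this (API of the era, since superseded by padding= and tokenizer.__call__; `tokenizer`, `examples`, and `max_length` are assumed from the surrounding function):

```python
# Encode all GLUE examples in one call instead of a per-example loop.
batch_encoding = tokenizer.batch_encode_plus(
    [(example.text_a, example.text_b) for example in examples],
    max_length=max_length,
    pad_to_max_length=True,
)
input_ids = batch_encoding["input_ids"]
```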
- 01 Apr, 2020 1 commit
Julien Chaumond authored
* Start cleaning examples
* Fixup
- 24 Mar, 2020 1 commit
Julien Chaumond authored
- 19 Mar, 2020 1 commit
mataney authored
* Fix a bug where, with short epochs and a large gradient_accumulation_steps, we never train
* Black formatting
* No need to change these files
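A sketch of the failure mode and the shape of the fix, in the usual accumulation loop (the exact condition in the merged patch may differ):

```python
for step, batch in enumerate(epoch_iterator):
    loss = model(**batch)[0] / args.gradient_accumulation_steps
    loss.backward()
    # Bug: when len(epoch_iterator) < gradient_accumulation_steps, the modulo
    # condition never fires, so optimizer.step() never runs and we never train.
    # Fix: also step on the last batch of a too-short epoch.
    if (step + 1) % args.gradient_accumulation_steps == 0 or (
        len(epoch_iterator) <= args.gradient_accumulation_steps
        and (step + 1) == len(epoch_iterator)
    ):
        optimizer.step()
        scheduler.step()
        model.zero_grad()
```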
- 03 Mar, 2020 1 commit
Davide Fiocco authored
That's the same fix applied in https://github.com/huggingface/transformers/issues/2258, but for the GLUE example.
- 02 Mar, 2020 1 commit
Victor SANH authored
* Fix the n_gpu count when the no_cuda flag is activated
* Someone was left behind
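The fix amounts to consulting the flag before counting devices; a sketch:

```python
import torch

# Before: n_gpu was torch.cuda.device_count() even when --no_cuda was passed.
if args.no_cuda:
    device, n_gpu = torch.device("cpu"), 0
else:
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    n_gpu = torch.cuda.device_count()
```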
- 06 Feb, 2020 1 commit
Peter Izsak authored
- 30 Jan, 2020 1 commit
Hang Le authored
- 28 Jan, 2020 1 commit
Lysandre authored
- 07 Jan, 2020 1 commit
Simone Primarosa authored
* Add support for Albert and XLMRoberta in the GLUE example
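At the time, run_glue.py selected models from an explicit registry rather than Auto classes; a hedged sketch of the two added entries (the registry structure is assumed):

```python
from transformers import (
    AlbertConfig, AlbertForSequenceClassification, AlbertTokenizer,
    XLMRobertaConfig, XLMRobertaForSequenceClassification, XLMRobertaTokenizer,
)

# New (config, model, tokenizer) entries in the example's model registry.
MODEL_CLASSES = {
    "albert": (AlbertConfig, AlbertForSequenceClassification, AlbertTokenizer),
    "xlmroberta": (XLMRobertaConfig, XLMRobertaForSequenceClassification, XLMRobertaTokenizer),
}
```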
- 06 Jan, 2020 2 commits
alberduris authored
alberduris authored
- 22 Dec, 2019 5 commits
Aymeric Augustin authored
Aymeric Augustin authored
Aymeric Augustin authored
Aymeric Augustin authored
Aymeric Augustin authored
This is the result of:

    $ isort --recursive examples templates transformers utils hubconf.py setup.py
- 21 Dec, 2019 1 commit
Aymeric Augustin authored
This is the result of:

    $ black --line-length 119 examples templates transformers utils hubconf.py setup.py

There are a lot of fairly long lines in the project. As a consequence, I'm picking the longest widely accepted line length: 119 characters. This is also Thomas' preference, because it allows for explicit variable names, which make the code easier to understand.
- 19 Dec, 2019 1 commit
Stefan Schweter authored
- 12 Dec, 2019 1 commit
Alan deLevie authored
deay -> decay
- 11 Dec, 2019 1 commit
Bilal Khan authored
- 03 Dec, 2019 1 commit
VictorSanh authored
- 29 Nov, 2019 1 commit
Juha Kiili authored
- 26 Nov, 2019 1 commit
Lysandre authored
- 21 Nov, 2019 1 commit
Juha Kiili authored
- 20 Nov, 2019 1 commit
Jin Young Sohn authored
The TPU runner is currently implemented in https://github.com/pytorch-tpu/transformers/blob/tpu/examples/run_glue_tpu.py. We plan to upstream this directly into the `huggingface/transformers` repository (either the `master` or a `tpu` branch) once it has been more thoroughly tested.
- 14 Nov, 2019 1 commit
Rémi Louf authored
- 12 Nov, 2019 1 commit
ronakice authored