- 01 Jul, 2020 1 commit
Sylvain Gugger authored
* Cleanup and unify Trainer/TFTrainer
* Forgot to adapt TFTrainingArgs
* In TF scripts, n_gpu -> n_replicas (sketched below)
* Update src/transformers/training_args.py
* Address review comments
* Formatting
* Fix typo

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
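A minimal sketch of what the n_gpu -> n_replicas rename points at, assuming stock TensorFlow and nothing from this repo: under tf.distribute, the portable quantity is the replica count reported by the strategy, which covers multi-GPU and TPU alike.

```python
# Illustrative only: the strategy's replica count generalizes a raw GPU count.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # one replica per visible GPU (1 on CPU)

n_replicas = strategy.num_replicas_in_sync  # would also count TPU cores under a TPUStrategy
global_batch_size = 32 * n_replicas         # scale the per-replica batch size
print(f"{n_replicas} replica(s), global batch size {global_batch_size}")
```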
-
- 13 May, 2020 1 commit
Julien Plu authored
* Add QA trainer example for TF
* Make data_dir optional
* Fix parameter logic
* Fix feature conversion
* Update the READMEs to add the question-answering task
* Apply style
* Change 'sequence-classification' to 'text-classification' and prefix all the metric names with 'eval' (sketched below)
* Apply style
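A one-line sketch of the 'eval' prefixing mentioned above; the metric names and values here are made up.

```python
# Hypothetical metrics dict; the point is the "eval_" prefix on every key.
metrics = {"acc": 0.91, "f1": 0.89}
metrics = {f"eval_{k}": v for k, v in metrics.items()}
print(metrics)  # {'eval_acc': 0.91, 'eval_f1': 0.89}
```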
-
- 12 May, 2020 1 commit
Viktor Alm authored
* Catch GPU len 1, set to gpu0
* Add MPC to trainer
* Add MPC for TF
* Fix TF automodel for MPC and add Albert
* Apply style
* Fix import
* Note to self: double check
* Make shape None, None for dataset generator output shapes (sketched below)
* Add from_pt bool, which doesn't seem to work
* Original checkpoint dir
* Fix docstrings for automodel
* Update readme and apply style
* Colabs should probably not be from users
* Add colab
* Update README.md
* Cleanup __init__
* Cleanup flake8 trailing comma
* Update src/transformers/training_args_tf.py
* Update src/transformers/modeling_tf_auto.py

Co-authored-by: Viktor Alm <viktoralm@pop-os.localdomain>
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
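A minimal sketch of the "shape None, None" bullet, with made-up feature names: fully dynamic output_shapes let a tf.data generator dataset emit any number of choices and any sequence length.

```python
import tensorflow as tf

def gen():
    # one multiple-choice example: (num_choices, seq_len) input ids, then a label
    yield {"input_ids": [[101, 2054, 102], [101, 2003, 102]]}, 1

dataset = tf.data.Dataset.from_generator(
    gen,
    output_types=({"input_ids": tf.int32}, tf.int64),
    # None, None = any number of choices, any sequence length
    output_shapes=({"input_ids": tf.TensorShape([None, None])}, tf.TensorShape([])),
)
features, label = next(iter(dataset))
print(features["input_ids"].shape, int(label))  # (2, 3) 1
```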
-
- 08 May, 2020 1 commit
Julien Chaumond authored
* [TPU] Doc, fix xla_spawn.py, only preprocess dataset once
* Update examples/README.md
* [xla_spawn] Add `_mp_fn` to other Trainer scripts (sketched below)
* [TPU] Fix: eval dataloader was None
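A minimal sketch of the `_mp_fn` hook the third bullet adds, assuming the examples/xla_spawn.py convention: torch_xla forks one process per TPU core and invokes `_mp_fn(index)` in each, so the hook just delegates to the script's `main()`.

```python
# Skeleton of an example script; `main` stands in for the real arg parsing
# and training logic.
def main():
    ...  # parse args, build the Trainer, train/evaluate

def _mp_fn(index):
    # Entry point used when launched via xla_spawn.py (torch_xla's xmp.spawn);
    # `index` is this process's ordinal among the TPU cores.
    main()

if __name__ == "__main__":
    main()
```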
-
- 07 May, 2020 1 commit
Julien Chaumond authored
* Created using Colaboratory
* [examples] Reorganize files
* Remove run_tpu_glue.py, superseded by TPU support in Trainer
* Bugfix: int, not tuple
* Move files around
-
- 24 Apr, 2020 1 commit
Julien Chaumond authored
Close #3921
-
- 22 Apr, 2020 1 commit
Julien Chaumond authored
* doc
* [tests] Add sample files for a regression task
* [HUGE] Trainer
* Feedback from @sshleifer
* Feedback from @thomwolf + logging tweak
* [file_utils] When downloading concurrently, get_from_cache will use the cached file for subsequent processes (sketched below)
* [glue] Use default max_seq_length of 128 like before
* [glue] Move DataTrainingArguments around
* [ner] Change interface of InputExample, and align run_{tf,pl}
* Re-align the pl scripts a little bit
* ner
* [ner] Add integration test
* Fix language_modeling with API tweak
* [ci] Tweak loss target
* Don't break console output
* amp.initialize: model must be on the right device before the call
* [multiple-choice] Update for Trainer
* Re-align to 827d6d6e
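A minimal sketch, not the library's actual implementation, of the file_utils behavior described in the sixth bullet: the first process takes a file lock and downloads; concurrent processes block on the lock and then reuse the cached file.

```python
import os
import requests
from filelock import FileLock  # pip install filelock requests

def get_from_cache(url: str, cache_path: str) -> str:
    # Serialize concurrent callers on a lock file next to the cache entry.
    with FileLock(cache_path + ".lock"):
        if not os.path.exists(cache_path):  # later arrivals skip the download
            tmp_path = cache_path + ".incomplete"
            with requests.get(url, stream=True, timeout=60) as response:
                response.raise_for_status()
                with open(tmp_path, "wb") as f:
                    for chunk in response.iter_content(chunk_size=1 << 20):
                        f.write(chunk)
            os.replace(tmp_path, cache_path)  # atomic publish
    return cache_path
```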
-
- 20 Apr, 2020 1 commit
Andrey Kulagin authored
-
- 06 Apr, 2020 1 commit
Ethan Perez authored
* Fix RoBERTa/XLNet pad token in run_multiple_choice.py: `convert_examples_to_features` sets `pad_token=0` by default, which is correct for BERT but incorrect for RoBERTa (`pad_token=1`) and XLNet (`pad_token=5`). I think the other arguments to `convert_examples_to_features` are correct, but it might be helpful if someone more familiar with this part of the codebase checked. (See the sketch below.)
* Simplify the change to match recent commits
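A small sketch of the idea behind the fix: read the pad id from the tokenizer rather than hardcoding 0. The checkpoint names are the standard ones; the printed ids match the values the commit message cites.

```python
from transformers import AutoTokenizer

for name in ("bert-base-uncased", "roberta-base", "xlnet-base-cased"):
    tokenizer = AutoTokenizer.from_pretrained(name)
    # pass this through as pad_token instead of a hardcoded 0
    print(name, tokenizer.pad_token_id)  # 0, 1, and 5 respectively
```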
-
- 01 Apr, 2020 1 commit
Julien Chaumond authored
* Start cleaning examples
* Fixup
-
- 02 Mar, 2020 1 commit
Victor SANH authored
* Fix n_gpu count when the no_cuda flag is activated (sketched below)
* Someone was left behind
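A minimal sketch of the logic being fixed, in plain PyTorch: when --no_cuda is set, the GPU count must be forced to zero rather than reflecting the visible hardware.

```python
import torch

def n_gpu(no_cuda: bool) -> int:
    # --no_cuda wins over whatever CUDA devices happen to be visible
    return 0 if no_cuda else torch.cuda.device_count()

print(n_gpu(no_cuda=True))   # 0, even on a GPU machine
print(n_gpu(no_cuda=False))  # number of visible CUDA devices
```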
-
- 28 Jan, 2020 1 commit
Lysandre authored
-
- 06 Jan, 2020 2 commits
alberduris authored
-
alberduris authored
-
- 23 Dec, 2019 1 commit
Aymeric Augustin authored
-
- 22 Dec, 2019 6 commits
Aymeric Augustin authored
-
Aymeric Augustin authored
-
Aymeric Augustin authored
-
Aymeric Augustin authored
-
Aymeric Augustin authored
-
Aymeric Augustin authored
This is the result of:

    $ isort --recursive examples templates transformers utils hubconf.py setup.py
-
- 21 Dec, 2019 1 commit
Aymeric Augustin authored
This is the result of:

    $ black --line-length 119 examples templates transformers utils hubconf.py setup.py

There are a lot of fairly long lines in the project. As a consequence, I'm picking the longest widely accepted line length, 119 characters. This is also Thomas' preference, because it allows for explicit variable names, which make the code easier to understand.
-
- 03 Dec, 2019 1 commit
VictorSanh authored
-
- 14 Nov, 2019 1 commit
Rémi Louf authored
-
- 12 Nov, 2019 1 commit
ronakice authored
-
- 04 Nov, 2019 1 commit
thomwolf authored
-
- 08 Oct, 2019 1 commit
Bilal Khan authored
-
- 04 Oct, 2019 1 commit
Julien Chaumond authored
-
- 03 Oct, 2019 1 commit
Brian Ma authored
-
- 30 Sep, 2019 1 commit
Julien Chaumond authored
-
- 26 Sep, 2019 1 commit
thomwolf authored
-
- 18 Sep, 2019 2 commits