- 17 Nov, 2020 1 commit
Julien Chaumond authored
* tiny typo
* Tokenizers: ability to load from model subfolder
* use subfolder for local files as well
* Uniformize model shortcut name => model id
* from s3 => from huggingface.co

Co-authored-by: Quentin Lhoest <lhoest.q@gmail.com>
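A minimal sketch of what this change enables, assuming a hypothetical repo `some-org/some-model` that keeps its tokenizer files in a `tokenizer/` subfolder rather than at the repo root:

```python
from transformers import AutoTokenizer

# Load tokenizer files from a subfolder of the model repo instead of its
# root. The model id and subfolder name here are illustrative assumptions.
tokenizer = AutoTokenizer.from_pretrained(
    "some-org/some-model",
    subfolder="tokenizer",
)
```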
-
- 12 Nov, 2020 1 commit
Julien Plu authored
-
- 15 Sep, 2020 1 commit
Stas Bekman authored
-
- 26 Aug, 2020 1 commit
Lysandre authored
-
- 08 Jul, 2020 1 commit
Ji Xin authored
* Add DeeBERT code
* Add README of DeeBERT
* Add test for DeeBERT
* Update test for DeeBERT
* Update DeeBERT (README, class names, function refactoring); remove requirements.txt
* Format update
* Update test
* Update README and model init methods
-
- 28 Jun, 2020 1 commit
Sam Shleifer authored
* all save_pretrained methods mkdir if not os.path.exists
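A minimal sketch of the pattern this commit adds to the save methods (an illustrative helper, not the library's actual code):

```python
import os

def ensure_save_directory(save_directory: str) -> None:
    """Create the output directory if it does not exist yet, so that a
    save_pretrained call never fails on a missing path."""
    if not os.path.exists(save_directory):
        os.makedirs(save_directory)
```

On modern Python, `os.makedirs(save_directory, exist_ok=True)` collapses the existence check and the creation into a single call.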
-
- 24 Jun, 2020 1 commit
Kevin Canwen Xu authored
* Fix PABEE division by zero error
* patience=0 by default
-
- 20 Jun, 2020 1 commit
Kevin Canwen Xu authored
* Add BERT Loses Patience (Patience-based Early Exit)
* update model archive
* update format
* sort import
* flake8
* Add results
* full results
* align the table
* refactor to inherit
* default per gpu eval = 1
* Formatting
* Formatting
* isort
* modify readme
* Add check
* Fix format
* Fix format
* Doc strings
* ALBERT & BERT for sequence classification don't inherit from the original anymore
* Remove incorrect comments
* Remove incorrect comments
* Remove incorrect comments
* Sync up with new code
* Sync up with new code
* Add a test
* Add a test
* Add a test
* Add a test
* Add a test
* Add a test
* Finishing up!
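A hypothetical sketch of the core PABEE idea, assuming `layer_logits` is a non-empty iterable of per-layer classifier outputs of shape [1, num_labels] (not the contributed implementation itself):

```python
import torch

def patience_based_early_exit(layer_logits, patience=3):
    """Stop once the per-layer classifiers have predicted the same label
    for `patience` consecutive layers, instead of always running the
    full network depth."""
    patient_count = 0
    prev_label = None
    depth = 0
    for depth, logits in enumerate(layer_logits, start=1):
        label = torch.argmax(logits, dim=-1).item()
        # Count consecutive layers that agree with the previous prediction.
        patient_count = patient_count + 1 if label == prev_label else 0
        prev_label = label
        if 0 < patience <= patient_count:
            return label, depth  # prediction stable long enough: exit early
    return prev_label, depth  # never stable (or patience <= 0): ran all layers
```

With patience <= 0 the guard never fires and every layer runs, which matches the patience=0 default introduced by the PABEE fix listed above.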
-
- 02 Jun, 2020 1 commit
Julien Chaumond authored
* Kill model archive maps
* Fixup
* Also kill model_archive_map for MaskedBertPreTrainedModel
* Unhook config_archive_map
* Tokenizers: align with model id changes
* make style && make quality
* Fix CI
-
- 07 May, 2020 1 commit
Julien Chaumond authored
* Created using Colaboratory
* [examples] reorganize files
* remove run_tpu_glue.py as superseded by TPU support in Trainer
* Bugfix: int, not tuple
* move files around
-
- 20 Apr, 2020 1 commit
Andrey Kulagin authored
-
- 10 Apr, 2020 1 commit
Julien Chaumond authored
* Big cleanup of `glue_convert_examples_to_features`
* Use batch_encode_plus
* Cleaner wrapping of glue_convert_examples_to_features for TF @lysandrejik
* Cleanup syntax, thanks to @mfuntowicz
* Raise explicit error in case of user error
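A minimal sketch of the batch_encode_plus pattern this cleanup moves to, with hypothetical GLUE-style sentence pairs; the `padding`/`truncation` keywords shown are those of current library versions, where calling the tokenizer directly is the preferred equivalent:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Hypothetical sentence pairs standing in for GLUE examples.
pairs = [
    ("A man is eating.", "Someone is having a meal."),
    ("A dog runs.", "A cat sleeps."),
]

# batch_encode_plus tokenizes the whole batch at once instead of
# looping over examples one by one.
batch = tokenizer.batch_encode_plus(
    pairs,
    max_length=128,
    padding="max_length",
    truncation=True,
)
print(batch["input_ids"][0][:10])
```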
-
- 01 Apr, 2020 1 commit
Julien Chaumond authored
* Start cleaning examples
* Fixup
-
- 02 Mar, 2020 1 commit
Victor SANH authored
* fix n_gpu count when no_cuda flag is activated
* someone was left behind
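A minimal sketch of the device-setup pattern this fix targets in the example scripts, assuming an argparse-style `no_cuda` flag:

```python
import torch

def setup_device(no_cuda: bool):
    """When --no_cuda is set, n_gpu must be forced to 0 even if CUDA
    devices are visible; the bug fixed here was counting GPUs
    regardless of the flag."""
    if no_cuda or not torch.cuda.is_available():
        return torch.device("cpu"), 0
    return torch.device("cuda"), torch.cuda.device_count()

device, n_gpu = setup_device(no_cuda=True)
```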
-
- 14 Feb, 2020 1 commit
Manuel Romero authored
-
- 28 Jan, 2020 1 commit
Lysandre authored
-
- 06 Jan, 2020 2 commits
alberduris authored
-
alberduris authored
-
- 22 Dec, 2019 5 commits
Aymeric Augustin authored
-
Aymeric Augustin authored
-
Aymeric Augustin authored
-
Aymeric Augustin authored
-
Aymeric Augustin authored
This is the result of:

$ isort --recursive examples templates transformers utils hubconf.py setup.py
-
- 21 Dec, 2019 1 commit
Aymeric Augustin authored
This is the result of:

$ black --line-length 119 examples templates transformers utils hubconf.py setup.py

There are a lot of fairly long lines in the project, so I'm picking the longest widely accepted line length: 119 characters. This is also Thomas' preference, because it allows for explicit variable names that make the code easier to understand.
-
- 11 Dec, 2019 1 commit
Bilal Khan authored
-
- 03 Dec, 2019 1 commit
VictorSanh authored
-
- 27 Nov, 2019 7 commits
VictorSanh authored
-
VictorSanh authored
-
VictorSanh authored
-
VictorSanh authored
-
VictorSanh authored
-
VictorSanh authored
-
VictorSanh authored
-
- 26 Nov, 2019 1 commit
Lysandre authored
-
- 21 Nov, 2019 1 commit
Juha Kiili authored
-
- 20 Nov, 2019 1 commit
Jin Young Sohn authored
The TPU runner is currently implemented in https://github.com/pytorch-tpu/transformers/blob/tpu/examples/run_glue_tpu.py. We plan to upstream it directly into `huggingface/transformers` (either the `master` or a `tpu` branch) once it has been more thoroughly tested.
-
- 14 Nov, 2019 1 commit
Rémi Louf authored
-
- 12 Nov, 2019 1 commit
ronakice authored
-
- 04 Nov, 2019 1 commit
thomwolf authored
-
- 21 Oct, 2019 1 commit
Pasquale Minervini authored
Gradient norm clipping should be done right before calling the optimiser; this fixes run_glue and run_ner as well.
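A minimal sketch of the corrected ordering in a standard PyTorch loop, with an illustrative stand-in model and optimiser; the point is that `clip_grad_norm_` runs after `backward()` and immediately before `step()`:

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in model for illustration
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

inputs = torch.randn(4, 10)
labels = torch.randint(0, 2, (4,))

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()  # gradients exist from here on
# Clip right before the optimiser step; clipping before backward() would
# act on stale gradients and have no effect on the update.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```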
-