- 27 May, 2020 1 commit
-
-
Lysandre Debut authored
* per_device instead of per_gpu; an error is now thrown when an argument is unknown
* [docs] Restore examples.md symlink
* Correct absolute links so that the symlink to the doc works correctly
* Update src/transformers/hf_argparser.py
* Warning + reorder
* Docs
* Style
* not for squad
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
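A minimal sketch of the rename, assuming this release's `TrainingArguments`: the `per_device_*` names cover GPUs and TPU cores alike, where `per_gpu_*` only made sense for CUDA devices.

```python
# Sketch, assuming transformers' TrainingArguments from this release.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./out",
    per_device_train_batch_size=8,   # replaces per_gpu_train_batch_size
    per_device_eval_batch_size=16,   # replaces per_gpu_eval_batch_size
)
# The effective batch size scales with the number of devices in use.
print(args.train_batch_size)
```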
-
- 22 May, 2020 3 commits
-
-
Julien Chaumond authored
As discussed w/ @lysandrejik, `packaging` is maintained by PyPA (the Python Packaging Authority) and should therefore be lightweight and stable
-
Lysandre authored
-
Lysandre authored
-
- 21 May, 2020 2 commits
-
-
Lysandre authored
-
Lysandre Debut authored
* TPU hangs when saving optimizer/scheduler
* Style
* ParallelLoader is not a DataLoader
* Style
* Addressing @julien-c's comments
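A hedged sketch of the save pattern behind the hang fix, assuming `torch_xla` is available (the helper name is illustrative): a bare `torch.save` issued from every TPU process can deadlock, while `xm.save` moves tensors to CPU and performs the cross-process rendezvous itself.

```python
import torch_xla.core.xla_model as xm

def save_checkpoint(optimizer, scheduler, prefix):
    # Meet at a barrier first, then let xm.save gather tensors to CPU
    # and write once, instead of every process racing a torch.save.
    xm.rendezvous("saving_checkpoint")
    xm.save(optimizer.state_dict(), f"{prefix}/optimizer.pt")
    xm.save(scheduler.state_dict(), f"{prefix}/scheduler.pt")
```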
-
- 19 May, 2020 4 commits
-
-
Shaoyen authored
* Map optimizer to correct device after loading from checkpoint
* Make style test pass
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
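A sketch of the idea, assuming a plain PyTorch optimizer checkpoint (the helper is illustrative): without `map_location`, the reloaded state tensors stay on whatever device they were saved from, which may not match where the model now lives.

```python
import torch

def load_optimizer_state(optimizer, path, device):
    # map_location moves Adam's exp_avg/exp_avg_sq buffers (and any other
    # per-parameter state) onto `device` to match the model's parameters.
    state_dict = torch.load(path, map_location=device)
    optimizer.load_state_dict(state_dict)
```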
-
Julien Chaumond authored
-
Julien Chaumond authored
* Distributed eval: SequentialDistributedSampler + gather all results
* For consistency, only write to disk from world_master. Closes https://github.com/huggingface/transformers/issues/4272
* Working distributed eval
* Hook into scripts
* Fix #3721 again
* TPU.mesh_reduce: stay in tensor space. Thanks @jysohn23
* Just a small comment
* whitespace
* torch.hub: pip install packaging
* Add test scenarios
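A rough sketch of the gather step (simplified from the idea, not the exact Trainer code): every rank evaluates a contiguous shard, `all_gather` collects one tensor per rank, and the padding the sequential sampler added to equalize shard sizes is trimmed at the end.

```python
import torch
import torch.distributed as dist

def distributed_concat(tensor: torch.Tensor, num_total_examples: int) -> torch.Tensor:
    # One slot per rank, filled by all_gather, then concatenated in rank
    # order; the trailing duplicates used as padding are sliced away.
    output = [tensor.clone() for _ in range(dist.get_world_size())]
    dist.all_gather(output, tensor)
    return torch.cat(output, dim=0)[:num_total_examples]
```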
-
Rakesh Chada authored
* Makes fetching the last learning rate in Trainer backward compatible
* Split comment across multiple lines
* Fixes black styling issue
* Uses the version to create more explicit logic
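A sketch of the version-gated logic (helper name illustrative): `get_last_lr()` only exists from PyTorch 1.4 onwards, so older installs fall back to `get_lr()`. This is also where the `packaging` dependency mentioned above comes in handy.

```python
import torch
from packaging import version

def current_learning_rate(scheduler):
    # Schedulers grew get_last_lr() in PyTorch 1.4; earlier releases
    # expose the same value through get_lr().
    if version.parse(torch.__version__) >= version.parse("1.4"):
        return scheduler.get_last_lr()[0]
    return scheduler.get_lr()[0]
```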
-
- 14 May, 2020 2 commits
-
-
Suraj Patil authored
* Fix loss calculation in evaluation
* Fix evaluation on TPU when prediction_loss_only is True
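A simplified sketch of the kind of loss bookkeeping involved (the function is illustrative, not the Trainer's exact code): per-batch losses are accumulated and averaged, which also works when `prediction_loss_only` is True and no logits are kept.

```python
import torch

def evaluation_loss(model, dataloader):
    # Average the per-batch loss instead of deriving it from logits,
    # which may never be materialized when prediction_loss_only is True.
    model.eval()
    losses = []
    with torch.no_grad():
        for batch in dataloader:
            outputs = model(**batch)
            losses.append(outputs[0].item())  # loss is the first output
    return sum(losses) / len(losses)
```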
-
Lysandre Debut authored
-
- 13 May, 2020 1 commit
-
-
Julien Chaumond authored
* Improvements to the wandb integration
* Small reorg + no global necessary
* feat(trainer): log epoch and final metrics
* Simplify logging a bit
* Fixup
* Fix crash when just running eval
Co-authored-by: Chris Van Pelt <vanpelt@gmail.com>
Co-authored-by: Boris Dayma <boris.dayma@gmail.com>
-
- 08 May, 2020 1 commit
-
-
Julien Chaumond authored
* [TPU] Doc, fix xla_spawn.py, only preprocess dataset once
* Update examples/README.md
* [xla_spawn] Add `_mp_fn` to other Trainer scripts
* [TPU] Fix: eval dataloader was None
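A sketch of the `_mp_fn` hook the launcher looks for, assuming the `examples/xla_spawn.py` layout: the launcher imports the training script and spawns `_mp_fn` once per TPU core via `torch_xla`.

```python
def main():
    # The script's normal single-process entry point.
    print("training...")

def _mp_fn(index):
    # Called by xla_spawn.py through torch_xla's xmp.spawn();
    # `index` is the ordinal of the TPU core running this process.
    main()

if __name__ == "__main__":
    main()
```

Invoked roughly as `python examples/xla_spawn.py --num_cores 8 run_glue.py <script args>`.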
-
- 07 May, 2020 3 commits
-
-
Julien Chaumond authored
* Created using Colaboratory
* [examples] Reorganize files
* Remove run_tpu_glue.py as superseded by TPU support in Trainer
* Bugfix: int, not tuple
* Move files around
-
Julien Chaumond authored
cc @patrickvonplaten @thomwolf
-
Lysandre Debut authored
* wip
* wip
* a last wip
* Better logging when using TPUs
* Correct argument name
* Tests
* fix
* Metrics in evaluation
* Update src/transformers/training_args.py
* [tpu] Use launcher script instead
* [tpu] Lots of tweaks
* Fix formatting
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
- 06 May, 2020 1 commit
-
-
Julien Plu authored
* First commit to add a TF version of the trainer
* Make the TF trainer closer to what the PT trainer looks like
* Refactor common code between the PT and TF trainers into a utils file
* Some bugfixes + better similarity with the PT trainer
* Add missing class in transformers init
* Bugfix over prediction + use a classification report instead of simple metrics
* Fix name error
* Fix optimization tests + style
* Apply style
* Several bugfixes for multi-GPU training
* Apply style
* Apply style
* Add GLUE example for the TF trainer
* Several bugfixes + address the reviews
* Fix on the TF training args file
* Add a debug mode
* Bugfix in utils_ner.py when segment_ids is None
* Apply style
* Apply style
* Add TPU strategy
* Fix selection strategy
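A hedged sketch of the parallel TF API this introduces, wrapped in a function so the model and dataset stay caller-supplied: `TFTrainer` mirrors the PT `Trainer`, with `TFTrainingArguments` selecting the `tf.distribute` strategy (multi-GPU or the TPU strategy added here) under the hood.

```python
from transformers import TFTrainer, TFTrainingArguments

def train_tf(model, train_dataset):
    # model: any TF* transformers model; train_dataset: a tf.data.Dataset
    # of pre-tokenized features, as in the GLUE example mentioned above.
    args = TFTrainingArguments(output_dir="./tf_out", num_train_epochs=1)
    trainer = TFTrainer(model=model, args=args, train_dataset=train_dataset)
    trainer.train()
    return trainer
```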
-
- 05 May, 2020 2 commits
-
-
Boris Dayma authored
* feat: add logging through Weights & Biases
* feat(wandb): make logging compatible with all scripts
* style(trainer.py): fix formatting
* [Trainer] Tweak wandb integration
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
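A minimal sketch of what the integration boils down to (project name is an example): a single `wandb.init()` on the world master, then `wandb.log()` calls carrying the training and final eval metrics.

```python
import wandb

wandb.init(project="transformers-demo")  # example project name
for step in range(3):
    loss = 1.0 / (step + 1)
    wandb.log({"loss": loss, "learning_rate": 5e-5}, step=step)
wandb.log({"eval_loss": 0.4})  # final metrics, logged once after training
```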
-
- 04 May, 2020 1 commit
-
-
jaymody authored
-
- 01 May, 2020 1 commit
-
-
Suraj Parmar authored
* Continue training args
* Added explanation
* Fixed tqdm auto
* Update src/transformers/training_args.py
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
- 23 Apr, 2020 2 commits
-
-
Julien Chaumond authored
Close #3920
-
Julien Chaumond authored
-
- 22 Apr, 2020 1 commit
-
-
Julien Chaumond authored
* doc
* [tests] Add sample files for a regression task
* [HUGE] Trainer
* Feedback from @sshleifer
* Feedback from @thomwolf + logging tweak
* [file_utils] When downloading concurrently, get_from_cache will use the cached file for subsequent processes
* [glue] Use default max_seq_length of 128 as before
* [glue] Move DataTrainingArguments around
* [ner] Change interface of InputExample, and align run_{tf,pl}
* Re-align the PL scripts a little bit
* ner
* [ner] Add integration test
* Fix language_modeling with API tweak
* [ci] Tweak loss target
* Don't break console output
* amp.initialize: model must be on the right device before
* [multiple-choice] Update for Trainer
* Re-align to 827d6d6e
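A hedged sketch of the new Trainer's overall shape, wrapped in a function so the model and datasets remain caller-supplied: model plus `TrainingArguments` plus datasets in, `train()` and `evaluate()` out.

```python
from transformers import Trainer, TrainingArguments

def run(model, train_dataset, eval_dataset):
    # model: any pretrained transformers model; the datasets are torch
    # Datasets of pre-tokenized features, as in the updated examples.
    args = TrainingArguments(output_dir="./results", num_train_epochs=1)
    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
    )
    trainer.train()
    return trainer.evaluate()
```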
-