- 13 Apr, 2021 1 commit
  Philipp Schmid authored
- 12 Apr, 2021 1 commit
  Masatoshi TSUCHIYA authored
  * model_path is referred to as the path of the trainer and should be ignored as the checkpoint path.
  * Improved according to Sgugger's comment.
- 07 Apr, 2021 1 commit
  Stas Bekman authored
  * The 'warn' method is deprecated
  * fix test
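For context, Python's standard `logging` module deprecated `Logger.warn` in favour of `Logger.warning`; a minimal sketch of the replacement this commit applies across the example scripts (the message text is illustrative):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Deprecated spelling, kept only as an alias and flagged with a DeprecationWarning:
# logger.warn("Process rank: %s", 0)

# Supported spelling:
logger.warning("Process rank: %s", 0)
```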
- 06 Apr, 2021 2 commits
- 31 Mar, 2021 1 commit
  Sylvain Gugger authored
  * First third
  * Styling and fix mistake
  * Quality
  * All the rest
  * Treat %s and %d
  * typo
  * Missing )
  * Apply suggestions from code review
  Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
- 25 Mar, 2021 1 commit
  Jethro Kuan authored
  Use the correct variable (raw_datasets) instead of the module (datasets) where appropriate.
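A minimal sketch of the mix-up this commit fixes, assuming an example script that both imports the `datasets` library and keeps the loaded data in a local variable (the GLUE/MRPC task is only illustrative):

```python
import datasets  # the Datasets module itself
from datasets import load_dataset

# Keep the loaded DatasetDict in its own variable ...
raw_datasets = load_dataset("glue", "mrpc")

# ... and operate on that variable, not on the module:
print(raw_datasets["train"].column_names)
# datasets["train"] would fail: 'module' object is not subscriptable
```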
- 23 Mar, 2021 1 commit
  Bhadresh Savani authored
  * added predict stage
  * added test keyword in exception message
  * removed example specific saving predictions
  * fixed f-string error
  * removed extra line
  Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
- 16 Mar, 2021 2 commits
- 15 Mar, 2021 1 commit
  Sylvain Gugger authored
  * Add minimum version check in examples
  * Style
  * No need for new line maybe?
  * Add helpful comment
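The check mentioned here guards each example against an out-of-date install; a sketch of the pattern, assuming `check_min_version` is importable from `transformers.utils` as in current releases (the version string is illustrative):

```python
from transformers.utils import check_min_version

# Raises an informative error if the installed transformers is older than
# what the example scripts in the source tree expect.
check_min_version("4.5.0")
```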
- 10 Mar, 2021 2 commits
  Sylvain Gugger authored
  * Add new GLUE example with no Trainer.
  * Style
  * Address review comments
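The no-Trainer variant of the GLUE example drives training with Accelerate instead of `Trainer`; a toy, self-contained sketch of that loop, with random tensors and a linear layer standing in for the tokenized GLUE data and the transformer model:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Dummy stand-ins for a tokenized dataset and a sequence classification model.
dataset = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
model = torch.nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
dataloader = DataLoader(dataset, batch_size=8)

accelerator = Accelerator()  # handles device placement and distributed setup
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for features, labels in dataloader:
    loss = torch.nn.functional.cross_entropy(model(features), labels)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```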
  Allen Wang authored
  Fixes an issue in `text-classification` where MNLI eval/test datasets are not being preprocessed. (#10621)
  * Fix MNLI tests
  * Linter fix
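A generic sketch of the pattern behind this fix (not the exact diff): mapping the preprocessing function over the whole `DatasetDict` so MNLI's matched and mismatched evaluation splits are tokenized along with the training split:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

raw_datasets = load_dataset("glue", "mnli")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def preprocess(examples):
    return tokenizer(examples["premise"], examples["hypothesis"], truncation=True)

# Mapping over the DatasetDict tokenizes every split, including
# validation_matched, validation_mismatched and the test splits.
processed = raw_datasets.map(preprocess, batched=True)
print(processed["validation_mismatched"].column_names)
```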
- 08 Mar, 2021 1 commit
  Bhadresh Savani authored
  * reverted changes of logging and saving metrics
  * added max_sample arguments
  * fixed code
  * white space diff
  * reformatting code
  * reformatted code
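The `max_*_samples` arguments cap how much data an example script consumes, which is handy for smoke tests; a sketch of the idiom, assuming the flag names used in the example scripts (such as `--max_train_samples`) and an SST-2 dataset purely for illustration:

```python
from datasets import load_dataset

raw_datasets = load_dataset("glue", "sst2")
max_train_samples = 128  # would normally come from --max_train_samples

train_dataset = raw_datasets["train"]
if max_train_samples is not None:
    # Keep only the first N examples for a quick debugging run.
    train_dataset = train_dataset.select(range(min(max_train_samples, len(train_dataset))))
print(len(train_dataset))
```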
- 04 Mar, 2021 3 commits
  Sylvain Gugger authored
  Sylvain Gugger authored
  This reverts commit f3660613.
  Sylvain Gugger authored
- 27 Feb, 2021 1 commit
  Bhadresh Savani authored
  * updated logging and saving metrics
  * space removal
- 25 Feb, 2021 1 commit
  Sylvain Gugger authored
- 11 Feb, 2021 1 commit
  Qbiwan authored
  * remove xnli_compute_metrics, add load_dataset, load_metric, set_seed, metric.compute, load_metric
  * fix
  * fix
  * fix
  * push
  * fix
  * everything works
  * fix init
  * fix
  * special treatment for sepconv1d
  * style
  * 🙏🏽
  * add doc and cleanup
  * fix doc
  * fix doc again
  * fix doc again
  * Apply suggestions from code review
  * make style
  * Proposal that should work
  * Remove needless code
  * Fix test
  * Apply suggestions from code review
  * remove xnli_compute_metrics, add load_dataset, load_metric, set_seed, metric.compute, load_metric
  * amend README
  * removed data_args.task_name and replaced with task_name = "xnli"; use split function to load train and validation dataset separately; remove __post_init__; remove flag --task_name from README.
  * removed dict task_to_keys, use str "xnli" instead of variable task_name, change preprocess_function to use examples["premise"], examples["hypothesis"] directly, remove sentence1_key and sentence2_key, change compute_metrics function to cater only to accuracy metric, add condition for train_language is None when using dataset.load_dataset()
  * removed `torch.distributed.barrier()` and `import torch` as `from_pretrained` is able to do the work; amend README
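A minimal sketch of the loading pattern the rewritten XNLI example message describes, where the XNLI config is selected by language code and accuracy is the only metric ("de" and the split names are illustrative):

```python
from datasets import load_dataset, load_metric

# XNLI configs are named after the language; train and validation can be
# loaded separately, for instance when --train_language differs from --language.
train_dataset = load_dataset("xnli", "de", split="train")
eval_dataset = load_dataset("xnli", "de", split="validation")
metric = load_metric("xnli")  # reports accuracy

example = train_dataset[0]
print(example["premise"], "|", example["hypothesis"], "|", example["label"])
```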
- 08 Feb, 2021 1 commit
  Sylvain Gugger authored
- 05 Feb, 2021 1 commit
  Stas Bekman authored
  * make executable
  * make executable
  * same for the template
  * cleanup
- 02 Feb, 2021 1 commit
  Patrick von Platen authored
  * change tokenizer requirement
  * split line
  * Correct typo from list to str
  * improve style
  * make other function pretty as well
  * add comment
  * correct typo
  * add new test
  * pass tests for tok without padding token
  * Apply suggestions from code review
- 28 Jan, 2021 1 commit
  Sylvain Gugger authored
- 27 Jan, 2021 1 commit
  Sylvain Gugger authored
- 26 Jan, 2021 2 commits
  Yusuke Mori authored
  Andrea Cappelli authored
  * Pad to 8x for fp16 multiple choice example (#9752)
  * Pad to 8x for fp16 squad trainer example (#9752)
  * Pad to 8x for fp16 ner example (#9752)
  * Pad to 8x for fp16 swag example (#9752)
  * Pad to 8x for fp16 qa beam search example (#9752)
  * Pad to 8x for fp16 qa example (#9752)
  * Pad to 8x for fp16 seq2seq example (#9752)
  * Pad to 8x for fp16 glue example (#9752)
  * Pad to 8x for fp16 new ner example (#9752)
  * update script template #9752
  * Update examples/multiple-choice/run_swag.py
  * Update examples/question-answering/run_qa.py
  * Update examples/question-answering/run_qa_beam_search.py
  * improve code quality #9752
  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
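The change these commits roll out is to pad batches up to a multiple of 8 when training in fp16 so tensor cores can be used; a sketch of the pattern with `DataCollatorWithPadding` (the checkpoint and the `fp16` flag are illustrative stand-ins):

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
fp16 = True  # stand-in for training_args.fp16

# Dynamic padding to the longest example in the batch, rounded up to a
# multiple of 8 only when fp16 training is enabled.
data_collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8 if fp16 else None)

batch = data_collator([tokenizer("a short sentence"), tokenizer("a somewhat longer example sentence")])
print(batch["input_ids"].shape)  # the sequence dimension is a multiple of 8
```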
- 25 Jan, 2021 1 commit
  Sylvain Gugger authored
  * Auto-resume training from checkpoint
  * Update examples/text-classification/run_glue.py
  * Roll out to other examples
  Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
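The resume logic keys off the most recent `checkpoint-<step>` folder found in the output directory; a sketch of the helper involved, assuming `get_last_checkpoint` from `transformers.trainer_utils` (the path is illustrative, and the actual `Trainer` call is left commented out):

```python
import os
from transformers.trainer_utils import get_last_checkpoint

output_dir = "./output"  # illustrative
last_checkpoint = None
if os.path.isdir(output_dir):
    # Returns the path of the newest checkpoint-<step> subfolder, or None.
    last_checkpoint = get_last_checkpoint(output_dir)

print(last_checkpoint)
# trainer.train(resume_from_checkpoint=last_checkpoint)  # with a configured Trainer
```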
- 22 Jan, 2021 1 commit
  Stefan Schweter authored
- 13 Jan, 2021 2 commits
  Yusuke Mori authored
  * Update run_glue for do_predict with local test data (#9442)
  * Update run_glue (#9442): fix comments ('files' to 'a file')
  * Update run_glue (#9442): reflect the code review
  * Update run_glue (#9442): auto format
  * Update run_glue (#9442): reflect the code review
  Pavel Tarashkevich authored
  Co-authored-by: Pavel Tarashkevich <Pavel.Tarashkievich@orange.com>
- 06 Jan, 2021 1 commit
  Sylvain Gugger authored
  * Allow example to use a revision and work with private models
  * Copy to other examples and template
  * Styling
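A sketch of what the new arguments enable: pinning a model revision and authenticating for private repositories. "main" is just the default branch, the repository id in the comment is hypothetical, and the authenticated call is commented out so the snippet runs without credentials:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# revision accepts a branch name, tag, or commit hash.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", revision="main")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", revision="main")

# For a private model, after `huggingface-cli login`, one would additionally pass:
# model = AutoModelForSequenceClassification.from_pretrained("my-org/private-model", use_auth_token=True)
```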
- 05 Jan, 2021 1 commit
  Yusuke Mori authored
- 21 Dec, 2020 1 commit
  Sylvain Gugger authored
  * Update the README of the text classification example
  * Update examples/README.md
  * Adapt comment from review
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 19 Dec, 2020 1 commit
  Stas Bekman authored
  * add speed metrics
  * suggestions
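A sketch of the kind of throughput numbers this adds, assuming the `speed_metrics` helper that lives in `transformers.trainer_utils` in later releases (the sleep stands in for real training or evaluation work):

```python
import time
from transformers.trainer_utils import speed_metrics

start = time.time()
time.sleep(0.5)  # stand-in for a training or evaluation run

# Produces e.g. {"train_runtime": 0.5, "train_samples_per_second": 200.0}
print(speed_metrics("train", start, num_samples=100))
```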
- 11 Dec, 2020 1 commit
  Sylvain Gugger authored
  * Reorganize example folder
  * Continue reorganization
  * Change requirements for tests
  * Final cleanup
  * Finish regroup with tests all passing
  * Copyright
  * Requirements and readme
  * Make a full link for the documentation
  * Address review comments
  * Apply suggestions from code review
  * Add symlink
  * Reorg again
  * Apply suggestions from code review
  * Adapt title
  * Update to new structure
  * Remove test
  * Update READMEs
  Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
  Co-authored-by: Thomas Wolf <thomwolf@users.noreply.github.com>
- 05 Dec, 2020 1 commit
  Ethan Perez authored
  Without this fix, training a `BartForSequenceClassification` model with `run_pl_glue.py` gives `TypeError: forward() got an unexpected keyword argument 'token_type_ids'`, because BART does not have token_type_ids. I've solved this issue in the same way as it's solved for the "distilbert" model, and I can train BART models on SNLI without errors now.
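A runnable sketch of the failure mode and the guard described above (the checkpoint is illustrative; in the actual script the guard sits where the batch inputs are assembled):

```python
import torch
from transformers import BartForSequenceClassification, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForSequenceClassification.from_pretrained("facebook/bart-base", num_labels=3)

inputs = tokenizer("A premise.", "A hypothesis.", return_tensors="pt")
# BART's forward() has no token_type_ids parameter, so drop the key if a
# tokenizer produced it (the same treatment the script gives DistilBERT):
inputs.pop("token_type_ids", None)

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 3])
```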
- 17 Nov, 2020 1 commit
  Julien Chaumond authored
  * tiny typo
  * Tokenizers: ability to load from model subfolder
  * use subfolder for local files as well
  * Uniformize model shortcut name => model id
  * from s3 => from huggingface.co
  Co-authored-by: Quentin Lhoest <lhoest.q@gmail.com>
- 12 Nov, 2020 1 commit
  Julien Plu authored
- 30 Oct, 2020 1 commit
  Sylvain Gugger authored
  * Finish the cleanup of the language-modeling examples
  * Update main README
  * Apply suggestions from code review
  * Apply suggestions from code review
  * Propagate changes
  Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
  Co-authored-by: Thomas Wolf <thomwolf@users.noreply.github.com>