- 05 Jun, 2020 1 commit
Sylvain Gugger authored
- 04 Jun, 2020 1 commit
Julien Plu authored
* Better None gradients handling
* Apply Style
* Apply Style
* Create a loss class per task to compute its respective loss
* Add loss classes to the ALBERT TF models
* Add loss classes to the BERT TF models
* Add question answering and multiple choice to TF Camembert
* Remove prints
* Add multiple choice model to TF DistilBERT + loss computation
* Add question answering model to TF Electra + loss computation
* Add token classification, question answering and multiple choice models to TF Flaubert
* Add multiple choice model to TF Roberta + loss computation
* Add multiple choice model to TF XLM + loss computation
* Add multiple choice and question answering models to TF XLM-Roberta
* Add multiple choice model to TF XLNet + loss computation
* Remove unused parameters
* Add task loss classes
* Reorder TF imports + add new model classes
* Add new model classes
* Bugfix in TF T5 model
* Bugfix for TF T5 tests
* Bugfix in TF T5 model
* Fix TF T5 model tests
* Fix T5 tests + some renaming
* Fix inheritance issue in the AutoX tests
* Add tests for TF Flaubert and TF XLM Roberta
* Add tests for TF Flaubert and TF XLM Roberta
* Remove unused piece of code in the TF trainer
* Bugfix and remove unused code
* Bugfix for TF 2.2
* Apply Style
* Divide TFSequenceClassificationAndMultipleChoiceLoss into its two respective names
* Apply style
* Mirror the PT Trainer in the TF one: fp16, optimizers and tb_writer as class parameters and better dataset handling
* Fix TF optimization tests and apply style
* Remove useless parameter
* Bugfix and apply style
* Fix TF Trainer prediction
* Now the TF models return the loss like their PyTorch counterparts
* Apply Style
* Ignore some test outputs
* Take into account the SQuAD cls_index, p_mask and is_impossible parameters for the QuestionAnswering task models
* Fix names for SQuAD data
* Apply Style
* Fix conflicts with the 2.11 release
* Fix conflicts with 2.11
* Fix wrong name
* Add better documentation for the new create_optimizer function
* Fix isort
* logging_dir: use the same default as PyTorch

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
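The "loss class per task" refactor described above can be sketched in plain Python. All class and method names here are hypothetical illustrations of the pattern, not the actual transformers API, and the losses are computed without any framework for clarity:

```python
# Hypothetical sketch: one loss class per task head, each exposing compute_loss.
# Class names are illustrative only, not the real transformers classes.
import math

class TokenClassificationLoss:
    """Mean cross-entropy over per-token label distributions."""
    def compute_loss(self, labels, probs):
        # labels: list[int]; probs: list[list[float]] of per-token softmax outputs
        return -sum(math.log(p[y]) for y, p in zip(labels, probs)) / len(labels)

class MultipleChoiceLoss:
    """Cross-entropy over a single distribution across the answer choices."""
    def compute_loss(self, label, choice_probs):
        return -math.log(choice_probs[label])

# Each task model would mix in the loss class matching its head:
tok_loss = TokenClassificationLoss().compute_loss([0, 1], [[0.9, 0.1], [0.2, 0.8]])
mc_loss = MultipleChoiceLoss().compute_loss(2, [0.1, 0.2, 0.7])
```

The point of the design is that a model class only needs to know which loss class matches its head, rather than reimplementing the loss in every `call` method.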
- 07 May, 2020 1 commit
Julien Chaumond authored
* Created using Colaboratory
* [examples] reorganize files
* Remove run_tpu_glue.py as superseded by TPU support in Trainer
* Bugfix: int, not tuple
* Move files around
- 05 May, 2020 1 commit
Boris Dayma authored
* feat: add logging through Weights & Biases
* feat(wandb): make logging compatible with all scripts
* style(trainer.py): fix formatting
* [Trainer] Tweak wandb integration

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
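The "compatible with all scripts" idea above can be sketched as a logger that always records metrics locally and only forwards them to an optional wandb-like backend when one is provided. This is an illustrative stand-in, not the actual trainer code:

```python
# Illustrative sketch (not the actual transformers trainer code): metrics are
# always recorded locally and only forwarded to an optional backend with a
# wandb-style log(metrics, step=...) interface when one is supplied.
class InMemoryBackend:
    """Stand-in for a wandb-style backend, used here instead of the real module."""
    def __init__(self):
        self.logged = []

    def log(self, metrics, step=None):
        self.logged.append((step, dict(metrics)))

class TrainerLogger:
    def __init__(self, backend=None):
        self.backend = backend   # e.g. the wandb module, if installed
        self.history = []

    def log(self, metrics, step):
        self.history.append((step, dict(metrics)))
        if self.backend is not None:
            self.backend.log(metrics, step=step)

backend = InMemoryBackend()
logger = TrainerLogger(backend)
logger.log({"loss": 0.42}, step=1)
```

Scripts that run without the backend installed simply construct `TrainerLogger()` with no argument and keep working unchanged.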
- 22 Apr, 2020 1 commit
Julien Chaumond authored
* doc
* [tests] Add sample files for a regression task
* [HUGE] Trainer
* Feedback from @sshleifer
* Feedback from @thomwolf + logging tweak
* [file_utils] when downloading concurrently, get_from_cache will use the cached file for subsequent processes
* [glue] Use default max_seq_length of 128 like before
* [glue] move DataTrainingArguments around
* [ner] Change interface of InputExample, and align run_{tf,pl}
* Re-align the pl scripts a little bit
* ner
* [ner] Add integration test
* Fix language_modeling with API tweak
* [ci] Tweak loss target
* Don't break console output
* amp.initialize: model must be on the right device before
* [multiple-choice] update for Trainer
* Re-align to 827d6d6e
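The Trainer abstraction introduced above bundles a model, training arguments, and a dataset behind a single `train()` call. A toy from-scratch sketch of that shape, fitting a 1-D linear model by gradient descent (all names are hypothetical simplifications, not the transformers API):

```python
# Toy sketch of the Trainer shape: model + args + dataset in, train() runs the
# loop. Hypothetical simplification, not the real transformers Trainer.
class TrainingArguments:
    def __init__(self, num_epochs=100, learning_rate=0.05):
        self.num_epochs = num_epochs
        self.learning_rate = learning_rate

class LinearModel:
    def __init__(self):
        self.w = 0.0

    def forward(self, x):
        return self.w * x

class Trainer:
    def __init__(self, model, args, train_dataset):
        self.model, self.args, self.train_dataset = model, args, train_dataset

    def train(self):
        for _ in range(self.args.num_epochs):
            for x, y in self.train_dataset:
                # squared-error loss: d/dw (w*x - y)^2 = 2 * (w*x - y) * x
                grad = 2 * (self.model.forward(x) - y) * x
                self.model.w -= self.args.learning_rate * grad

dataset = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # y = 2x
trainer = Trainer(LinearModel(), TrainingArguments(), dataset)
trainer.train()  # trainer.model.w converges to 2.0
```

The value of the pattern is that example scripts shrink to building the three inputs; the loop, logging, and device handling live in one place.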
- 23 Feb, 2020 1 commit
Patrick von Platen authored
- 17 Feb, 2020 1 commit
Patrick von Platen authored
- 06 Jan, 2020 2 commits
alberduris authored
alberduris authored
- 12 Nov, 2019 1 commit
Julien Chaumond authored
- 09 Oct, 2019 1 commit
LysandreJik authored
- 04 Oct, 2019 1 commit
keskarnitish authored
* adding conversion script
* adding first draft of modeling & tokenization
* adding placeholder for test files
* bunch of changes
* registering the tokenizer/model/etc
* tests
* change link; something is very VERY wrong here
* weird end-of-word thingy going on
* i think the tokenization works now; wrote the unit tests
* overall structure works; load w next
* the monster is alive! works after some cleanup as well
* adding emacs autosave to gitignore
* currently only supporting the 48 layer one; seems to infer fine on my macbook
* cleanup
* fixing some documentation
* fixing some documentation
* tests passing? now works on CUDA also
* adding greedy?
* adding greedy sampling works well
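The greedy sampling mentioned in the last messages picks the highest-probability next token at every step. A minimal sketch of that decoding loop, using a made-up toy bigram table rather than the actual model from this commit:

```python
# Minimal greedy-decoding sketch: argmax over next-token probabilities at each
# step. The bigram table is an invented toy, not the model from this commit.
BIGRAM_PROBS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.7, "dog": 0.3},
    "cat": {"sat": 0.8, "ran": 0.2},
    "sat": {"</s>": 1.0},
}

def greedy_decode(start="<s>", max_len=10):
    tokens = [start]
    while len(tokens) < max_len and tokens[-1] in BIGRAM_PROBS:
        next_probs = BIGRAM_PROBS[tokens[-1]]
        tokens.append(max(next_probs, key=next_probs.get))  # argmax, no sampling
        if tokens[-1] == "</s>":
            break
    return tokens

print(greedy_decode())  # ['<s>', 'the', 'cat', 'sat', '</s>']
```

Greedy decoding is deterministic: the same context always yields the same continuation, which is why sampling strategies usually come next.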
- 24 Sep, 2019 1 commit
thomwolf authored
- 05 Sep, 2019 1 commit
VictorSanh authored
- 20 Aug, 2019 2 commits
- 09 Jul, 2019 1 commit
thomwolf authored
- 24 Jun, 2019 1 commit
thomwolf authored
- 20 Jun, 2019 1 commit
thomwolf authored
- 05 Feb, 2019 2 commits
- 15 Jan, 2019 1 commit
thomwolf authored
- 05 Nov, 2018 1 commit
thomwolf authored
- 31 Oct, 2018 1 commit
thomwolf authored
- 30 Oct, 2018 1 commit
thomwolf authored
- 29 Oct, 2018 1 commit
Thomas Wolf authored