- 31 Jan, 2021 1 commit
-
-
lewtun authored
* Clarify definition of seed argument in Trainer
* Update src/transformers/training_args.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/training_args_tf.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Fix style
* Update src/transformers/training_args.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
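A minimal sketch of how the clarified `seed` argument is typically used; the exact wording added by this commit lives in the docstrings of `training_args.py` and `training_args_tf.py`, and the `output_dir` below is just a placeholder.

```python
from transformers import TrainingArguments, set_seed

# The seed controls weight initialization and data-order shuffling for a run.
args = TrainingArguments(output_dir="out", seed=42)
set_seed(args.seed)  # explicitly seeds Python, NumPy and torch
```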
-
- 30 Jan, 2021 1 commit
-
-
Stas Bekman authored
Apparently nested markup in RST is invalid: https://docutils.sourceforge.io/FAQ.html#is-nested-inline-markup-possible So currently this line doesn't get rendered properly, leaving the inner markup unrendered and resulting in: ``` https://docutils.sourceforge.io/FAQ.html#is-nested-inline-markup-possible ``` This PR removes the bold, which fixes the link.
-
- 29 Jan, 2021 6 commits
-
-
Stas Bekman authored
-
Stas Bekman authored
-
Sylvain Gugger authored
* When on SageMaker, use its env variables for saves
* Address review comments
* Quality
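A hedged sketch of the idea: SageMaker training jobs expose save locations through environment variables; which exact variable the Trainer consults is an assumption here.

```python
import os

# SM_MODEL_DIR / SM_OUTPUT_DATA_DIR are set inside SageMaker training jobs;
# the local fallbacks keep the script runnable outside SageMaker.
model_dir = os.environ.get("SM_MODEL_DIR", "./model")
output_dir = os.environ.get("SM_OUTPUT_DATA_DIR", "./output")
```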
-
Julien Plu authored
-
Ethan Chau authored
-
Nicolas Patry authored
* Adding a new `return_full_text` parameter to TextGenerationPipeline. For text-generation, the input is sometimes used only as prompting text. In that context, prefixing `generated_text` with the actual input forces the caller to take an extra step to remove it. The proposed change adds a new parameter, `return_full_text` (kept optional for backward compatibility), that enables the caller to prevent the prefix from being added.
* Doc quality.
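A short illustration of the new parameter; the model choice and prompt are arbitrary.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
# With return_full_text=False, generated_text no longer starts with the prompt,
# so the caller does not have to strip it manually.
completion = generator("Once upon a time", max_length=30, return_full_text=False)
print(completion[0]["generated_text"])
```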
-
- 28 Jan, 2021 11 commits
-
-
abhishek thakur authored
-
abhishek thakur authored
-
Stas Bekman authored
* expand install instructions
* fix
* white space
* rewrite as discussed in the PR
* Apply suggestions from code review Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* change the wording to encourage issue reports
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
Daniel Stancl authored
* Remove redundant test_head_masking = True flags
* Remove all redundant test_head_masking flags in PyTorch test_modeling_* files
* Make test_head_masking = True the default choice in test_modeling_tf_common.py
* Remove all redundant test_head_masking flags in TensorFlow test_modeling_tf_* files
* Put back test_head_masking=False for TFT5 models
-
Joe Davison authored
-
Sylvain Gugger authored
-
Funtowicz Morgan authored
* Fix computation of attention_probs when head_mask is provided. Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
* Apply changes to the template
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
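A self-contained sketch of the step being fixed, assuming a BERT-style attention layer: the per-head mask is applied to the attention probabilities so that masked heads contribute nothing to the context.

```python
import torch

batch, heads, seq = 2, 4, 8
attention_scores = torch.randn(batch, heads, seq, seq)
head_mask = torch.tensor([1.0, 0.0, 1.0, 1.0]).view(1, heads, 1, 1)  # zero out head 1

attention_probs = torch.softmax(attention_scores, dim=-1)
attention_probs = attention_probs * head_mask  # masked heads now produce zero output
```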
-
Nicolas Patry authored
-
Lysandre Debut authored
-
Lysandre Debut authored
* Allow partial loading of a cached tokenizer
* Warning > Info
* Update src/transformers/tokenization_utils_base.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Raise error if not local_files_only
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
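A quick sketch of the behaviour touched here (the checkpoint name is illustrative): with `local_files_only=True` the tokenizer is loaded purely from the local cache, and missing files raise an error instead of silently producing a partially initialized tokenizer.

```python
from transformers import AutoTokenizer

# Succeeds only if all required tokenizer files are already in the local cache.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", local_files_only=True)
```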
-
abhishek thakur authored
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
-
- 27 Jan, 2021 21 commits
-
-
Stefan Schweter authored
* tests: add integration tests for new Bort model
* bort: add conversion script from GluonNLP to Transformers 🚀
* bort: minor cleanup (BORT -> Bort)
* add docs
* make fix-copies
* clean doc a bit
* correct docs
* Update docs/source/model_doc/bort.rst Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update docs/source/model_doc/bort.rst Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* correct dialogpt doc
* correct link
* Update docs/source/model_doc/bort.rst
* Update docs/source/model_doc/dialogpt.rst Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* make style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
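A hedged usage sketch; the checkpoint id `amazon/bort` is an assumption about where the converted GluonNLP weights are published.

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("amazon/bort")  # assumed checkpoint id
model = AutoModel.from_pretrained("amazon/bort")
outputs = model(**tokenizer("Bort is a compressed variant of BERT.", return_tensors="pt"))
```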
-
Stas Bekman authored
* fix --lr_scheduler_type choices
* rewrite to fix for all enum-based command-line args
* cleanup
* adjust test
* style
* Proposal that should work
* Remove needless code
* Fix test
Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
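A small sketch of what the fix enables: enum-backed fields such as `lr_scheduler_type` can be set from their string choices on a (simulated) command line.

```python
from transformers import HfArgumentParser, TrainingArguments

parser = HfArgumentParser(TrainingArguments)
# Simulated argv; --lr_scheduler_type accepts the enum's string values.
(training_args,) = parser.parse_args_into_dataclasses(
    ["--output_dir", "out", "--lr_scheduler_type", "cosine"]
)
print(training_args.lr_scheduler_type)
```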
-
Sylvain Gugger authored
* Allow --arg Value for booleans in HfArgumentParser
* Update last test
* Better error message
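A hedged sketch of the two accepted styles for boolean fields after this change, using a hypothetical dataclass.

```python
from dataclasses import dataclass, field
from transformers import HfArgumentParser

@dataclass
class DemoArguments:  # hypothetical options, for illustration only
    use_cache: bool = field(default=False)

parser = HfArgumentParser(DemoArguments)
(flag_style,) = parser.parse_args_into_dataclasses(["--use_cache"])            # --arg
(value_style,) = parser.parse_args_into_dataclasses(["--use_cache", "False"])  # --arg Value
print(flag_style.use_cache, value_style.use_cache)
```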
-
Sylvain Gugger authored
* When resuming training from checkpoint, Trainer loads the model
* Finish cleaning tests
* Address review comment
* Use global_step from state
-
Lysandre Debut authored
-
Lysandre Debut authored
-
Kiyoung Kim authored
* add tpu_zone and gcp_project in training_args_tf.py * make style Co-authored-by: kykim <kykim>
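A hedged sketch of the new fields; the TPU name, zone and project values are placeholders.

```python
from transformers import TFTrainingArguments

args = TFTrainingArguments(
    output_dir="out",
    tpu_name="my-tpu",            # placeholder TPU name
    tpu_zone="europe-west4-a",    # new field added by this commit (placeholder value)
    gcp_project="my-gcp-project", # new field added by this commit (placeholder value)
)
```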
-
Lysandre Debut authored
-
Julien Plu authored
-
Sylvain Gugger authored
* Add a flag for find_unused_parameters
* Apply suggestions from code review Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
* Remove negation
Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
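A short sketch, assuming the flag is exposed on TrainingArguments as `ddp_find_unused_parameters` and forwarded to DistributedDataParallel.

```python
from transformers import TrainingArguments

# False skips DistributedDataParallel's extra graph traversal when every
# parameter is known to receive a gradient in each step.
args = TrainingArguments(output_dir="out", ddp_find_unused_parameters=False)
```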
-
Julien Plu authored
* Start cleaning BERT
* Clean BERT and all models that depend on it
* Fix attribute name
* Apply style
* Apply Sylvain's comments
* Apply Lysandre's comments
* remove unused import
-
tomohideshibata authored
Co-authored-by: Tomohide Shibata <tomshiba@yahoo-corp.jp>
-
Julien Plu authored
* Rework documentation
* Update the template
* Trigger CI
* Restore the warning but with the TF logger
* Update convbert doc
-
Nicolas Patry authored
pipeline.
- If the table is empty then the line that contains `answer[0]` will fail.
- This PR adds a check to prevent the `answer[0]` access from failing.
- It also adds an early check for the presence of `table` and `query` to prevent a late failure and give a better error message.
- A few tests are added to make sure these errors are correctly raised.
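A hedged usage sketch of the pipeline these checks protect; the table contents are made up, and the default table-question-answering model is downloaded if none is specified.

```python
from transformers import pipeline

table_qa = pipeline("table-question-answering")
table = {
    "Repository": ["transformers", "datasets"],
    "Stars": ["40000", "7000"],
}
# Omitting `table` or `query`, or passing an empty table, now raises a clear
# error early instead of failing later on `answer[0]`.
print(table_qa(table=table, query="How many stars does the transformers repository have?"))
```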
-
Patrick von Platen authored
-
jncasey authored
* Fix auto-resume training from checkpoint * style fixes
-
Sylvain Gugger authored
-
Julien Plu authored
-
Patrick von Platen authored
* update jaxlib
* Update setup.py
* update table
-
abhishek thakur authored
* finalize convbert
* finalize convbert
* fix
* fix
* fix
* push
* fix
* tf image patches
* fix torch model
* tf tests
* conversion
* everything aligned
* remove print
* tf tests
* fix tf
* make tf tests pass
* everything works
* fix init
* fix
* special treatment for sepconv1d
* style
* 🙏
* add doc and cleanup
* add electra test again
* fix doc
* fix doc again
* fix doc again
* Update src/transformers/modeling_tf_pytorch_utils.py Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Update src/transformers/models/conv_bert/configuration_conv_bert.py Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* Update docs/source/model_doc/conv_bert.rst Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/auto/configuration_auto.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/models/conv_bert/configuration_conv_bert.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* conv_bert -> convbert
* more fixes from review
* add conversion script
* dont use pretrained embed
* unused config
* suggestions from julien
* some more fixes
* p -> param
* fix copyright
* fix doc
* Update src/transformers/models/convbert/configuration_convbert.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* comments from reviews
* fix-copies
* fix style
* revert shape_list
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
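A hedged loading sketch for the newly added model; `YituTech/conv-bert-base` is assumed to be the released checkpoint name.

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")  # assumed checkpoint id
model = AutoModel.from_pretrained("YituTech/conv-bert-base")
inputs = tokenizer("ConvBERT mixes self-attention with span-based dynamic convolution.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```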
-
Patrick von Platen authored
-