- 22 Jan, 2021 4 commits
- Sylvain Gugger authored
- Sylvain Gugger authored
  * Fixes to run_seq2seq and instructions
  * Add more defaults for summarization
- Julien Plu authored
  * Fix saved model tests + fix a graph issue in Longformer
  * Apply style
- Stefan Schweter authored
- 21 Jan, 2021 11 commits
- Sylvain Gugger authored
  * Fix memory regression in Seq2Seq example
  * Fix test and properly deal with -100
  * Easier condition with device safety
  * Patch for MBartTokenizerFast
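  For context, -100 is the ignore index that PyTorch's cross-entropy loss skips, which is why padded label positions are set to it. A minimal sketch of the usual masking pattern (the helper name is illustrative):

  ```python
  import torch

  # -100 is the default ignore_index of torch.nn.CrossEntropyLoss, so
  # label positions set to it contribute nothing to the loss.
  def mask_pad_labels(labels: torch.Tensor, pad_token_id: int) -> torch.Tensor:
      labels = labels.clone()
      labels[labels == pad_token_id] = -100
      return labels
  ```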
- Julien Plu authored
  * Fix Seq2Seq models for serving
  * Apply style
  * Fix Longformer
  * Fix mBart/Pegasus/Blenderbot
  * Apply style
  * Add a main intermediate layer
  * Apply style
  * Remove import
  * Apply tf.function to Longformer
  * Fix utils check_copy
  * Update S2S template
  * Fix BART + Blenderbot
  * Fix BlenderbotSmall
  * Fix BlenderbotSmall
  * Fix BlenderbotSmall
  * Fix MBart
  * Fix Marian
  * Fix Pegasus + template
  * Apply style
  * Fix common attributes test
  * Forgot to fix the LED test
  * Apply Patrick's comment on LED Decoder
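  These serving fixes revolve around giving the forward pass a concrete `tf.function` input signature so the model can be exported as a SavedModel. A minimal sketch of that pattern, assuming a generic Keras-style model (the wrapper class and signature are illustrative, not the library's exact API):

  ```python
  import tensorflow as tf

  class ServableWrapper(tf.Module):
      def __init__(self, model):
          super().__init__()
          self.model = model

      # A fixed input signature makes TensorFlow trace one concrete graph
      # that SavedModel consumers such as TF Serving can call directly.
      @tf.function(input_signature=[
          tf.TensorSpec((None, None), tf.int32, name="input_ids"),
          tf.TensorSpec((None, None), tf.int32, name="attention_mask"),
      ])
      def serving(self, input_ids, attention_mask):
          return self.model(input_ids, attention_mask)

  # Export with: tf.saved_model.save(wrapper, "export/", signatures=wrapper.serving)
  ```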
- Nicolas Patry authored
  * Changing model default for TableQuestionAnsweringPipeline. Discussion: https://discuss.huggingface.co/t/table-question-answering-is-not-an-available-task-under-pipeline/3284/6
  * Updating slow tests that were out of sync.
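  For reference, a usage sketch of the pipeline whose default model changed here (assumes the TAPAS dependencies are installed; the table data is made up):

  ```python
  import pandas as pd
  from transformers import pipeline

  # With no model argument, the pipeline resolves the task's default checkpoint.
  tqa = pipeline("table-question-answering")

  # TAPAS expects every cell as a string.
  table = pd.DataFrame({"City": ["Paris", "Berlin"], "Population": ["2.1M", "3.6M"]})
  print(tqa(table=table, query="Which city has 3.6M inhabitants?"))
  ```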
- Julien Plu authored
  * Fix GELU precision
  * Fix gelu_fast
  * Naming
  * Fix usage and apply style
  * Add TF gelu approximate version
  * Add TF gelu approximate version
  * Add TF gelu approximate version
  * Apply style
  * Fix Albert
  * Remove the usage of the Activation layer
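  The approximate variant added here is the tanh-based GELU; the precision fixes come down to keeping constants in the input's dtype. A sketch (function name illustrative):

  ```python
  import math
  import tensorflow as tf

  # Tanh approximation of GELU:
  # 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
  def gelu_approximate(x: tf.Tensor) -> tf.Tensor:
      # Cast constants to x.dtype so float16/bfloat16 inputs keep their precision.
      coeff = tf.cast(0.044715, x.dtype)
      scale = tf.cast(math.sqrt(2.0 / math.pi), x.dtype)
      return 0.5 * x * (1.0 + tf.tanh(scale * (x + coeff * tf.pow(x, 3))))
  ```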
- Suraj Patil authored
  * Fix head mask in model_parallel
  * Pass correct head mask
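  Under model parallelism, layers can live on different GPUs, so the per-layer head mask has to follow each layer's inputs to its device. A minimal sketch of that pattern (the helper is hypothetical):

  ```python
  import torch

  def layer_head_mask(head_mask, layer_idx: int, hidden_states: torch.Tensor):
      # Select this layer's row and move it to the device the layer runs on.
      if head_mask is None:
          return None
      return head_mask[layer_idx].to(hidden_states.device)
  ```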
- Patrick von Platen authored
- Patrick von Platen authored
- guillaume-be authored
  * Moved ProphetNetForCausalLM's parent initialization after config update
  * Added unit tests for generation for ProphetNetForCausalLM
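  The ordering matters because the parent constructor builds submodules from the config it receives, so edits made after `super().__init__()` never reach them. A toy sketch of the corrected order (both classes are stand-ins, not the real ones):

  ```python
  import copy

  class BaseModel:  # stand-in for the library's pretrained base class
      def __init__(self, config):
          self.config = config  # the real base builds submodules from config here

  class CausalLMWrapper(BaseModel):  # illustrative name
      def __init__(self, config):
          config = copy.deepcopy(config)
          config.is_decoder = True   # update the config first...
          super().__init__(config)   # ...then let the parent build from it
  ```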
- Lysandre Debut authored
- Muennighoff authored
  * Fix typo
  Co-authored-by: Suraj Patil <surajp815@gmail.com>
- Stas Bekman authored
  * No --deepspeed and --sharded_ddp together
  * Update src/transformers/trainer.py
  * Style
  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
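  Both flags take over sharding of the model across processes, so the trainer rejects the combination. A sketch of such a guard (the validation function is illustrative; the flag names are the real ones):

  ```python
  def validate_args(args):
      # DeepSpeed and sharded DDP each manage their own model sharding,
      # so enabling both would fight over the same responsibility.
      if args.deepspeed and args.sharded_ddp:
          raise ValueError("--deepspeed and --sharded_ddp cannot be used together.")
  ```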
- 20 Jan, 2021 17 commits
- Sylvain Gugger authored
- Darigov Research authored
  * fix: Makes small typo corrections & standardises glossary
  * feat: Adds introduction & links to transformer flashcards
  * feat: Adds attribution & adjustments requested in #8949
  * feat: Adds flashcards to community.md
  * refactor: Removes flashcards from glossary
- Sylvain Gugger authored
  * Fix WANDB_DISABLED test
  * Remove duplicate import
  * Make a test that actually works...
  * Fix style
- Sylvain Gugger authored
- Stas Bekman authored
- Gunjan Chhablani authored
  * Fix Trainer and Args to mention AdamW, not Adam.
  * Update the docs for Training Arguments.
  * Change arguments adamw_* to adam_*
  * Fixed links to AdamW in TrainerArguments docs
  * Fix line length in Training Args docs.
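  For context, the `adam_*` arguments configure the AdamW optimizer the Trainer builds by default; a usage sketch with illustrative values:

  ```python
  from transformers import TrainingArguments

  # Despite the adam_* prefix, these drive AdamW (Adam with decoupled
  # weight decay), hence the documentation rename.
  args = TrainingArguments(
      output_dir="out",
      learning_rate=5e-5,
      adam_beta1=0.9,
      adam_beta2=0.999,
      adam_epsilon=1e-8,
      weight_decay=0.01,
  )
  ```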
- NielsRogge authored
- NielsRogge authored
  * Add DebertaForMaskedLM, DebertaForTokenClassification, DebertaForQuestionAnswering
  * Add docs and fix quality
  * Fix Deberta not having pooler
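  A usage sketch of one of the new heads (the checkpoint name is assumed to be the standard base one):

  ```python
  from transformers import DebertaForQuestionAnswering, DebertaTokenizer

  tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")
  model = DebertaForQuestionAnswering.from_pretrained("microsoft/deberta-base")

  inputs = tokenizer("Who wrote it?", "It was written by Ada.", return_tensors="pt")
  outputs = model(**inputs)  # outputs.start_logits / outputs.end_logits
  ```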
- Sylvain Gugger authored
- acul3 authored
  * Update run_mlm.py
  * Add T5 model to transformers-cli convert
  * Update run_mlm.py same as master
  * Update converting model docs
  * Update converting model docs
  * Update convert.py
  * Trigger notification
  * Update import sorting
  * Fix typo t5
- Julien Plu authored
- Julien Plu authored
  * Create new embeddings + add to BERT
  * Add Albert
  * Add DistilBert
  * Add Albert + Electra + Funnel
  * Add Longformer + Lxmert
  * Add last models
  * Apply style
  * Update the template
  * Remove unused imports
  * Rename attribute
  * Import embeddings in their own model file
  * Replace word_embeddings per weight
  * Fix naming
  * Fix Albert
  * Fix Albert
  * Fix Longformer
  * Fix Lxmert, Mobilebert and MPNet
  * Fix copy
  * Fix template
  * Update the get weights function
  * Update src/transformers/modeling_tf_utils.py
  * Update src/transformers/models/electra/modeling_tf_electra.py
  * Address Sylvain's comments
  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
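  A minimal sketch of the kind of standalone word-embedding layer this refactor introduces, written as a plain Keras layer (names illustrative, not the library's exact classes):

  ```python
  import tensorflow as tf

  class WordEmbeddings(tf.keras.layers.Layer):
      def __init__(self, vocab_size: int, hidden_size: int, **kwargs):
          super().__init__(**kwargs)
          self.vocab_size = vocab_size
          self.hidden_size = hidden_size

      def build(self, input_shape):
          # A single weight matrix, exposed so it can double as the output
          # projection when input and output embeddings are tied.
          self.weight = self.add_weight(
              name="weight",
              shape=(self.vocab_size, self.hidden_size),
              initializer="truncated_normal",
          )
          super().build(input_shape)

      def call(self, input_ids):
          return tf.gather(self.weight, input_ids)
  ```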
- Julien Plu authored
  * Fix label datatype
  * Apply style
- Sylvain Gugger authored
- Sylvain Gugger authored
- LSinev authored
- Sylvain Gugger authored
  * Restrain tokenizer.model_max_length default
  * Fix indent
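  For context: when a checkpoint defines no maximum input length, `tokenizer.model_max_length` is left at a very large sentinel rather than a usable bound (assumed here to be `int(1e30)`, the library's `VERY_LARGE_INTEGER`). A sketch of guarding against it (the helper is illustrative):

  ```python
  from transformers import AutoTokenizer

  VERY_LARGE = int(1e30)  # assumed sentinel for "no max length defined"

  def effective_max_length(tokenizer, fallback: int = 512) -> int:
      # Treat the sentinel as "unset" and substitute a sane bound.
      if tokenizer.model_max_length >= VERY_LARGE:
          return fallback
      return tokenizer.model_max_length

  tok = AutoTokenizer.from_pretrained("bert-base-uncased")
  print(effective_max_length(tok))  # 512: BERT defines a real max length
  ```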
- 19 Jan, 2021 8 commits
- Sylvain Gugger authored
  * Fix model templates and use less than 119 chars
  * Missing new line
- Daniel Stancl authored
  * Add decoder_head_mask for PyTorch T5 model
  * Add decoder_head_mask args into T5Model and T5ForConditionalGeneration
  * Slightly change the order of input args to be in accordance with the convention of BART-based models introduced in PR #9569
  * Make style for modeling_t5.py
  * Add decoder_head_mask for TF T5 models
  * Separate head_mask and decoder_head_mask args in TF T5 models
  * Slightly change the order of input args to follow the convention of BART-based models updated in PR #9569
  * Update test_forward_signature in tests/test_modeling_tf_common.py w.r.t. the changed order of input args
  * Add FutureWarnings for T5 and TFT5 models
  * Add FutureWarnings for T5 and TFT5 models warning a user that input argument `head_mask` was split into two arguments - `head_mask` and `decoder_head_mask`
  * Add default behaviour - `decoder_head_mask` is set to copy `head_mask`
  * Fix T5 modeling and FutureWarning
  * Make proper usage of head_mask and decoder_head_mask in cross_attention
  * Fix conditions for raising FutureWarning
  * Reformat FutureWarning in T5 modeling
  * Refactor the warning message
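  With this change both frameworks accept separate encoder and decoder head masks; a usage sketch against the small public checkpoint (all-ones masks keep every head active, zeros would prune heads):

  ```python
  import torch
  from transformers import T5Model, T5Tokenizer

  tokenizer = T5Tokenizer.from_pretrained("t5-small")
  model = T5Model.from_pretrained("t5-small")

  enc = tokenizer("translate English to German: hello", return_tensors="pt")
  dec = tokenizer("hallo", return_tensors="pt")

  # One row per layer, one entry per head.
  head_mask = torch.ones(model.config.num_layers, model.config.num_heads)
  decoder_head_mask = torch.ones(model.config.num_decoder_layers, model.config.num_heads)

  out = model(
      input_ids=enc.input_ids,
      decoder_input_ids=dec.input_ids,
      head_mask=head_mask,                  # masks encoder attention heads
      decoder_head_mask=decoder_head_mask,  # masks decoder attention heads
  )
  ```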
- Sylvain Gugger authored
  * New run_seq2seq script
  * Add tests
  * Mark as slow
  * Update examples/seq2seq/run_seq2seq.py
  * Update src/transformers/data/data_collator.py
  * Update src/transformers/data/data_collator.py
  * Address review comments
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
  Co-authored-by: Suraj Patil <surajp815@gmail.com>
- Julien Plu authored
  * Fix Flaubert and XLM
  * Fix Flaubert and XLM
  * Apply style
- max yue authored
  Fix for:
      File "/share/apps/anaconda3/envs/my_env/lib/python3.7/site-packages/transformers/integrations.py", line 419, in __init__
        self._SummaryWriter = SummaryWriter
      UnboundLocalError: local variable 'SummaryWriter' referenced before assignment
- Yusuke Mori authored
  * Update past_key_values in gpt2 (#9391)
  * Update generation_utils, and rename some items
  * Update modeling_gpt2 to avoid an error in gradient_checkpointing
  * Remove 'reorder_cache' from util and add variations to XLNet, TransfoXL, GPT-2
  * Change the location of '_reorder_cache' in modeling files
  * Add '_reorder_cache' in modeling_ctrl
  * Fix a bug from my last commit in CTRL
  * Add '_reorder_cache' to GPT2DoubleHeadsModel
  * Manage 'use_cache' in config of test_modeling_gpt2
  * Clean up the docstring
  * Update src/transformers/models/gpt2/modeling_gpt2.py
  * Fix the docstring (GPT-2, CTRL)
  * Improve gradient_checkpointing behavior
  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
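  `_reorder_cache` is the hook `generate()` calls during beam search to realign cached key/value states with the surviving beams; a sketch of the common implementation (written as a free function here):

  ```python
  import torch

  def reorder_cache(past, beam_idx: torch.LongTensor):
      # past: one tuple per layer, each holding tensors shaped
      # (batch * num_beams, num_heads, seq_len, head_dim); select along
      # the beam dimension so each beam keeps its own history.
      return tuple(
          tuple(state.index_select(0, beam_idx) for state in layer_past)
          for layer_past in past
      )
  ```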
- Sylvain Gugger authored
- Sylvain Gugger authored