- 05 Oct, 2020 13 commits
-
-
Sylvain Gugger authored
-
Sylvain Gugger authored
* PoC on RAG * Format class name/obj name * Better name in message * PoC on one TF model * Add PyTorch and TF dummy objects + script * Treat scikit-learn * Bad copy pastes * Typo
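A sketch of the dummy-object pattern this PR introduces, under assumptions: the helper name `_require_backend` and the error wording are illustrative, not the exact code the script generates.

```python
# Illustrative sketch only: helper name and message are assumptions,
# not the exact code generated by the dummy-objects script.
def _require_backend(obj, backend):
    # Fail with an actionable message instead of an opaque NameError.
    raise ImportError(
        f"{type(obj).__name__} requires the {backend} library, "
        f"but it was not found in your environment."
    )


class RagModel:
    # Placeholder class exposed when the real backend is not installed.
    def __init__(self, *args, **kwargs):
        _require_backend(self, "PyTorch")
```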
-
Joshua H authored
'The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.' I don't know how to change the 'How to use this model directly from the 🤗/transformers library:' part since it is not part of the model card.
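Acting on that deprecation warning is a one-line change in user code; a minimal example (the checkpoint name is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# AutoModelForCausalLM replaces the deprecated AutoModelWithLMHead for
# causal LMs; use AutoModelForMaskedLM or AutoModelForSeq2SeqLM otherwise.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
```
-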
Amine Abdaoui authored
* docs(pretrained_models): fix num parameters * fix(pretrained_models): correct typo Co-authored-by: Amin <amin.geotrend@gmail.com>
-
Malte Pietsch authored
* fix squad tokenization for roberta & co * change to pure type based check * sort imports
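A "pure type-based check" means branching on the tokenizer's class rather than on its name string; a hedged sketch of the idea, not the exact code from this fix:

```python
from transformers import RobertaTokenizer


def uses_token_type_ids(tokenizer) -> bool:
    # Type-based check: RoBERTa-style tokenizers (including subclasses)
    # do not use token type ids, so test the class, not a name string.
    return not isinstance(tokenizer, RobertaTokenizer)
```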
-
Sylvain Gugger authored
-
Cola authored
* 🚩 Add `power` argument for TF PolynomialDecay * 🚩 Create default optimizer with power * 🚩 Add argument to training args * 🚨 Clean code format * 🚨 Fix black warning * 🚨 Fix code format
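The new `power` argument maps onto Keras' polynomial decay schedule; a minimal standalone example with illustrative values:

```python
import tensorflow as tf

# power=2.0 gives quadratic decay instead of the default linear (power=1.0).
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5e-5,
    decay_steps=10_000,
    end_learning_rate=0.0,
    power=2.0,
)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
```
-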
Lysandre Debut authored
-
Nathan Cooper authored
* Create README.md * Update model_cards/ncoop57/bart-base-code-summarizer-java-v0/README.md Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Forrest Iandola authored
* configuration_squeezebert.py
* thin wrapper around bert tokenizer
* fix typos
* wip sb model code
* wip modeling_squeezebert.py. Next step is to get the multi-layer-output interface working
* set up squeezebert to use BertModelOutput when returning results.
* squeezebert documentation
* formatting
* allow head mask that is an array of [None, ..., None]
* docs
* docs cont'd
* path to vocab
* docs and pointers to cloud files (WIP)
* line length and indentation
* squeezebert model cards
* formatting of model cards
* untrack modeling_squeezebert_scratchpad.py
* update aws paths to vocab and config files
* get rid of stub of NSP code, and advise users to pretrain with mlm only
* fix rebase issues
* redo rebase of modeling_auto.py
* fix issues with code formatting
* more code format auto-fixes
* move squeezebert before bert in tokenization_auto.py and modeling_auto.py because squeezebert inherits from bert
* tests for squeezebert modeling and tokenization
* fix typo
* move squeezebert before bert in modeling_auto.py to fix inheritance problem
* disable test_head_masking, since squeezebert doesn't yet implement head masking
* fix issues exposed by the test_modeling_squeezebert.py
* fix an issue exposed by test_tokenization_squeezebert.py
* fix issue exposed by test_modeling_squeezebert.py
* auto generated code style improvement
* issue that we inherited from modeling_xxx.py: SqueezeBertForMaskedLM.forward() calls self.cls(), but there is no self.cls, and I think the goal was actually to call self.lm_head()
* update copyright
* resolve failing 'test_hidden_states_output' and remove unused encoder_hidden_states and encoder_attention_mask
* docs
* add integration test. rename squeezebert-mnli --> squeezebert/squeezebert-mnli
* autogenerated formatting tweaks
* integrate feedback from patrickvonplaten and sgugger to programming style and documentation strings
* tiny change to order of imports
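Once merged, the renamed checkpoint loads through the auto classes; a short usage sketch:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# The commit renames the checkpoint squeezebert-mnli --> squeezebert/squeezebert-mnli.
name = "squeezebert/squeezebert-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
```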
-
Sylvain Gugger authored
* Cleanup documentation for BART, Marian, MBART and Pegasus * Cleanup documentation for BART, Marian, MBART and Pegasus
-
Alexandr authored
* LayoutLM: add exception handling for bbox values. To replicate the unhandled error: - In `test_modeling_layoutlm.py` set `range_bbox=1025`, i.e. greater than 1024 - Run `pytest tests/test_modeling_layoutlm.py` The requirement for bbox values to be within the range 0-1000 is documented, but if it is violated, the error message does not make the issue clear. * Update src/transformers/modeling_layoutlm.py Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
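The documented contract is that every bbox coordinate lies in the 0-1000 range; a hedged sketch of the kind of early check this commit adds (function name and error wording are illustrative):

```python
def validate_bbox(bbox):
    # LayoutLM expects normalized box coordinates in [0, 1000]; fail early
    # with a clear message instead of an opaque embedding lookup error.
    for coord in bbox:
        if not 0 <= coord <= 1000:
            raise ValueError(
                f"Bbox coordinate {coord} is outside the 0-1000 range; "
                "normalize your bounding boxes before calling LayoutLM."
            )


validate_bbox([48, 84, 73, 128])     # passes silently
# validate_bbox([48, 84, 73, 1025])  # raises ValueError
```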
-
Dhaval Taunk authored
-
- 04 Oct, 2020 2 commits
-
-
Sylvain Gugger authored
-
Suraj Patil authored
-
- 02 Oct, 2020 1 commit
-
-
Sam Shleifer authored
-
- 01 Oct, 2020 24 commits
-
-
Sam Shleifer authored
-
Sam Shleifer authored
-
Sylvain Gugger authored
* Fix seq2seq example test * Fix bad copy-paste * Also save the state
-
Sylvain Gugger authored
* Trainer should not modify its TrainingArguments * Trainer should not modify its TrainingArguments * Trainer should not modify its TrainingArguments * Add test of resumed training * Fixes * Non multiGPU test * Clean Trainer state * Add more to the state * Documentation * One last test * Make resume training test more complete * Unwanted changes
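The discipline behind this series is simple: the user's TrainingArguments stay read-only, and anything the training loop needs to mutate lives on a separate state object. A hedged sketch of the pattern, not the Trainer's actual code:

```python
from dataclasses import dataclass


@dataclass
class TrainerState:
    # Mutable bookkeeping lives here, never on the user's TrainingArguments.
    global_step: int = 0
    max_steps: int = 0


class Trainer:
    def __init__(self, args):
        self.args = args             # treated as read-only
        self.state = TrainerState()  # all derived/mutable values go here

    def set_up(self, num_examples, batch_size):
        # Derive the step budget into state instead of overwriting args.
        if getattr(self.args, "max_steps", -1) > 0:
            self.state.max_steps = self.args.max_steps
        else:
            self.state.max_steps = num_examples // batch_size
```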
-
Sam Shleifer authored
-
Suraj Patil authored
-
Patrick von Platen authored
-
Patrick von Platen authored
* clean T5 * fix t5 tests * fix index typo * fix tf common test * fix examples * change positional ordering for Bart and FSMT * add signature test * clean docs and add tests * add docs to encoder decoder * clean docs * correct two doc strings * remove sig test for TF Electra & Funnel * fix tf t5 slow tests * fix input_ids to inputs in tf * Update src/transformers/modeling_bart.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/modeling_bart.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * implement lysandre results * make style * fix encoder decoder typo * fix tf slow tests * fix slow tests * renaming * remove unused input Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
Muhammad Harris authored
* t5 t5 community notebook added * author link updated * t5 t5 community notebook added * author link updated * new colab link updated Co-authored-by: harris <muhammad.harris@visionx.io>
-
Kai Fricke authored
-
Kai Fricke authored
-
Alexandr authored
Co-authored-by: Alexandr Maslov <avmaslov3@gmail.com>
-
Julien Chaumond authored
-
Julien Chaumond authored
-
Adalberto authored
* Create README.md * language metadata Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Martin Müller authored
-
allenyummy authored
-
ahotrod authored
Model now fine-tuned on Transformers 3.1.0; the previous, out-of-date model was fine-tuned on Transformers 2.3.0.
-
Abed khooli authored
Model card for akhooli/personachat-arabic
-
Bayartsogt Yadamsuren authored
* Creating readme for bert-base-mongolian-cased * Update model_cards/bayartsogt/bert-base-mongolian-cased/README.md Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Bayartsogt Yadamsuren authored
-
Akshay Gupta authored
Making transformers readme more robust.
-
Lysandre Debut authored
-
Sam Shleifer authored
* Clean clamp * boom boom * Take some other changes * boom boom * boom boom * boom boom * one chg * fix test * Use finfo * style
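"Use finfo" here means clamping to the dtype's representable range rather than a hard-coded constant, which matters for fp16; a minimal sketch mirroring the pattern:

```python
import torch


def clamp_to_dtype_range(hidden_states: torch.Tensor) -> torch.Tensor:
    # Clamp just inside the dtype's max so float16 values don't hit inf/nan.
    if hidden_states.dtype == torch.float16 and (
        torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any()
    ):
        clamp_value = torch.finfo(hidden_states.dtype).max - 1000
        hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
    return hidden_states
```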
-