- 20 Oct, 2020 2 commits
-
-
Patrick von Platen authored
-
Patrick von Platen authored
-
- 19 Oct, 2020 2 commits
-
-
Weizhen authored
* add new model prophetnet
* modify codes as suggested (v1)
* add prophetnet test files
* still bugs, because of changed output formats of encoder and decoder
* move prophetnet into the latest version
* clean integration tests
* clean tokenizers
* add xlm config to init
* correct typo in init
* further refactoring
* continue refactor
* save parallel
* add decoder_attention_mask
* fix use_cache vs. past_key_values
* fix common tests
* change decoder output logits
* fix xlm tests
* make common tests pass
* change model architecture
* add tokenizer tests
* finalize model structure
* no weight mapping
* correct n-gram stream attention mask as discussed with qweizhen
* remove unused import
* fix index.rst
* fix tests
* delete unnecessary code
* add fast integration test
* rename weights
* final weight remapping
* save intermediate
* Descriptions for Prophetnet Config File
* finish all models
* finish new model outputs
* delete unnecessary files
* refactor encoder layer
* add dummy docs
* code quality
* fix tests
* add model pages to doctree
* further refactor
* more refactor, more tests
* finish code refactor and tests
* remove unnecessary files
* further clean up
* add docstring template
* finish tokenizer doc
* finish prophetnet
* fix copies
* fix typos
* fix tf tests
* fix fp16
* fix tf test 2nd try
* fix code quality
* add test for each model
* merge new tests to branch
* Update model_cards/microsoft/prophetnet-large-uncased-cnndm/README.md
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
* Update model_cards/microsoft/prophetnet-large-uncased-cnndm/README.md
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
* Update src/transformers/modeling_prophetnet.py
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
* Update utils/check_repo.py
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
* apply Sam's and Sylvain's comments
* make style
* remove unnecessary code
* Update README.md
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update README.md
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/configuration_prophetnet.py
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
* implement Lysandre's comments
* correct docs
* fix isort
* fix tokenizers
* fix copies

Co-authored-by: weizhen <weizhen@mail.ustc.edu.cn>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
-
Jordi Mas authored
* Julibert model card * Fix text
-
- 16 Oct, 2020 2 commits
-
-
Patrick von Platen authored
-
rmroczkowski authored
* HerBERT transformer model for Polish language understanding.
* HerbertTokenizerFast generated with HerbertConverter
* Herbert base and large model cards
* Herbert model cards with tags
* Herbert tensorflow models
* Herbert model tests based on Bert test suite
* src/transformers/tokenization_herbert.py edited online with Bitbucket
* src/transformers/tokenization_herbert.py edited online with Bitbucket
* docs/source/model_doc/herbert.rst edited online with Bitbucket
* Herbert tokenizer tests and bug fixes
* src/transformers/configuration_herbert.py edited online with Bitbucket
* Copyrights and tests for TFHerbertModel
* model_cards/allegro/herbert-base-cased/README.md edited online with Bitbucket
* model_cards/allegro/herbert-large-cased/README.md edited online with Bitbucket
* Bug fixes after testing
* Reformat modified_only_fixup
* Proper order of configuration
* Herbert proper documentation formatting
* Formatting with make modified_only_fixup
* Dummies fixed
* Adding missing models to documentation
* Removing HerBERT model as it is a simple extension of BERT
* Update model_cards/allegro/herbert-base-cased/README.md
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
* Update model_cards/allegro/herbert-large-cased/README.md
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
* HerbertTokenizer deprecated configuration removed

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
- 15 Oct, 2020 4 commits
-
-
David S. Lim authored
* model card for bert-base-NER
* add meta data up top

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Julien Chaumond authored
cc @Narsil @mfuntowicz @joeddav
-
Julien Chaumond authored
see d99ed7ad
-
Julien Chaumond authored
-
- 14 Oct, 2020 6 commits
-
-
Nils Reimers authored
* Create README.md
* Update model_cards/sentence-transformers/LaBSE/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
sarahlintang authored
* Create README.md
* Update model_cards/sarahlintang/IndoBERT/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Julien Chaumond authored
-
Zhuosheng Zhang authored
-
Sagor Sarker authored
-
XiaoqiJiao authored
-
- 12 Oct, 2020 1 commit
-
-
Alex Combessie authored
-
- 10 Oct, 2020 1 commit
-
-
Andrew Kane authored
-
- 09 Oct, 2020 1 commit
-
-
Joe Davison authored
-
- 07 Oct, 2020 9 commits
-
-
Blaise Cruz authored
-
Bobby Donchev authored
* Create README.md
* Update README.md
* Apply suggestions from code review

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Keshan authored
* [Model card] SinhalaBERTo model. This is the model card for the keshan/SinhalaBERTo model.
* Update model_cards/keshan/SinhalaBERTo/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Amine Abdaoui authored
Co-authored-by: Amin <amin.geotrend@gmail.com>
-
Abed khooli authored
-
dartrevan authored
-
Ilias Chalkidis authored
Minor changes: add arXiv link + layout improvement + fix typos
-
Abhilash Majumder authored
-
Julien Chaumond authored
by @nikkon3
-
- 06 Oct, 2020 4 commits
-
-
Ahmed Elnaggar authored
It should be T5-3B not T5-3M.
-
cedspam authored
-
Ilias Chalkidis authored
* Create README.md
Model description for all LEGAL-BERT models, published as part of "LEGAL-BERT: The Muppets straight out of Law School", Chalkidis et al., 2020, in Findings of EMNLP 2020.
* Update model_cards/nlpaueb/legal-bert-base-uncased/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Ahmed Elnaggar authored
* Add ProtT5-XL-BFD model card
* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 05 Oct, 2020 3 commits
-
-
Joshua H authored
'The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.' I don't know how to change the 'How to use this model directly from the 🤗/transformers library:' part, since it is not part of the model-paper.
-
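The deprecation notice quoted above maps the old `AutoModelWithLMHead` onto three task-specific Auto classes. A minimal sketch of the suggested migration, assuming the `transformers` library is installed ("gpt2" is used here only as an example checkpoint, not one from the commit):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# AutoModelForCausalLM replaces the deprecated AutoModelWithLMHead
# for causal language models such as GPT-2. For masked LMs use
# AutoModelForMaskedLM, and for encoder-decoder models use
# AutoModelForSeq2SeqLM.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
```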
Nathan Cooper authored
* Create README.md
* Update model_cards/ncoop57/bart-base-code-summarizer-java-v0/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Forrest Iandola authored
* configuration_squeezebert.py
* thin wrapper around bert tokenizer
* fix typos
* wip sb model code
* wip modeling_squeezebert.py. Next step is to get the multi-layer-output interface working
* set up squeezebert to use BertModelOutput when returning results.
* squeezebert documentation
* formatting
* allow head mask that is an array of [None, ..., None]
* docs
* docs cont'd
* path to vocab
* docs and pointers to cloud files (WIP)
* line length and indentation
* squeezebert model cards
* formatting of model cards
* untrack modeling_squeezebert_scratchpad.py
* update aws paths to vocab and config files
* get rid of stub of NSP code, and advise users to pretrain with mlm only
* fix rebase issues
* redo rebase of modeling_auto.py
* fix issues with code formatting
* more code format auto-fixes
* move squeezebert before bert in tokenization_auto.py and modeling_auto.py because squeezebert inherits from bert
* tests for squeezebert modeling and tokenization
* fix typo
* move squeezebert before bert in modeling_auto.py to fix inheritance problem
* disable test_head_masking, since squeezebert doesn't yet implement head masking
* fix issues exposed by test_modeling_squeezebert.py
* fix an issue exposed by test_tokenization_squeezebert.py
* fix issue exposed by test_modeling_squeezebert.py
* auto generated code style improvement
* fix issue that we inherited from modeling_xxx.py: SqueezeBertForMaskedLM.forward() calls self.cls(), but there is no self.cls, and I think the goal was actually to call self.lm_head()
* update copyright
* resolve failing 'test_hidden_states_output' and remove unused encoder_hidden_states and encoder_attention_mask
* docs
* add integration test. rename squeezebert-mnli --> squeezebert/squeezebert-mnli
* autogenerated formatting tweaks
* integrate feedback from patrickvonplaten and sgugger to programming style and documentation strings
* tiny change to order of imports
-
- 01 Oct, 2020 5 commits
-
-
Julien Chaumond authored
-
Julien Chaumond authored
-
Adalberto authored
* Create README.md * language metadata Co-authored-by:Julien Chaumond <chaumond@gmail.com>
-
Martin Müller authored
-
allenyummy authored
-