- 13 Jul, 2020 4 commits
-
Sylvain Gugger authored
* Fix Trainer in DataParallel setting
* Fix typo

Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
-
Stas Bekman authored
-
Stas Bekman authored
-
onepointconsulting authored
Added general description, information about the tags and also some example usage code.
-
- 12 Jul, 2020 1 commit
-
Kevin Canwen Xu authored
* Add model type check for pipelines
* Add model type check for pipelines
* rename func
* Fix the init parameters
* Fix format
* rollback unnecessary refactor
-
- 11 Jul, 2020 1 commit
-
Kevin Canwen Xu authored
* Add Microsoft's CodeBERT
* link style
* single modal
* unused import
-
- 10 Jul, 2020 20 commits
-
Sylvain Gugger authored
* Document model outputs
* Update docs/source/main_classes/output.rst

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
-
Sylvain Gugger authored
-
Tomo Lazovich authored
-
Patrick von Platen authored
-
Julien Chaumond authored
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
Bashar Talafha authored
* Update README.md
* Update README.md
-
Manuel Romero authored
-
kolk authored
-
Txus authored
-
Teven authored
Fixed use of memories in XLNet (caching for language generation + warning when loading improper memoryless model) (#5632)
* Pytorch gpu => cpu proper device
* Memoryless XLNet warning + fixed memories during generation
* Revert "Pytorch gpu => cpu proper device" (this reverts commit 93489b36)
* made black happy
* TF generation with memories
* dim => axis
* added padding_text to TF XL models
* Added comment, added TF
-
Manuel Romero authored
-
Manuel Romero authored
Create model card for T5-small fine-tuned on SQuAD v2
-
Nils Reimers authored
Model card for sentence-transformers/bert-base-nli-cls-token
-
Nils Reimers authored
Model card for sentence-transformers/bert-base-nli-max-tokens
-
Sylvain Gugger authored
* [WIP] Proposal for model outputs
* All Bert models
* Make CI green maybe?
* Fix ONNX test
* Isolate ModelOutput from pt and tf
* Formatting
* Add Electra models
* Auto-generate docstrings from outputs
* Add TF outputs
* Add some BERT models
* Revert TF side
* Remove last traces of TF changes
* Fail with a clear error message
* Add Albert and work through Bart
* Add CTRL and DistilBert
* Formatting
* Progress on Bart
* Renames and finish Bart
* Formatting
* Fix last test
* Add DPR
* Finish Electra and add FlauBERT
* Add GPT2
* Add Longformer
* Add MMBT
* Add MobileBert
* Add GPT
* Formatting
* Add Reformer
* Add Roberta
* Add T5
* Add Transformer XL
* Fix test
* Add XLM + fix XLMForTokenClassification
* Style + XLMRoberta
* Add XLNet
* Formatting
* Add doc of return_tuple arg
-
Suraj Parmar authored
-
Sylvain Gugger authored
* Update PretrainedConfig doc
* Formatting
* Small fixes
* Forgotten args and more cleanup
-
Julien Chaumond authored
cc @yjernite
-
Nils Reimers authored
-
Julien Chaumond authored
(hotlinking to image works on GitHub but not on external sites) cc @bashartalafha
-
- 09 Jul, 2020 10 commits
-
Teven authored
* Pytorch gpu => cpu proper device
* Memoryless XLNet warning + fixed memories during generation
* Revert "Memoryless XLNet warning + fixed memories during generation" (this reverts commit 3d3251ff)
* Took the operations on the generated_sequence out of the ensure_device scope
-
Sylvain Gugger authored
-
Stas Bekman authored
-
Lysandre Debut authored
-
Lysandre Debut authored
-
Lysandre authored
-
Lysandre Debut authored
-
Lysandre authored
-
Lysandre Debut authored
* Test XLA examples
* Style
* Using `require_torch_tpu`
* Style
* No need for pytest
-
Funtowicz Morgan authored
* Ensure padding and question cannot have higher probs than context.
* Add bart to the list of tokenizers adding two <sep> tokens for squad_convert_example_to_feature
* Format.
* Addressing @patrickvonplaten comments.
* Addressing @patrickvonplaten comments about masking non-context elements when generating the answer.
* Addressing @sshleifer comments.
* Make sure we mask CLS after handling impossible answers
* Mask in the correct vectors ...

Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
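The masking change described above (padding and question tokens must never out-score context tokens, and CLS is masked only after the "no answer" score is read from it) can be sketched roughly as follows. This is an illustrative sketch, not the pipeline's actual code; the function name and the `p_mask` convention (1 = question/padding/special token, 0 = context) are assumptions.

```python
import numpy as np

def select_answer_logits(start_logits, end_logits, p_mask, handle_impossible_answer=True):
    """Mask logits so only context tokens can be picked as answer-span boundaries.

    p_mask: 1 for tokens that cannot belong to the answer (question, padding,
    special tokens), 0 for context tokens. CLS is assumed to sit at index 0
    with p_mask == 0 so its "no answer" score can be read before masking.
    """
    start = np.where(p_mask, -np.inf, np.asarray(start_logits, dtype=float))
    end = np.where(p_mask, -np.inf, np.asarray(end_logits, dtype=float))
    min_null_score = None
    if handle_impossible_answer:
        # Read the "impossible answer" score from CLS *before* masking it out.
        min_null_score = float(start[0] + end[0])
    # CLS itself is never a valid answer-span boundary.
    start[0] = end[0] = -np.inf
    return start, end, min_null_score
```

With this ordering, a high CLS logit still yields a null score for unanswerable questions, but CLS can no longer beat a genuine context span in the argmax.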
-
- 08 Jul, 2020 4 commits
-
Stas Bekman authored
-
Txus authored
* Create README.md: add newly trained `calbert-tiny-uncased` (complete rewrite with SentencePiece)
* Add Exbert link
* Apply suggestions from code review

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Lorenzo Ampil authored
* Add B I handling to grouping
* Add fix to include separate entity as last token
* move last_idx definition outside loop
* Use first entity in entity group as reference for entity type
* Add test cases
* Take out extra class accidentally added
* Return tf ner grouped test to original
* Take out redundant last entity
* Get last_idx safely
* Fix first entity comment
* Create separate functions for group_sub_entities and group_entities (splitting call method into testable functions)
* Take out unnecessary last_idx
* Remove additional forward pass test
* Move token classification basic tests to separate class
* Move token classification basic tests back to monocolumninputtestcase
* Move base ner tests to nerpipelinetests
* Take out unused kwargs
* Add back mandatory_keys argument
* Add unitary tests for group_entities in _test_ner_pipeline
* Fix last entity handling
* Fix grouping function used
* Add typing to group_sub_entities and group_entities

Co-authored-by: ColleterVi <36503688+ColleterVi@users.noreply.github.com>
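The grouping logic this PR describes (merge a B- token and the consecutive I- tokens that follow it into one entity group, with the first entity in the group deciding the group's type) can be sketched as below. This is a minimal illustration of BIO grouping, not the pipeline's actual implementation; the token dict keys and the function name are assumptions.

```python
def group_entities(tokens):
    """Merge consecutive B-/I- tagged tokens into entity groups.

    tokens: list of dicts with "word", "entity" (e.g. "B-PER", "I-PER"),
    and "index" (position in the sequence). The first entity in each group
    is used as the reference for the group's type.
    """
    groups, current = [], []
    for tok in tokens:
        bi, _, tag = tok["entity"].partition("-")
        # Continue the current group only for an I- tag of the same type
        # on the token immediately after the previous one (last_idx + 1).
        same_group = (
            current
            and bi == "I"
            and tag == current[0]["entity"].partition("-")[2]
            and tok["index"] == current[-1]["index"] + 1
        )
        if same_group:
            current.append(tok)
        else:
            if current:
                groups.append(current)
            current = [tok]
    if current:
        groups.append(current)
    return [
        {"entity_group": g[0]["entity"].partition("-")[2],
         "word": " ".join(t["word"] for t in g)}
        for g in groups
    ]
```

Keeping the index check inside the loop condition (rather than redefining last_idx per iteration) is what lets a separate entity appearing as the last token start its own group instead of being absorbed into the previous one.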
-
Suraj Patil authored
-