- 07 Apr, 2020 2 commits
-
-
Myle Ott authored
-
Julien Chaumond authored
Closes #3639 and the spurious warning mentioned in #3227. cc @lysandrejik @thomwolf
-
- 06 Apr, 2020 19 commits
-
-
Teven authored
Co-authored-by: TevenLeScao <teven.lescao@gmail.com>
-
Funtowicz Morgan authored
* Renamed num_added_tokens to num_special_tokens_to_add. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Cherry-pick: Partially fix space-only input without special tokens added to the output #3091. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Added property is_fast on PretrainedTokenizer and PretrainedTokenizerFast. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Make fast tokenizers unittests work on Windows.
* Entirely refactored unittests for fast tokenizers.
* Remove ABC class for CommonFastTokenizerTest
* Added embeded_special_tokens tests from allenai @dirkgr
* Make embeded_special_tokens tests from allenai more generic
* Uniformize vocab_size as a property for both Fast and normal tokenizers
* Move special tokens handling out of PretrainedTokenizer (SpecialTokensMixin)
* Ensure providing None input raises the same ValueError as the Python tokenizer + tests.
* Fix invalid input for assert_padding when testing batch_encode_plus
* Move add_special_tokens from the constructor to the tokenize/encode/[batch_]encode_plus method parameters.
* Ensure tokenize() correctly forwards add_special_tokens to Rust.
* Add None checking on top of encode / encode_batch for TransfoXLTokenizerFast. Avoid stripping on None values.
* Unittests ensure tokenize() also throws a ValueError if provided None
* Added add_special_tokens unittest for all supported models.
* Style
* Make sure TransfoXL tests run only if PyTorch is provided.
* Split up tokenizers tests for each model type.
* Fix invalid unittest with new tokenizers API.
* Filter out Roberta openai detector models from unittests.
* Introduce BatchEncoding on the fast tokenizers path. This new structure exposes all the mappings retrieved from Rust. It also keeps the current behavior with model forward.
* Introduce BatchEncoding on the slow tokenizers path. Backward compatibility.
* Improve error message on BatchEncoding for the slow path
* Make add_prefix_space True by default on Roberta fast to match Python in the majority of cases.
* Style and format.
* Added typing on all methods for PretrainedTokenizerFast
* Style and format
* Added path for feeding pretokenized (List[str]) input to PretrainedTokenizerFast.
* Style and format
* encode_plus now supports pretokenized inputs.
* Remove user warning about add_special_tokens when working on pretokenized inputs.
* Always go through the post processor.
* Added support for pretokenized input pairs on encode_plus
* Added is_pretokenized flag on encode_plus for clarity and improved error message on input TypeError.
* Added pretokenized inputs support on batch_encode_plus
* Update BatchEncoding method names to match Encoding.
* Bump setup.py tokenizers dependency to 0.7.0rc1
* Remove unused parameters in BertTokenizerFast
* Make sure Roberta returns token_type_ids for unittests.
* Added missing typings
* Update add_tokens prototype to match the tokenizers side and allow AddedToken
* Bumping tokenizers to 0.7.0rc2
* Added documentation for BatchEncoding
* Added (unused) is_pretokenized parameter on PreTrainedTokenizer encode_plus/batch_encode_plus methods.
* Added higher-level typing for tokenize / encode_plus / batch_encode_plus.
* Fix unittests failing because add_special_tokens was defined as a constructor parameter on Rust tokenizers.
* Fix text-classification pipeline using the wrong tokenizer
* Make pipelines work with BatchEncoding
* Turn off add_special_tokens on tokenize by default. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Remove add_prefix_space from tokenize call in unittest. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Style and quality. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Correct message for batch_encode_plus None input exception. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Fix invalid list comprehension for offset_mapping overriding content every iteration. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* TransfoXL uses Strip normalizer. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Bump tokenizers dependency to 0.7.0rc3. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Support AddedTokens for special_tokens and use left stripping on mask for Roberta. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* SpecialTokensMixin can use slots for faster access to underlying attributes. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Remove update_special_tokens from fast tokenizers.
* Ensure TransfoXL unittests are run only when torch is available.
* Style. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Style
* Style 🙏 🙏
* Remove slots on SpecialTokensMixin; needs a deeper dive into the pickle protocol.
* Remove Roberta warning on __init__.
* Move documentation to Google style.
Co-authored-by: LysandreJik <lysandre.debut@reseau.eseo.fr>
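A minimal sketch of the reworked tokenizer API described above (add_special_tokens at call time, pretokenized input, BatchEncoding), assuming a transformers build from around this change; the checkpoint name and example strings are placeholders.

```python
from transformers import BertTokenizerFast

# Placeholder checkpoint; any fast tokenizer should behave the same way.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
print(tokenizer.is_fast)  # the new is_fast property

# add_special_tokens is now a parameter of encode/encode_plus, not the constructor.
encoded = tokenizer.encode_plus("Hello world", add_special_tokens=True)

# Pretokenized (List[str]) input goes through the is_pretokenized flag.
pretok = tokenizer.encode_plus(["Hello", "world"], is_pretokenized=True)

# Both calls return a BatchEncoding, which behaves like a dict of model inputs.
print(encoded["input_ids"])
print(pretok["input_ids"])
```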
-
Ethan Perez authored
* Fix RoBERTa/XLNet pad token in run_multiple_choice.py. `convert_examples_to_features` sets `pad_token=0` by default, which is correct for BERT but incorrect for RoBERTa (`pad_token=1`) and XLNet (`pad_token=5`). I think the other arguments to `convert_examples_to_features` are correct, but it would be helpful if someone more familiar with this part of the codebase checked.
* Simplifying change to match recent commits
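For context, a small check of why the hard-coded default breaks here; the checkpoint names are the standard ones, and the `convert_examples_to_features` call is only sketched in a comment.

```python
from transformers import BertTokenizer, RobertaTokenizer, XLNetTokenizer

bert = BertTokenizer.from_pretrained("bert-base-uncased")
roberta = RobertaTokenizer.from_pretrained("roberta-base")
xlnet = XLNetTokenizer.from_pretrained("xlnet-base-cased")

# The pad token id is model-specific, so it must come from the tokenizer
# rather than defaulting to 0.
print(bert.pad_token_id)     # 0
print(roberta.pad_token_id)  # 1
print(xlnet.pad_token_id)    # 5

# In run_multiple_choice.py the fix amounts to roughly (sketch, not the exact call):
# features = convert_examples_to_features(..., pad_token=tokenizer.pad_token_id)
```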
-
ktrapeznikov authored
-
Manuel Romero authored
-
Manuel Romero authored
-
Manuel Romero authored
-
Manuel Romero authored
* Add model card * Fix model name in fine-tuning script
-
Manuel Romero authored
* Create model card * Fix model name in fine-tuning script
-
Manuel Romero authored
-
MichalMalyska authored
-
jjacampos authored
* Add model card for BERTeus * Update README
-
Suchin authored
* added model card * updated README * updated README * updated README * added evals * removed pico eval * Tweaks Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Lysandre Debut authored
* Update notebooks * From local to global link * from local links to *actual* global links
-
Julien Chaumond authored
Co-Authored-By: Kevin Clark <clarkkev@users.noreply.github.com>
Co-Authored-By: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
-
LysandreJik authored
-
LysandreJik authored
-
LysandreJik authored
-
Patrick von Platen authored
* split beam search and no beam search test * fix test * clean generate tests
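A hedged sketch of the two generation paths those tests now cover separately, assuming a small causal LM checkpoint such as gpt2.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
input_ids = tokenizer.encode("The quick brown fox", return_tensors="pt")

# No-beam-search path (greedy decoding).
greedy = model.generate(input_ids, max_length=20)

# Beam-search path, now tested separately.
beams = model.generate(input_ids, max_length=20, num_beams=4, early_stopping=True)

print(tokenizer.decode(greedy[0], skip_special_tokens=True))
print(tokenizer.decode(beams[0], skip_special_tokens=True))
```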
-
- 05 Apr, 2020 2 commits
-
-
Patrick von Platen authored
-
Patrick von Platen authored
-
- 04 Apr, 2020 7 commits
-
-
Timo Moeller authored
(cherry picked from commit 8e25c4bf2838211378db4d93e7f9722386cc1a04)
-
ktrapeznikov authored
adding readme for ktrapeznikov/albert-xlarge-v2-squad-v2
-
ktrapeznikov authored
-
Julien Chaumond authored
-
Julien Chaumond authored
-
Manuel Romero authored
-
Julien Chaumond authored
Hat/tip @bhoov @HendrikStrobelt @sebastianGehrmann Also cc @srush and @thomwolf
-
- 03 Apr, 2020 10 commits
-
-
Max Ryabinin authored
* Compile gelu_new with torchscript * Compile _gelu_python with torchscript * Wrap gelu_new with torch.jit for torch>=1.4
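A sketch of what the TorchScript wrapping amounts to, assuming PyTorch >= 1.4; the formula is the standard tanh approximation of GELU that gelu_new implements.

```python
import math

import torch


@torch.jit.script
def gelu_new(x):
    # GELU, tanh approximation; compiling with TorchScript lets PyTorch fuse the elementwise ops.
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))


print(gelu_new(torch.tensor([-1.0, 0.0, 1.0])))
```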
-
Lysandre Debut authored
* Electra wip
* Helpers
* Electra wip
* Electra v1
* ELECTRA may be saved/loaded
* Generator & Discriminator
* Embedding size instead of halving the hidden size
* ELECTRA Tokenizer
* Revert BERT helpers
* ELECTRA conversion script
* Archive maps
* PyTorch tests
* Start fixing tests
* Tests pass
* Same configuration for both models
* Compatible with base + large
* Simplification + weight tying
* Archives
* Auto + renaming to standard names
* ELECTRA is uncased
* Tests
* Slight API changes
* Update tests
* wip
* ElectraForTokenClassification
* temp
* Simpler arch + tests. Removed ElectraForPreTraining, which will be in a script.
* Conversion script
* Auto model
* Update links to S3
* Split ElectraForPreTraining and ElectraForTokenClassification
* Actually test PreTraining model
* Remove num_labels from configuration
* wip
* wip
* From discriminator and generator to electra
* Slight API changes
* Better naming
* TensorFlow ELECTRA tests
* Accurate conversion script
* Added to conversion script
* Fast ELECTRA tokenizer
* Style
* Add ELECTRA to README
* Modeling PyTorch doc + real style
* TF docs
* Docs
* Correct links
* Correct model initialized
* Random fixes
* Style
* Addressing Patrick's and Sam's comments
* Correct links in docs
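A minimal usage sketch for the new ELECTRA classes; the checkpoint id is illustrative, and the output indexing assumes the tuple-returning API of this era.

```python
import torch

from transformers import ElectraModel, ElectraTokenizer

# Illustrative checkpoint id; substitute any ELECTRA discriminator checkpoint.
name = "google/electra-small-discriminator"
tokenizer = ElectraTokenizer.from_pretrained(name)
model = ElectraModel.from_pretrained(name)

input_ids = tokenizer.encode("ELECTRA detects replaced tokens.", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(input_ids)[0]  # (batch, seq_len, hidden_size)

print(hidden_states.shape)
```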
-
Yohei Tamura authored
* BertJapaneseTokenizer accept options for mecab * black * fix mecab_option to Option[str]
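A rough sketch of the new MeCab option, assuming MeCab and its dictionary are installed; the import path matches the transformers layout of this period, the exact plumbing is inferred from the bullets above, and the dictionary path is a placeholder.

```python
from transformers.tokenization_bert_japanese import MecabTokenizer

# mecab_option (Optional[str]) is forwarded to MeCab, e.g. to select a custom
# dictionary; the path below is a placeholder.
word_tokenizer = MecabTokenizer(mecab_option="-d /usr/local/lib/mecab/dic/ipadic")
print(word_tokenizer.tokenize("日本語のテキストを解析する"))
```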
-
HUSEIN ZOLKEPLI authored
* add bert bahasa readme * update readme * update readme * added xlnet * added tiny-bert and fix xlnet readme * added albert base
-
ahotrod authored
Update AutoModel & AutoTokenizer loading.
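A hedged sketch of the Auto-class loading the card now shows; the model id below is assumed from the author's namespace, so substitute the exact checkpoint named in the README.

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Assumed checkpoint id; replace with the one given in the model card.
model_name = "ahotrod/albert_xxlargev1_squad2_512"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
```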
-
ahotrod authored
-
HenrykBorzymowski authored
* added model_cards for Polish SQuAD models * corrected mistake in Polish design cards Co-authored-by: Henryk Borzymowski <henryk.borzymowski@pwc.com>
-
redewiedergabe authored
* Create README.md * added meta block (language: german) * Added additional information about test data
-
ahotrod authored
-
Henryk Borzymowski authored
-