- 20 Feb, 2020 1 commit
-
Sam Shleifer authored
* Results same as fairseq
* Wrote a ton of tests
* Struggled with API signatures
* Added some docs
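A minimal sketch of the kind of fairseq parity check this implies; the model handles and call signatures below are illustrative, not the actual test code:

```python
import torch

def assert_matches_fairseq(hf_model, fairseq_model, input_ids):
    # Hypothetical parity check: the ported model should reproduce the
    # reference fairseq outputs on the same inputs within tolerance.
    hf_model.eval()
    fairseq_model.eval()
    with torch.no_grad():
        hf_out = hf_model(input_ids)[0]
        fs_out = fairseq_model(input_ids)
    assert torch.allclose(hf_out, fs_out, atol=1e-4), "outputs diverged from fairseq"
```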
-
- 19 Feb, 2020 3 commits
-
Funtowicz Morgan authored
* Correctly return the tuple of generated file(s) when calling save_pretrained
* Quality and format

Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
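A small usage sketch of the fixed return value (checkpoint name illustrative):

```python
import os
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# save_pretrained writes the vocabulary and special-tokens files and
# returns the tuple of generated file paths.
os.makedirs("./my-tokenizer", exist_ok=True)
saved_files = tokenizer.save_pretrained("./my-tokenizer")
print(saved_files)
```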
-
Funtowicz Morgan authored
* Override build_inputs_with_special_tokens for fast impl + unittest
* Quality + format

Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
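For reference, a sketch of the overridden method's contract on a BERT-style fast tokenizer (checkpoint name illustrative):

```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
ids_a = tokenizer.encode("hello world", add_special_tokens=False)
ids_b = tokenizer.encode("how are you", add_special_tokens=False)

# For BERT this builds [CLS] A [SEP] B [SEP].
pair_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
```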
-
Funtowicz Morgan authored
* Implemented fast version of tokenizers
* Bumped tokenizers version requirement to latest 0.2.1
* Added matching tests
* Matching OpenAI GPT tokenization!
* Matching GPT2 on tokenizers
* Expose add_prefix_space as constructor parameter for GPT2
* Matching Roberta tokenization!
* Removed fast implementation of CTRL
* Binding TransformerXL tokenizers to Rust
* Updating tests accordingly
* Added tokenizers as top-level modules
* Black & isort
* Rename LookupTable to WordLevel to match the Rust side
* Black
* Use "fast" suffix instead of "ru" for Rust tokenizer implementations
* Introduce tokenize() method on fast tokenizers
* encode_plus dispatches to batch_encode_plus
* batch_encode_plus now dispatches to encode if there is only one input element
* Bind all the encode_plus parameters to the forwarded batch_encode_plus call
* Bump tokenizers dependency to 0.3.0
* Formatting
* Fix tokenization_auto with support for the new (python, fast) mapping schema
* Give correct fixtures path in test_tokenization_fast.py for the CLI
* Expose max_len_ properties on BertTokenizerFast
* Move max_len_ properties to PreTrainedTokenizerFast and override in specific subclasses
* _convert_encoding should keep the batch axis tensor if there is only one sample in the batch
* Add warning message for RobertaTokenizerFast if used for MLM
* Added use_fast (bool) parameter on AutoTokenizer.from_pretrained() to easily enable/disable Rust-based tokenizer instantiation
* Let tokenizers handle all the truncation and padding
* Allow providing tokenizer arguments during pipeline creation
* Update test_fill_mask pipeline to not use fast tokenizers
* Fix too many parameters for convert_encoding
* When enabling padding, max_length should be set to None
* Avoid returning nested tensors of length 1 when calling encode_plus
* Ensure output is padded when return_tensors is not None; tensor creation requires the initial list input to be of the exact same size
* Disable transfoxl unittest if PyTorch is not available (required to load the model)
* encode_plus should not remove the leading batch axis if return_tensors is set
* Temporarily disable fast tokenizers on QA pipelines
* Fix formatting issues
* Update tokenizers to 0.4.0
* Update style
* Enable truncation + stride unit test on fast tokenizers
* Add unittest ensuring the special_tokens set matches between Python and Rust
* Ensure special_tokens are correctly set during construction
* Give more warning feedback to the user in case of padding without a pad_token
* Quality & format
* Added possibility to add a single token as str
* Added unittest for add_tokens and add_special_tokens on fast tokenizers
* Fix rebase mismatch on pipelines QA default model: QA requires cased input while the tokenizers would be uncased
* Addressing review comment: use offset mapping relative to the original string + unittest
* Addressing review comment: save_vocabulary requires folder and file name
* Addressing review comment: simplify import for Bert
* Addressing review comment: truncate_and_pad disables padding according to the same heuristic as the one enabling padding
* Addressing review comment: remove private member access in tokenize()
* Addressing review comment: bump tokenizers dependency to 0.4.2
* Format & quality
* Addressing review comment: use named arguments when applicable
* Addressing review comment: add GitHub link to the Roberta/GPT2 space issue on masked input
* Addressing review comment: move max_len_single_sentence / max_len_sentences_pair to PreTrainedTokenizerFast + tests
* Addressing review comment: relax type checking to include tuple and list objects
* Addressing review comment: document the truncate_and_pad manager behavior
* Raise an exception if return_offsets_mapping is not available with the current tokenizer
* Ensure padding is set on the tokenizers before setting any padding strategy + unittest
* On PyTorch we need to stack tensors to get a proper new axis
* Generalize tests to different frameworks, removing hard-coded return_tensors="..."
* Bump tokenizers dependency for num_special_tokens_to_add
* Overflowing tokens in batch_encode_plus are now stacked over the batch axis
* Improved error message for padding strategy without a pad token
* Bumping tokenizers dependency to 0.5.0 for release
* Optimizing convert_encoding: around 4x improvement 🚀
* Expose pad_to_max_length in encode_plus to avoid duplicating the parameter in kwargs
* Generate a proper overflow_to_sampling_mapping when return_overflowing_tokens is True
* Fix unittests for overflow_to_sampling_mapping not being returned as a tensor
* Format & quality
* Remove perfect alignment constraint for Roberta (allowing 1% difference max)
* Triggering final CI

Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
Co-authored-by: MOI Anthony <xn1t0x@gmail.com>
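A brief usage sketch of the surface this adds; the checkpoint name is illustrative, while use_fast and return_offsets_mapping are named in the changes above:

```python
from transformers import AutoTokenizer

# use_fast toggles the Rust-backed tokenizer implementation.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)

# Offset mappings (character spans relative to the original string) are
# only available on fast tokenizers; slow ones raise an exception.
encoding = tokenizer.encode_plus("Hello, fast tokenizers!", return_offsets_mapping=True)
print(encoding["input_ids"])
print(encoding["offset_mapping"])
```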
-
- 18 Feb, 2020 1 commit
-
Sam Shleifer authored
* Skip flaky test
* Style
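For context, skipping a flaky test typically looks like this (test name illustrative):

```python
import unittest

class ModelIntegrationTest(unittest.TestCase):
    @unittest.skip("Flaky on CI; skipped until the root cause is fixed.")
    def test_generation(self):
        ...
```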
-
- 13 Feb, 2020 2 commits
-
Joe Davison authored
* Preserve spaces in GPT-2 tokenizers: preserves spaces after special tokens in GPT-2 and inherited (RoBERTa) tokenizers, enabling correct BPE encoding; also automatically inserts a space in front of the first token in the encode function when adding special tokens
* Add tokenization preprocessing method
* Add framework argument to pipeline factory; also fixes a pipeline test issue, with each test input now treated as a distinct sequence
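A minimal illustration of why space handling matters for GPT-2's byte-level BPE (the Ġ marker encodes a leading space):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# GPT-2's BPE is whitespace-sensitive: "world" and " world" map to
# different tokens, so a dropped space changes the encoding.
print(tokenizer.tokenize("world"))   # ['world']
print(tokenizer.tokenize(" world"))  # ['Ġworld']
```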
-
Sam Shleifer authored
* activations.py contains a mapping from string to activation function
* Resolves some `gelu` vs `gelu_new` ambiguity
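A sketch in the spirit of that mapping; `gelu_new` is GPT-2's tanh approximation, distinct from the erf-based `gelu`:

```python
import math
import torch
import torch.nn.functional as F

def gelu_new(x):
    # GPT-2's tanh-based GELU approximation.
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))

# String-to-callable mapping so configs can select an activation by
# name (e.g. config.hidden_act = "gelu_new").
ACT2FN = {"relu": F.relu, "gelu": F.gelu, "gelu_new": gelu_new, "tanh": torch.tanh}

activation = ACT2FN["gelu_new"]
```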
-
- 11 Feb, 2020 1 commit
-
Oleksiy Syvokon authored
PyTorch < 1.3 requires multiplication operands to be of the same type. This was violated when using the default attention mask (i.e., attention_mask=None in the arguments) with BERT in decoder mode. In particular, this broke Model2Model and caused the quickstart tutorial to fail.
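A sketch of the dtype-alignment pattern such a fix uses on BERT's extended attention mask (constants follow the library's convention; float32 stands in for the model's parameter dtype):

```python
import torch

attention_mask = torch.ones(1, 8)                # the default mask when attention_mask=None
extended_mask = attention_mask[:, None, None, :]

# PyTorch < 1.3 refuses mixed-dtype multiplication, so cast the mask to
# the model's parameter dtype (next(self.parameters()).dtype) first.
extended_mask = extended_mask.to(dtype=torch.float32)
extended_mask = (1.0 - extended_mask) * -10000.0
```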
-
- 07 Feb, 2020 2 commits
-
VictorSanh authored
-
VictorSanh authored
-
- 04 Feb, 2020 8 commits
- 31 Jan, 2020 1 commit
-
Lysandre authored
cc @julien-c
-
- 30 Jan, 2020 2 commits
-
Julien Chaumond authored
* fill_mask helper
* [poc] FillMaskPipeline
* Revert "[poc] FillMaskPipeline" (reverts commit 67eeea55b0f97b46c2b828de0f4ee97d87338335)
* Revert "fill_mask helper" (reverts commit cacc17b884e14bb6b07989110ffe884ad9e36eaa)
* README: clarify that Pipelines can also do text-classification (cf. question at the AI&ML meetup last week, @mfuntowicz)
* Fix test: test feature-extraction pipeline
* Test tweaks
* Slight refactor of the existing pipeline (in preparation for the new FillMaskPipeline)
* Extraneous doc
* More robust way of doing this, @mfuntowicz, as we don't rely on the model name anymore (see AutoConfig)
* Also add RobertaConfig as a quickfix for wrong token_type_ids
* cs
* [BIG] FillMaskPipeline
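A usage sketch of the resulting pipeline; the checkpoint and mask token assume a RoBERTa-style model:

```python
from transformers import pipeline

# fill-mask predicts candidates for the tokenizer's mask token
# (<mask> for RoBERTa-style models, [MASK] for BERT-style ones).
fill_mask = pipeline("fill-mask", model="distilroberta-base")
for prediction in fill_mask("The goal of life is <mask>."):
    print(prediction["sequence"], prediction["score"])
```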
-
Lysandre authored
-
- 29 Jan, 2020 2 commits
- 28 Jan, 2020 1 commit
-
Lysandre authored
cc @julien-c @thomwolf
-
- 27 Jan, 2020 2 commits
- 23 Jan, 2020 6 commits
- 17 Jan, 2020 1 commit
-
Mark Neumann authored
* add failing test
* fix call to _run_split_on_punc
* format with black
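A sketch of the behavior the failing test pinned down, assuming the never_split contract of BasicTokenizer:

```python
from transformers import BasicTokenizer

# never_split should protect special tokens from lowercasing and from
# being broken apart by punctuation splitting.
tokenizer = BasicTokenizer(do_lower_case=True, never_split=["[UNK]"])
print(tokenizer.tokenize("Hello [UNK] world", never_split=["[UNK]"]))
# expected: ['hello', '[UNK]', 'world'] rather than '[', 'unk', ']'
```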
-
- 16 Jan, 2020 3 commits
-
Julien Chaumond authored
-
Julien Chaumond authored
-
Julien Chaumond authored
model.parameters() order is apparently not stable (only for xlnet, for some reason)
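One way to make such comparisons order-independent is to match parameters by name rather than by position; a sketch, not the repo's actual test:

```python
import torch

def assert_models_close(model_a, model_b):
    # Compare by parameter name instead of relying on the iteration
    # order of model.parameters(), which proved unstable here.
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    assert sd_a.keys() == sd_b.keys()
    for name in sd_a:
        assert torch.allclose(sd_a[name], sd_b[name]), name
```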
-
- 15 Jan, 2020 3 commits
-
Julien Chaumond authored
-
Julien Chaumond authored
-
Julien Chaumond authored
-
- 14 Jan, 2020 1 commit
-
Lysandre authored
Created a link between the linear layer bias and the model's bias attribute. This changes nothing for the user or for the conversion scripts, but it allows the `resize_token_embeddings` method to resize the bias as well as the decoder weights. Added a test.
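A usage sketch of what this enables (checkpoint and added tokens illustrative):

```python
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

tokenizer.add_tokens(["[NEW1]", "[NEW2]"])
# With the decoder bias tied to the model attribute, resizing now grows
# the output projection's bias along with its weights.
model.resize_token_embeddings(len(tokenizer))
```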
-