"tools/vscode:/vscode.git/clone" did not exist on "80624de7ca1530b198bb529ee3615372d9f2d826"
- 04 Jun, 2020 4 commits
-
Jason Phang authored
-
Lysandre Debut authored
* Codecov setup
* Understanding codecov
-
Sam Shleifer authored
-
Funtowicz Morgan authored
* Refactor tensor creation in tokenizers.
* Make sure to convert string to TensorType
* Refactor convert_to_tensors_
* Introduce numpy tensor creation
* Format
* Add unittest for TensorType creation from str
* Sorting imports
* Added unittests for numpy tensor conversion.
* Do not use the in-place version for squeeze, as numpy doesn't provide such a feature.
* Added extra parameter prepend_batch_axis: bool on prepare_for_model.
* Ensure test_np_encode_plus_sent_to_model is not executed if encoder/decoder model.
* Style.
* numpy tests require_torch for now while flax is not merged.
* Hopefully this will make flake8 happy.
* One more time
🎵
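A minimal sketch, with simplified names, of the conversion flow this commit describes: accept a TensorType (or its string form) and convert list encodings to the requested framework, optionally prepending a batch axis. Illustrative only, not the library's actual implementation.

```python
from enum import Enum

import numpy as np

class TensorType(Enum):
    NUMPY = "np"
    PYTORCH = "pt"

def convert_to_tensors(encodings, tensor_type, prepend_batch_axis=False):
    if isinstance(tensor_type, str):
        tensor_type = TensorType(tensor_type)  # e.g. "np" -> TensorType.NUMPY
    if prepend_batch_axis:
        encodings = {key: [value] for key, value in encodings.items()}
    if tensor_type is TensorType.NUMPY:
        # np.asarray builds a new array; numpy has no in-place squeeze.
        return {key: np.asarray(value) for key, value in encodings.items()}
    raise NotImplementedError(f"{tensor_type} is not covered in this sketch")
```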
-
- 03 Jun, 2020 7 commits
-
Funtowicz Morgan authored
* Ensure tokens in never_split are not split when using the basic tokenizer before wordpiece.
* never_split now uses membership checks against a set(), which is 10x faster for this operation.
* Use union to concatenate two sets.
* Updated docstring for the never_split parameter.
* Avoid set.union() if never_split is None.
* Added comments.
* Correct docstring format.
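A hedged sketch of the set-membership optimization described above; basic_split is a hypothetical stand-in for the basic tokenizer's punctuation splitting, not the repo's actual code.

```python
import re

def basic_split(token):
    # Hypothetical stand-in for the basic tokenizer's punctuation splitting.
    return [piece for piece in re.split(r"(\W)", token) if piece.strip()]

def tokenize(text, never_split=None):
    # Membership tests against a set() are O(1), roughly 10x faster here
    # than scanning a list once per token.
    never_split = set(never_split) if never_split else set()
    tokens = []
    for token in text.split():
        if token in never_split:
            tokens.append(token)          # keep protected tokens intact
        else:
            tokens.extend(basic_split(token))
    return tokens

print(tokenize("don't split [MASK] please", never_split=["[MASK]"]))
```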
-
Lysandre Debut authored
-
Patrick von Platen authored
-
Sylvain Gugger authored
* Deprecate masked_lm_labels argument
* Apply to all models
* Better error message
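A sketch of the deprecation pattern with simplified names (not the exact model code): accept the old keyword, warn, and reroute it to the new one with a clear message.

```python
import warnings

def forward(input_ids, labels=None, **deprecated_kwargs):
    if "masked_lm_labels" in deprecated_kwargs:
        warnings.warn(
            "`masked_lm_labels` is deprecated; use `labels` instead.",
            FutureWarning,
        )
        labels = deprecated_kwargs.pop("masked_lm_labels")
    if deprecated_kwargs:
        raise TypeError(f"Unexpected keyword arguments: {sorted(deprecated_kwargs)}")
    return input_ids, labels  # stand-in for the real forward pass
```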
-
Abhishek Kumar Mishra authored
* Added links to more community notebooks
  Added links to 3 more community notebooks from the git repo: https://github.com/abhimishra91/transformers-tutorials
  Different Transformers models are fine-tuned on datasets using PyTorch
* Update README.md
* Update README.md
* Update README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Julien Chaumond authored
* [hf_api] Attach all unknown attributes for future-proof compatibility
* [Pipeline] NerPipeline is really a TokenClassificationPipeline
* modelcard.py: I don't think we need to force the download
* Remove config, tokenizer from SUPPORTED_TASKS as we're moving to one model = one weight + one tokenizer
* FillMaskPipeline: also output token in string form
* TextClassificationPipeline: option to return all scores, not just the argmax
* Update docs/source/main_classes/pipelines.rst
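The "all scores" option might look roughly like this simplified sketch (softmax postprocessing only, not the real TextClassificationPipeline):

```python
import numpy as np

def postprocess(logits, labels, return_all_scores=False):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    if return_all_scores:
        return [{"label": l, "score": float(p)} for l, p in zip(labels, probs)]
    best = int(probs.argmax())
    return {"label": labels[best], "score": float(probs[best])}

print(postprocess(np.array([1.0, 3.0]), ["NEGATIVE", "POSITIVE"], return_all_scores=True))
```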
-
David Mezzetti authored
* Create README.md
* Create README.md
* Create README.md
-
- 02 Jun, 2020 11 commits
-
Patrick von Platen authored
* Improve handling of short inputs for reformer
* Correct typo in assert statement
* Fix other tests
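Reformer chunks its attention, so inputs shorter than the chunk length can be padded up to a multiple of it; a hedged sketch of that idea (illustrative, not the actual fix):

```python
def pad_to_multiple(input_ids, multiple, pad_token_id=0):
    # Pad so len(input_ids) is divisible by `multiple` (no-op if it already is).
    remainder = -len(input_ids) % multiple
    return input_ids + [pad_token_id] * remainder

assert pad_to_multiple([5, 6, 7], multiple=4) == [5, 6, 7, 0]
```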
-
Jin Young Sohn authored
* GLUE task cleanup
* Enable writing cache to cache_dir in case the dataset lives in a read-only filesystem.
* Differentiate match vs mismatch for MNLI metrics.
* Style
* Fix pytype
* Fix type
* Use cache_dir in the MNLI mismatched eval dataset
* Small tweaks

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Sam Shleifer authored
-
Lysandre authored
-
Julien Chaumond authored
* 🐛 Fix model ids for BART and Flaubert
-
Lysandre authored
-
Julien Chaumond authored
* Kill model archive maps
* Fixup
* Also kill model_archive_map for MaskedBertPreTrainedModel
* Unhook config_archive_map
* Tokenizers: align with model id changes
* make style && make quality
* Fix CI
-
Patrick von Platen authored
* Allow not adding special tokens
* Remove print
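Presumably this surfaces the standard add_special_tokens flag; a usage sketch, assuming a BERT tokenizer:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# With special tokens: [CLS] ... [SEP] wrap the sequence.
print(tokenizer.encode("hello world"))
# Without them: only the wordpiece ids of the input itself.
print(tokenizer.encode("hello world", add_special_tokens=False))
```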
-
Funtowicz Morgan authored
-
Lysandre Debut authored
-
Lorenzo Ampil authored
-
- 01 Jun, 2020 18 commits
-
Sylvain Gugger authored
-
Lysandre authored
-
Julien Chaumond authored
-
Julien Chaumond authored
Fixes bug reported in https://github.com/huggingface/transformers/issues/4669
See #3967 for context
-
Rens authored
* Pass on tokenizer to pipeline
* Order input names when converting to ONNX
* Update style
* Remove unused imports
* Make the ordered inputs list mutable
* Add test for custom bert model
* Remove unused imports
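Ordering inputs for ONNX export can be done against the model's forward() signature; a minimal sketch with hypothetical helper names, not the repo's actual converter:

```python
import inspect

def ordered_inputs(model_forward, encodings):
    # Keep only the encodings forward() accepts, in its declared order,
    # so positional inputs line up during ONNX export/tracing.
    params = inspect.signature(model_forward).parameters
    return {name: encodings[name] for name in params if name in encodings}
```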
-
Victor SANH authored
-
Victor SANH authored
-
Victor SANH authored
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Victor SANH authored
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Victor SANH authored
-
Victor SANH authored
-
Victor SANH authored
-
Victor SANH authored
-
Victor SANH authored
-
Victor SANH authored
-
Victor SANH authored
-
Victor SANH authored
-
Victor SANH authored
-