- 06 Jan, 2021 2 commits
-
-
Manuel Romero authored
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
-
Manuel Romero authored
-
- 25 Dec, 2020 1 commit
-
-
Vasudev Gupta authored
* Created using Colaboratory
* Add mbart-training examples
* Add link
* Update description

Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
- 16 Dec, 2020 1 commit
-
-
Sylvain Gugger authored
-
- 15 Dec, 2020 1 commit
-
-
NielsRogge authored
* First commit: adding all files from tapas_v3
* Fix multiple bugs including soft dependency and new structure of the library
* Improve testing by adding torch_device to inputs and adding dependency on scatter
* Use Python 3 inheritance rather than Python 2
* First draft model cards of base sized models
* Remove model cards as they are already on the hub
* Fix multiple bugs with integration tests
* All model integration tests pass
* Remove print statement
* Add test for convert_logits_to_predictions method of TapasTokenizer
* Incorporate suggestions by Google authors
* Fix remaining tests
* Change position embeddings sizes to 512 instead of 1024
* Comment out positional embedding sizes
* Update PRETRAINED_VOCAB_FILES_MAP and PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
* Added more model names
* Fix truncation when no max length is specified
* Disable torchscript test
* Make style & make quality
* Quality
* Address CI needs
* Test the masked LM model
* Fix the masked LM model
* Truncate when overflowing
* More much-needed docs improvements
* Fix some URLs
* Some more docs improvements
* Test PyTorch scatter
* Set to slow + minify
* Calm flake8 down
* Add add_pooling_layer argument to TapasModel; fix comments by @sgugger and @patrickvonplaten
* Fix issue in docs + fix style and quality
* Clean up conversion script and add task parameter to TapasConfig
* Revert the task parameter of TapasConfig; some minor fixes
* Improve conversion script and add test for absolute position embeddings
* Fix bug with reset_position_index_per_cell arg of the conversion CLI
* Add notebooks to the examples directory and fix style and quality
* Apply suggestions from code review
* Move from `nielsr/` to `google/` namespace
* Apply Sylvain's comments

Co-authored-by: sgugger <sylvain.gugger@gmail.com>
Co-authored-by: Rogge Niels <niels.rogge@howest.be>
Co-authored-by: LysandreJik <lysandre.debut@reseau.eseo.fr>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
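For orientation, here is how the pieces named in these bullets fit together; a minimal sketch, assuming the `google/tapas-base-finetuned-wtq` checkpoint (per the `nielsr/` to `google/` namespace move) and an illustrative table and question:

```python
# Sketch of TapasTokenizer + TapasForQuestionAnswering, including the
# convert_logits_to_predictions method this PR adds a test for.
# Requires the torch-scatter soft dependency mentioned in the bullets.
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

name = "google/tapas-base-finetuned-wtq"
tokenizer = TapasTokenizer.from_pretrained(name)
model = TapasForQuestionAnswering.from_pretrained(name)

# TAPAS expects the table as a DataFrame of strings
table = pd.DataFrame({"Actor": ["Brad Pitt", "Leonardo DiCaprio"],
                      "Number of movies": ["87", "53"]})
inputs = tokenizer(table=table,
                   queries=["How many movies does Leonardo DiCaprio have?"],
                   padding="max_length", return_tensors="pt")
outputs = model(**inputs)

# Map the logits back to table cell coordinates and an aggregation
# operator for each query
coords, agg_indices = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
```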
-
- 07 Dec, 2020 1 commit
-
-
Sylvain Gugger authored
* Add missing copyright everywhere
* Style
-
- 02 Nov, 2020 2 commits
-
-
Patrick von Platen authored
-
Martin Monperrus authored
-
- 22 Oct, 2020 2 commits
-
-
Peter Bayerle authored
Looking at the current community notebooks, it seems that few are targeted at absolute beginners and even fewer are written with TensorFlow. This notebook describes absolutely everything a beginner would need to know, including how to save/load their model and use it for new predictions (this is often omitted in tutorials).

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
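The save/load/predict cycle the notebook walks through is roughly the following; a minimal sketch, assuming a DistilBERT classifier (the checkpoint, save path, and example sentence are illustrative):

```python
# Sketch of saving a fine-tuned TF model, reloading it, and predicting
# on new text -- the step the commit notes is often omitted in tutorials.
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

# ... fine-tune with model.fit(...) on your dataset ...

model.save_pretrained("./my-model")  # writes config + weights
tokenizer.save_pretrained("./my-model")

reloaded = TFAutoModelForSequenceClassification.from_pretrained("./my-model")
inputs = tokenizer("This movie was great!", return_tensors="tf")
logits = reloaded(inputs).logits     # new prediction from the reloaded model
```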
-
zolekode authored
* Added QG evaluation notebook
* Update notebooks/README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 05 Oct, 2020 1 commit
-
-
Dhaval Taunk authored
-
- 01 Oct, 2020 1 commit
-
-
Muhammad Harris authored
* Added T5 community notebook
* Updated author link
* Updated new Colab link

Co-authored-by: harris <muhammad.harris@visionx.io>
-
- 21 Sep, 2020 1 commit
-
-
Nadir El Manouzi authored
-
- 17 Sep, 2020 1 commit
-
-
Dhaval Taunk authored
* Added multilabel classification using DistilBERT notebook to community notebooks
-
- 08 Sep, 2020 1 commit
-
-
Philipp Schmid authored
-
- 08 Aug, 2020 1 commit
-
-
elsanns authored
Co-authored-by: eliska <3648991+elisans@users.noreply.github.com>
-
- 28 Jul, 2020 1 commit
-
-
Tanmay Thakur authored
Signed-off-by: lordtt13 <thakurtanmay72@yahoo.com>
-
- 10 Jul, 2020 1 commit
-
-
Patrick von Platen authored
-
- 01 Jul, 2020 1 commit
-
-
Patrick von Platen authored
* Add Reformer MLM notebook
* Update notebooks/README.md
-
- 26 Jun, 2020 1 commit
-
-
Patrick von Platen authored
* Add notebook
* Created with Colaboratory
* Move notebook to correct folder
* Correct link
* Correct filename
* Correct filename
* Better name
-
- 24 Jun, 2020 1 commit
-
-
Sylvain Gugger authored
-
- 22 Jun, 2020 1 commit
-
-
Michaël Benesty authored
* Add link to new community notebook (optimization)

Related to https://github.com/huggingface/transformers/issues/4842#event-3469184635

This notebook is about benchmarking model training with/without dynamic padding optimization: https://github.com/ELS-RD/transformers-notebook

Using dynamic padding on MNLI provides a **4.7x training time reduction**, with max pad length set to 512. The effect is strong because few examples in this dataset are much longer than 400 tokens. In practice it will depend on the dataset, but it always brings an improvement and, after more than 20 experiments listed in this [article](https://towardsdatascience.com/divide-hugging-face-transformers-training-time-by-2-or-more-21bf7129db9e?source=friends_link&sk=10a45a0ace94b3255643d81b6475f409), it seems not to hurt performance. Following advice from @patrickvonplaten, I opened the PR myself :-)

* Update notebooks/README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
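The optimization being benchmarked is straightforward to reproduce; a minimal sketch using `DataCollatorWithPadding` (the checkpoint and sample texts are illustrative):

```python
# Sketch of dynamic padding: tokenize WITHOUT padding, then let the
# collator pad each batch only to its own longest member instead of
# a fixed max length such as 512.
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

texts = ["A short premise.",
         "A much longer premise that would otherwise be padded to 512 tokens."]
features = [tokenizer(t, truncation=True, max_length=512) for t in texts]

loader = DataLoader(features, batch_size=2,
                    collate_fn=DataCollatorWithPadding(tokenizer))

batch = next(iter(loader))
print(batch["input_ids"].shape)  # padded to the batch max, not to 512
```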
-
- 03 Jun, 2020 1 commit
-
-
Abhishek Kumar Mishra authored
* Added links to more community notebooks

Added links to 3 more community notebooks from the git repo: https://github.com/abhimishra91/transformers-tutorials. Different Transformers models are fine-tuned on datasets using PyTorch.

* Update README.md
* Update README.md
* Update README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 02 Jun, 2020 1 commit
-
-
Lorenzo Ampil authored
-
- 29 May, 2020 2 commits
-
-
Patrick von Platen authored
-
Iz Beltagy authored
* Fix Longformer model names in examples
* A better name for the notebook
-
- 28 May, 2020 3 commits
-
-
Iz Beltagy authored
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Suraj Patil authored
-
Lavanya Shukla authored
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 26 May, 2020 1 commit
-
-
ohmeow authored
* Adding BART summarization how-to community notebook
* Update notebooks/README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 22 May, 2020 2 commits
-
-
Patrick von Platen authored
-
Patrick von Platen authored
-
- 20 May, 2020 1 commit
-
-
Nathan Cooper authored
-
- 19 May, 2020 1 commit
-
-
Suraj Patil authored
* Add T5 fine-tuning notebook [Community notebooks]
* Update README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 18 May, 2020 1 commit
-
-
Patrick von Platen authored
* Update README.md
* Update README.md
* Update README.md
* Update README.md
-
- 14 May, 2020 2 commits
-
-
Morgan Funtowicz authored
-
Funtowicz Morgan authored
* Added generic ONNX conversion script for PyTorch models
* WIP initial TF support
* TensorFlow/Keras ONNX export working
* Print framework version info
* Add possibility to check the model loads correctly on ONNX Runtime
* Remove quantization option
* Specify ONNX opset version when exporting
* Formatting
* Remove unused imports
* Make functions more generally reusable from other parts of the code
* isort happy
* flake happy
* Export only feature-extraction for now
* Correctly check inputs order / filter before export
* Removed task variable
* Fix invalid args call in load_graph_from_args
* Fix invalid args call in convert
* Fix invalid args call in infer_shapes
* Raise exception and catch in caller function instead of exit
* Add 04-onnx-export.ipynb notebook
* More WIP on the notebook
* Remove unused imports
* Simplify & remove unused constants
* Export with constant_folding in PyTorch
* Let's try to put function args in the right order this time ...
* Disable external_data_format temporarily
* ONNX notebook draft ready
* Updated notebook charts + wording
* Correct error while exporting last chart in notebook
* Addressing @LysandreJik's comment
* Set ONNX opset to 11 as default value
* Set opset param mandatory
* Added ONNX export unittests
* Quality
* flake8 happy
* Add keras2onnx dependency on extras["tf"]
* Pin keras2onnx on github master to v1.6.5
* Second attempt
* Third attempt
* Use the right repo URL this time ...
* Do the same for onnxconverter-common
* Added keras2onnx and onnxconverter-common to 1.7.0 to support TF 2.2
* Correct commit hash
* Addressing PR review: optimizations are enabled by default
* Addressing PR review: small changes in the notebook
* setup.py comment about keras2onnx versioning
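The exporter this PR lands can also be driven from Python; a minimal sketch (the model name and output path are illustrative; opset 11 is the default the PR settles on):

```python
# Sketch of calling the new conversion script programmatically.
from pathlib import Path
from transformers.convert_graph_to_onnx import convert

convert(
    framework="pt",                      # "tf" uses the keras2onnx path
    model="bert-base-cased",
    output=Path("onnx/bert-base-cased.onnx"),
    opset=11,
    pipeline_name="feature-extraction",  # only feature-extraction is exported for now
)
```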
-
- 06 Apr, 2020 1 commit
-
-
Lysandre Debut authored
* Update notebooks
* From local to global link
* From local links to *actual* global links
-
- 19 Mar, 2020 1 commit
-
-
Kyeongpil Kang authored
For the "How to generate text" tutorial, the URL was wrong (it linked to the "How to train a language model" tutorial). I fixed the URL.
-
- 18 Mar, 2020 1 commit
-
-
Patrick von Platen authored
-