- 12 Jan, 2021 1 commit
NielsRogge authored
* Add LayoutLMForSequenceClassification and integration tests
* Improve docs
* Add LayoutLM notebook to list of community notebooks
* Make style & quality
* Address comments by @sgugger, @patrickvonplaten and @LysandreJik
* Fix rebase with master
* Reformat in one line
* Improve code examples as requested by @patrickvonplaten

Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
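For reference, a minimal usage sketch of the new head added here, loosely following the library's docstring examples; the checkpoint name, dummy all-zero bounding boxes, and label are illustrative assumptions:

```
import torch
from transformers import LayoutLMTokenizer, LayoutLMForSequenceClassification

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForSequenceClassification.from_pretrained(
    "microsoft/layoutlm-base-uncased", num_labels=2
)

encoding = tokenizer("Hello world", return_tensors="pt")
# LayoutLM expects one (x0, y0, x1, y1) box per token; zeros are dummy values
bbox = torch.zeros(*encoding["input_ids"].shape, 4, dtype=torch.long)

outputs = model(
    input_ids=encoding["input_ids"],
    bbox=bbox,
    attention_mask=encoding["attention_mask"],
    token_type_ids=encoding["token_type_ids"],
    labels=torch.tensor([1]),  # hypothetical class label
)
loss, logits = outputs.loss, outputs.logits
```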
-
- 11 Jan, 2021 1 commit
- 06 Jan, 2021 3 commits
NielsRogge authored
-
Manuel Romero authored
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
-
Manuel Romero authored
-
- 25 Dec, 2020 1 commit
Vasudev Gupta authored
* Created using Colaboratory
* Add mbart-training examples
* Add link
* Update description

Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
- 16 Dec, 2020 1 commit
Sylvain Gugger authored
-
- 15 Dec, 2020 1 commit
NielsRogge authored
* First commit: adding all files from tapas_v3
* Fix multiple bugs including soft dependency and new structure of the library
* Improve testing by adding torch_device to inputs and adding dependency on scatter
* Use Python 3 inheritance rather than Python 2
* First draft model cards of base sized models
* Remove model cards as they are already on the hub
* Fix multiple bugs with integration tests
* All model integration tests pass
* Remove print statement
* Add test for convert_logits_to_predictions method of TapasTokenizer
* Incorporate suggestions by Google authors
* Fix remaining tests
* Change position embeddings sizes to 512 instead of 1024
* Comment out positional embedding sizes
* Update PRETRAINED_VOCAB_FILES_MAP and PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
* Added more model names
* Fix truncation when no max length is specified
* Disable torchscript test
* Make style & make quality
* Quality
* Address CI needs
* Test the Masked LM model
* Fix the masked LM model
* Truncate when overflowing
* More much needed docs improvements
* Fix some URLs
* Some more docs improvements
* Test PyTorch scatter
* Set to slow + minify
* Calm flake8 down
* Add add_pooling_layer argument to TapasModel; fix comments by @sgugger and @patrickvonplaten
* Fix issue in docs + fix style and quality
* Clean up conversion script and add task parameter to TapasConfig
* Revert the task parameter of TapasConfig; some minor fixes
* Improve conversion script and add test for absolute position embeddings
* Fix bug with reset_position_index_per_cell arg of the conversion cli
* Add notebooks to the examples directory and fix style and quality
* Apply suggestions from code review
* Move from `nielsr/` to `google/` namespace
* Apply Sylvain's comments

Co-authored-by: sgugger <sylvain.gugger@gmail.com>
Co-authored-by: Rogge Niels <niels.rogge@howest.be>
Co-authored-by: LysandreJik <lysandre.debut@reseau.eseo.fr>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
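For reference, a minimal sketch of the TAPAS API introduced here, including the convert_logits_to_predictions method tested above; the checkpoint and toy table are illustrative assumptions:

```
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-finetuned-wtq")
model = TapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")

# TAPAS consumes a flat table (all cells as strings) plus natural-language queries
table = pd.DataFrame({"Actor": ["Brad Pitt", "Leonardo DiCaprio"], "Age": ["56", "45"]})
inputs = tokenizer(
    table=table, queries=["How old is Brad Pitt?"],
    padding="max_length", return_tensors="pt",
)
outputs = model(**inputs)

# Map token-level logits back to table coordinates and aggregation operators
coords, agg_indices = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
```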
-
- 09 Dec, 2020 1 commit
cronoik authored
-
- 07 Dec, 2020 1 commit
Sylvain Gugger authored
* Add copyright everywhere missing
* Style
-
- 23 Nov, 2020 1 commit
Jessica Yung authored
* Add pip install update to resolve import error

Add `pip install --upgrade tensorflow-gpu` to remove the error below:

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-2-094fadb93f3f> in <module>()
      1 import torch
----> 2 from transformers import AutoModel, AutoTokenizer, BertTokenizer
      3
      4 torch.set_grad_enabled(False)

4 frames
/usr/local/lib/python3.6/dist-packages/transformers/__init__.py in <module>()
    133
    134 # Pipelines
--> 135 from .pipelines import (
    136     Conversation,
    137     ConversationalPipeline,

/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <module>()
     46 import tensorflow as tf
     47
---> 48 from .modeling_tf_auto import (
     49     TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING,
     50     TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING,

/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_auto.py in <module>()
     49 from .configuration_utils import PretrainedConfig
     50 from .file_utils import add_start_docstrings
---> 51 from .modeling_tf_albert import (
     52     TFAlbertForMaskedLM,
     53     TFAlbertForMultipleChoice,

/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_albert.py in <module>()
     22 import tensorflow as tf
     23
---> 24 from .activations_tf import get_tf_activation
     25 from .configuration_albert import AlbertConfig
     26 from .file_utils import (

/usr/local/lib/python3.6/dist-packages/transformers/activations_tf.py in <module>()
     52     "gelu": tf.keras.layers.Activation(gelu),
     53     "relu": tf.keras.activations.relu,
---> 54     "swish": tf.keras.activations.swish,
     55     "silu": tf.keras.activations.swish,
     56     "gelu_new": tf.keras.layers.Activation(gelu_new),

AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.activations' has no attribute 'swish'
```

I have tried running the colab after this change and it seems to work fine (all the cells run with no errors).

* Update notebooks/02-transformers.ipynb

Only need to upgrade tensorflow, not tensorflow-gpu.

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
-
- 02 Nov, 2020 2 commits
Patrick von Platen authored
-
Martin Monperrus authored
-
- 22 Oct, 2020 2 commits
Peter Bayerle authored
Looking at the current community notebooks, it seems that few are targeted for absolute beginners and even fewer are written with TensorFlow. This notebook describes absolutely everything a beginner would need to know, including how to save/load their model and use it for new predictions (this is often omitted in tutorials).

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
-
zolekode authored
* added qg evaluation notebook
* Update notebooks/README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 06 Oct, 2020 1 commit
Sam Shleifer authored
-
- 05 Oct, 2020 1 commit
Dhaval Taunk authored
-
- 01 Oct, 2020 1 commit
Muhammad Harris authored
* t5 community notebook added
* author link updated
* new colab link updated

Co-authored-by: harris <muhammad.harris@visionx.io>
-
- 21 Sep, 2020 1 commit
Nadir El Manouzi authored
-
- 17 Sep, 2020 1 commit
Dhaval Taunk authored
* added multilabel classification using distilbert notebook to community notebooks
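For reference, a rough sketch of the multi-label setup such a notebook typically uses; the checkpoint, label count, and example text are assumptions rather than details taken from the notebook:

```
import torch
from transformers import DistilBertTokenizer, DistilBertForSequenceClassification

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=4
)

inputs = tokenizer("example document", return_tensors="pt")
logits = model(**inputs).logits  # shape (1, 4)

# Multi-label: an independent sigmoid per class with binary cross-entropy,
# instead of the softmax/cross-entropy used for single-label classification
labels = torch.tensor([[1.0, 0.0, 1.0, 0.0]])
loss = torch.nn.BCEWithLogitsLoss()(logits, labels)
probs = torch.sigmoid(logits)
```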
-
- 08 Sep, 2020 1 commit
Philipp Schmid authored
-
- 31 Aug, 2020 1 commit
Funtowicz Morgan authored
* Update ONNX notebook to include section on quantization.
* Addressing ONNX team comments

Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
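For reference, a hedged sketch of the dynamic-quantization step such a section typically covers, via onnxruntime; the file paths are placeholders:

```
# Post-training dynamic quantization of an exported ONNX model: weights are
# stored as int8, activations are quantized on the fly at inference time.
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    "onnx/bert-base-cased.onnx",            # placeholder input path
    "onnx/bert-base-cased-quantized.onnx",  # placeholder output path
    weight_type=QuantType.QInt8,
)
```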
-
- 30 Aug, 2020 1 commit
Thomas Ashish Cherian authored
-
- 20 Aug, 2020 1 commit
Siddharth Jain authored
-
- 08 Aug, 2020 1 commit
elsanns authored
Co-authored-by: eliska <3648991+elisans@users.noreply.github.com>
-
- 28 Jul, 2020 1 commit
Tanmay Thakur authored
Signed-off-by: lordtt13 <thakurtanmay72@yahoo.com>
-
- 10 Jul, 2020 1 commit
Patrick von Platen authored
-
- 08 Jul, 2020 2 commits
Patrick von Platen authored
* Created with Colaboratory
* delete old file
-
Patrick von Platen authored
* tf_train
* adapt timing for tpu
* fix timing
* update notebook
* add tests
-
- 01 Jul, 2020 1 commit
Patrick von Platen authored
* Add Reformer MLM notebook
* Update notebooks/README.md
-
- 29 Jun, 2020 1 commit
Patrick von Platen authored
* first doc version
* add benchmark docs
* fix typos
* improve README
* Update docs/source/benchmarks.rst
* fix naming and docs

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
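For reference, a minimal sketch of how the benchmark utilities documented here are typically driven; the model name and sizes are illustrative, and the exact argument set may vary by version:

```
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=["bert-base-uncased"],  # any hub checkpoint(s)
    batch_sizes=[8],
    sequence_lengths=[32, 128],
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()  # reports inference time and memory per configuration
```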
-
- 26 Jun, 2020 2 commits
Thomas Wolf authored
* remove references to old API in docstring - update data processors
* style
* fix tests - better type checking error messages
* better type checking
* include awesome fix by @LysandreJik for #5310
* updated doc and examples
-
Patrick von Platen authored
* add notebook
* Created with Colaboratory
* move notebook to correct folder
* correct link
* correct filename
* better name
-
- 24 Jun, 2020 1 commit
Sylvain Gugger authored
-
- 22 Jun, 2020 1 commit
Michaël Benesty authored
* Add link to new community notebook (optimization)

Related to https://github.com/huggingface/transformers/issues/4842#event-3469184635

This notebook is about benchmarking model training with/without the dynamic padding optimization: https://github.com/ELS-RD/transformers-notebook

Using dynamic padding on MNLI provides a **4.7 times training time reduction**, with max pad length set to 512. The effect is strong because few examples in this dataset exceed 400 tokens. In real life it will depend on the dataset, but it always brings an improvement and, after more than 20 experiments listed in this [article](https://towardsdatascience.com/divide-hugging-face-transformers-training-time-by-2-or-more-21bf7129db9q-21bf7129db9e?source=friends_link&sk=10a45a0ace94b3255643d81b6475f409), it seems not to hurt performance. Following advice from @patrickvonplaten I am doing the PR myself :-)

* Update notebooks/README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
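For reference, a sketch of the dynamic-padding idea using today's DataCollatorWithPadding as a stand-in; the notebook predates this helper, so treat the exact API as an assumption:

```
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
texts = ["a short example", "a noticeably longer example sentence for padding"]

# Tokenize WITHOUT padding so each example keeps its own length
encodings = [tokenizer(t, truncation=True, max_length=512) for t in texts]

# Pad each batch only to that batch's longest sequence, not to 512
collator = DataCollatorWithPadding(tokenizer)
loader = DataLoader(encodings, batch_size=32, collate_fn=collator)
batch = next(iter(loader))  # input_ids padded to the batch max length
```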
-
- 18 Jun, 2020 1 commit
Pri Oberoi authored
* Add missing arg when creating model
* Fix typos
* Remove from_tf flag when creating model
-
- 03 Jun, 2020 1 commit
Abhishek Kumar Mishra authored
* Added links to more community notebooks

Added links to 3 more community notebooks from the git repo: https://github.com/abhimishra91/transformers-tutorials

Different Transformers models are fine-tuned on datasets using PyTorch.

* Update README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 02 Jun, 2020 1 commit
Lorenzo Ampil authored
-
- 29 May, 2020 2 commits
Patrick von Platen authored
-
Iz Beltagy authored
* fix longformer model names in examples
* a better name for the notebook
-