- 07 Dec, 2020 1 commit
-
-
Sylvain Gugger authored
* Add copyright everywhere it is missing * Style
-
- 23 Nov, 2020 1 commit
-
-
Jessica Yung authored
* Add pip install upgrade to resolve import error

Add `pip install --upgrade tensorflow-gpu` to remove the error below:

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-2-094fadb93f3f> in <module>()
      1 import torch
----> 2 from transformers import AutoModel, AutoTokenizer, BertTokenizer
      3
      4 torch.set_grad_enabled(False)

4 frames
/usr/local/lib/python3.6/dist-packages/transformers/__init__.py in <module>()
    133
    134 # Pipelines
--> 135 from .pipelines import (
    136     Conversation,
    137     ConversationalPipeline,

/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <module>()
     46     import tensorflow as tf
     47
---> 48     from .modeling_tf_auto import (
     49         TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING,
     50         TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING,

/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_auto.py in <module>()
     49 from .configuration_utils import PretrainedConfig
     50 from .file_utils import add_start_docstrings
---> 51 from .modeling_tf_albert import (
     52     TFAlbertForMaskedLM,
     53     TFAlbertForMultipleChoice,

/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_albert.py in <module>()
     22 import tensorflow as tf
     23
---> 24 from .activations_tf import get_tf_activation
     25 from .configuration_albert import AlbertConfig
     26 from .file_utils import (

/usr/local/lib/python3.6/dist-packages/transformers/activations_tf.py in <module>()
     52     "gelu": tf.keras.layers.Activation(gelu),
     53     "relu": tf.keras.activations.relu,
---> 54     "swish": tf.keras.activations.swish,
     55     "silu": tf.keras.activations.swish,
     56     "gelu_new": tf.keras.layers.Activation(gelu_new),

AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.activations' has no attribute 'swish'
```

I have tried running the colab after this change and it seems to work fine (all the cells run with no errors).

* Update notebooks/02-transformers.ipynb

Only need to upgrade tensorflow, not tensorflow-gpu.

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
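For context, a minimal sketch of the workaround this commit describes, written as notebook cells: upgrade TensorFlow to a release that provides `tf.keras.activations.swish`, then re-run the failing import. The exact version pin below is an assumption for illustration, not taken from the commit.

```python
# Minimal sketch (notebook cells), assuming a version pin is acceptable in the notebook.
!pip install --upgrade "tensorflow>=2.2"   # tf.keras.activations.swish ships with newer TF releases

import tensorflow as tf
print(tf.__version__)
print(tf.keras.activations.swish)          # the attribute the older version was missing

# The import that previously raised the AttributeError now works
from transformers import AutoModel, AutoTokenizer
```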
-
- 02 Nov, 2020 2 commits
-
-
Patrick von Platen authored
-
Martin Monperrus authored
-
- 22 Oct, 2020 2 commits
-
-
Peter Bayerle authored
Looking at the current community notebooks, it seems that few are targeted at absolute beginners and even fewer are written with TensorFlow. This notebook describes absolutely everything a beginner would need to know, including how to save/load their model and use it for new predictions (this is often omitted in tutorials).

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
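Not part of the commit itself, but as a hint of the save/load pattern the description mentions, here is a hedged sketch using the TensorFlow classes from transformers; the model name and paths are illustrative assumptions, not copied from the notebook.

```python
# Sketch of saving a fine-tuned TF model and reloading it for new predictions.
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer

model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# ... fine-tune the model here ...

model.save_pretrained("./my-model")        # writes tf_model.h5 + config.json
tokenizer.save_pretrained("./my-model")

# Later: reload and run a new prediction
reloaded = TFAutoModelForSequenceClassification.from_pretrained("./my-model")
tokenizer = AutoTokenizer.from_pretrained("./my-model")
inputs = tokenizer("This movie was great!", return_tensors="tf")
logits = reloaded(inputs)[0]
```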
-
zolekode authored
* added QG evaluation notebook * Update notebooks/README.md Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 06 Oct, 2020 1 commit
-
-
Sam Shleifer authored
-
- 05 Oct, 2020 1 commit
-
-
Dhaval Taunk authored
-
- 01 Oct, 2020 1 commit
-
-
Muhammad Harris authored
* T5 community notebook added * author link updated * new colab link updated Co-authored-by: harris <muhammad.harris@visionx.io>
-
- 21 Sep, 2020 1 commit
-
-
Nadir El Manouzi authored
-
- 17 Sep, 2020 1 commit
-
-
Dhaval Taunk authored
* Added multilabel classification using DistilBERT notebook to community notebooks
-
- 08 Sep, 2020 1 commit
-
-
Philipp Schmid authored
-
- 31 Aug, 2020 1 commit
-
-
Funtowicz Morgan authored
* Update ONNX notebook to include section on quantization. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co> * Addressing ONNX team comments
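As a rough illustration of what a quantization section like this typically covers, here is a hedged sketch using ONNX Runtime's dynamic quantization API; the file paths are placeholders, not taken from the notebook.

```python
# Sketch: dynamically quantize an already-exported ONNX model (weights to int8).
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    "onnx/bert-base-cased.onnx",              # placeholder path to the exported model
    "onnx/bert-base-cased-quantized.onnx",    # placeholder output path
    weight_type=QuantType.QInt8,              # quantize weights to 8-bit integers
)
```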
-
- 30 Aug, 2020 1 commit
-
-
Thomas Ashish Cherian authored
-
- 20 Aug, 2020 1 commit
-
-
Siddharth Jain authored
-
- 08 Aug, 2020 1 commit
-
-
elsanns authored
Co-authored-by: eliska <3648991+elisans@users.noreply.github.com>
-
- 28 Jul, 2020 1 commit
-
-
Tanmay Thakur authored
Signed-off-by: lordtt13 <thakurtanmay72@yahoo.com>
-
- 10 Jul, 2020 1 commit
-
-
Patrick von Platen authored
-
- 08 Jul, 2020 2 commits
-
-
Patrick von Platen authored
* Created with Colaboratory * Delete old file
-
Patrick von Platen authored
* tf_train * adapt timing for TPU * fix timing * update notebook * add tests
-
- 01 Jul, 2020 1 commit
-
-
Patrick von Platen authored
* Add Reformer MLM notebook * Update notebooks/README.md
-
- 29 Jun, 2020 1 commit
-
-
Patrick von Platen authored
* first doc version * add benchmark docs * fix typos * improve README * Update docs/source/benchmarks.rst Co-authored-by: Lysandre Debut <lysandre@huggingface.co> * fix naming and docs Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
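For readers unfamiliar with the benchmark utilities those docs cover, a short sketch of the documented usage pattern follows; the model name, batch sizes, and sequence lengths are illustrative choices, not taken from the commit.

```python
# Sketch: measure inference time and memory for a model across a few configurations.
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=["bert-base-uncased"],       # illustrative model
    batch_sizes=[8],
    sequence_lengths=[32, 128, 512],
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()               # prints a table of time/memory per configuration
```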
-
- 26 Jun, 2020 2 commits
-
-
Thomas Wolf authored
* remove references to old API in docstring - update data processors * style * fix tests - better type checking error messages * better type checking * include awesome fix by @LysandreJik for #5310 * updated doc and examples
-
Patrick von Platen authored
* add notebook * Created with Colaboratory * move notebook to correct folder * correct link * correct filename * better name
-
- 24 Jun, 2020 1 commit
-
-
Sylvain Gugger authored
-
- 22 Jun, 2020 1 commit
-
-
Michaël Benesty authored
* Add link to new community notebook (optimization)

Related to https://github.com/huggingface/transformers/issues/4842#event-3469184635

This notebook is about benchmarking model training with and without the dynamic padding optimization: https://github.com/ELS-RD/transformers-notebook

Using dynamic padding on MNLI provides a **4.7 times training time reduction**, with max pad length set to 512. The effect is strong because few examples in this dataset are >> 400 tokens. In practice it will depend on the dataset, but it always brings an improvement and, after more than 20 experiments listed in this [article](https://towardsdatascience.com/divide-hugging-face-transformers-training-time-by-2-or-more-21bf7129db9q-21bf7129db9e?source=friends_link&sk=10a45a0ace94b3255643d81b6475f409), it seems not to hurt performance. Following advice from @patrickvonplaten, I opened the PR myself :-)

* Update notebooks/README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
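A minimal sketch of the dynamic-padding idea the linked notebook benchmarks: pad each batch only to its own longest sequence instead of a fixed global max length. The dataset, model name, and function names below are illustrative assumptions.

```python
# Sketch: per-batch ("dynamic") padding via the tokenizer in a collate function.
from torch.utils.data import DataLoader
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = [{"text": "a short premise"}, {"text": "a much longer premise " * 30}]  # toy data

def collate_dynamic_padding(batch):
    texts = [example["text"] for example in batch]
    # padding=True pads to the longest sequence in this batch, not to a fixed max_length
    return tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

loader = DataLoader(dataset, batch_size=2, collate_fn=collate_dynamic_padding)
for batch in loader:
    print(batch["input_ids"].shape)  # the sequence dimension varies from batch to batch
```

Compared with padding every example to 512 tokens, this processes far fewer padding tokens per step, which is where the reported speedup comes from.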
-
- 18 Jun, 2020 1 commit
-
-
Pri Oberoi authored
* Add missing arg when creating model * Fix typos * Remove from_tf flag when creating model
-
- 03 Jun, 2020 1 commit
-
-
Abhishek Kumar Mishra authored
* Added links to more community notebooks

Added links to 3 more community notebooks from the repo https://github.com/abhimishra91/transformers-tutorials, where different Transformers models are fine-tuned on datasets using PyTorch.

* Update README.md

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 02 Jun, 2020 1 commit
-
-
Lorenzo Ampil authored
-
- 29 May, 2020 2 commits
-
-
Patrick von Platen authored
-
Iz Beltagy authored
* fix longformer model names in examples * a better name for the notebook
-
- 28 May, 2020 3 commits
-
-
Iz Beltagy authored
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Suraj Patil authored
-
Lavanya Shukla authored
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 26 May, 2020 1 commit
-
-
ohmeow authored
* adding BART summarization how-to community notebook * Update notebooks/README.md Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 22 May, 2020 2 commits
-
-
Patrick von Platen authored
-
Patrick von Platen authored
-
- 20 May, 2020 1 commit
-
-
Nathan Cooper authored
-
- 19 May, 2020 1 commit
-
-
Suraj Patil authored
* add T5 fine-tuning notebook [Community notebooks] * Update README.md Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 18 May, 2020 1 commit
-
-
Funtowicz Morgan authored
* Adding optimizations block from ONNXRuntime. * Turn off external data format by default for PyTorch export. * Correct the way use_external_format is passed through the cmdline args.
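A rough sketch of the kind of ONNX Runtime optimization setup such a block demonstrates; the model path is a placeholder and the specific settings are assumptions rather than the notebook's exact code.

```python
# Sketch: load an exported ONNX model with graph optimizations enabled.
import onnxruntime as ort

options = ort.SessionOptions()
# Let ONNX Runtime apply all available graph optimizations (constant folding, node fusion, ...)
options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
# Optionally persist the optimized graph for inspection or reuse
options.optimized_model_filepath = "onnx/bert-base-cased-optimized.onnx"

session = ort.InferenceSession("onnx/bert-base-cased.onnx", options)  # placeholder path
```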
-