- 19 Aug, 2021 1 commit
-
-
Allan Lin authored
* Update torch.utils.data namespaces to the latest. * Format * Update DataLoader. * Style
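A hedged sketch of what this kind of namespace cleanup typically looks like (the exact imports touched are not listed in the message):

```python
# Sketch (assumed): import from the public torch.utils.data namespace
# rather than private submodules like torch.utils.data.dataset.
from torch.utils.data import DataLoader, Dataset, IterableDataset
```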
-
- 18 Aug, 2021 2 commits
-
-
Jannis Vamvas authored
-
Patrick von Platen authored
* up * up
-
- 17 Aug, 2021 2 commits
-
-
Ori Ram authored
* splinter template * initialize splinter classes * Splinter Tokenizer * splinter.rst * tokenization fixes * Documentation & some minor variable name changes * bug fix (added back question_token_id to config) + variable names * Minor bug fixes + variable name changes * Fix Splinter references after merge with new transformers * changes after running make style & quality * Fix documentation unindent * Fix doc indentation in tokenization_splinter * Fix also SplinterTokenizerFast * Add Splinter to index.rst and README * Fix double whitespace from index.rst * Fixed index.rst with 'make fix-copies' * Update docs/source/model_doc/splinter.rst Co-authored-by:
Suraj Patil <surajp815@gmail.com> * Update docs/source/model_doc/splinter.rst Co-authored-by:
Suraj Patil <surajp815@gmail.com> * Update docs/source/model_doc/splinter.rst Co-authored-by:
Suraj Patil <surajp815@gmail.com> * Update docs/source/model_doc/splinter.rst Co-authored-by:
Suraj Patil <surajp815@gmail.com> * Update src/transformers/models/splinter/__init__.py Co-authored-by:
Suraj Patil <surajp815@gmail.com> * Added "copied from BERT" comments * Removing unnecessary code from modeling_splinter * Update README.md Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/splinter/configuration_splinter.py Co-authored-by:
Suraj Patil <surajp815@gmail.com> * Remove references to TF modeling from splinter * Update src/transformers/models/splinter/modeling_splinter.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Remove unnecessary check * Update src/transformers/models/splinter/modeling_splinter.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Add differences between Splinter and Bert tokenizers * Update src/transformers/models/splinter/modeling_splinter.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/splinter/tokenization_splinter_fast.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Remove unnecessary check * Doc formatting * Update src/transformers/models/splinter/tokenization_splinter.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Update src/transformers/models/splinter/tokenization_splinter.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * bug fix: remove load_tf_weights attribute * Some minor quality changes * Update docs/source/model_doc/splinter.rst Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Update src/transformers/models/splinter/configuration_splinter.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Change FullyConnectedLayer to SplinterFullyConnectedLayer * Variable naming * Remove gather_positions function * Remove ClassificationHead as it's outdated * Update src/transformers/models/splinter/modeling_splinter.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Remove hardcoded 102 token id * Minor style change * Added "tau" organization to all model identifiers & URLs * Added tau to the tests as well * Copy-from comments * Removed all unnecessary classes (e.g. SplinterForMaskedLM) * Running make fix-copies * Bug fix: Further removed unnecessary classes * Add Splinter to AutoTokenization * Add an integration test for Splinter * Removed initialize_new_qass from config - It will be done through different checkpoints * Removed `initialize_new_qass` from documentation as well * Added new checkpoint names (`tau/splinter-base-qass` and same for large) in the code * Minor change to test * SplinterTokenizer no longer inherits from BertTokenizer * SplinterTokenizerFast also no longer inherits from Bert * style and quality * bug fix: importing torch in tests only if it's available * Auto mappings * Changed copyrights in Splinter's files * Update src/transformers/models/splinter/configuration_splinter.py Co-authored-by:
Lysandre Debut <lysandre@huggingface.co> Co-authored-by:
yuvalkirstain <kirstain.yuval@gmail.com> Co-authored-by:
Suraj Patil <surajp815@gmail.com> Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Lysandre <lysandre.debut@reseau.eseo.fr> Co-authored-by:
Lysandre Debut <lysandre@huggingface.co>
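A minimal usage sketch for the new model, using the `tau/splinter-base-qass` checkpoint named in the commit above; the question-answering call pattern here is the standard transformers one and is an assumption, not quoted from the PR:

```python
from transformers import SplinterForQuestionAnswering, SplinterTokenizer

# Checkpoint name taken from the commit message above.
tokenizer = SplinterTokenizer.from_pretrained("tau/splinter-base-qass")
model = SplinterForQuestionAnswering.from_pretrained("tau/splinter-base-qass")

question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare."
inputs = tokenizer(question, context, return_tensors="pt")
outputs = model(**inputs)  # start_logits / end_logits for span extraction
```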
-
Nicolas Patry authored
* Starting to optimize ByT5. * Making ByT5Tokenizer faster. * Even faster. * Cleaning up.
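For context, ByT5 tokenizes raw UTF-8 bytes rather than subwords, so the hot path being optimized is the per-byte conversion loop; a usage sketch (checkpoint name assumed for illustration):

```python
from transformers import ByT5Tokenizer

# ByT5 has no subword vocabulary: each byte maps to one id (plus a few
# special tokens), so tokenization speed is dominated by this conversion.
tokenizer = ByT5Tokenizer.from_pretrained("google/byt5-small")
ids = tokenizer("hello").input_ids  # one id per byte, plus the </s> id
```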
-
- 16 Aug, 2021 7 commits
-
-
sararb authored
-
Lysandre Debut authored
* Continue on error * Specific * Temporary patch
-
Patrick von Platen authored
-
Omar Sanseviero authored
* Fix frameworks table so it's alphabetical * Update index.rst * Don't differentiate between upper and lower case when sorting
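The case-insensitive ordering boils down to sorting with a lowercased key, e.g.:

```python
# Without key=str.lower, all uppercase names would sort before lowercase ones.
frameworks = ["XLNet", "albert", "BERT", "electra"]
print(sorted(frameworks, key=str.lower))  # ['albert', 'BERT', 'electra', 'XLNet']
```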
-
Lysandre authored
-
Lysandre authored
-
weierstrass_walker authored
-
- 13 Aug, 2021 7 commits
-
-
Omar Sanseviero authored
-
Minwoo Lee authored
* Fix omitted lazy import for xlm-prophetnet * Update src/transformers/models/xlm_prophetnet/__init__.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Fix style using black Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
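A hedged sketch of the lazy-import registry these `__init__.py` files use (entry names and import path are assumptions): a submodule omitted from `_import_structure` is invisible at runtime even though a direct import of it type-checks fine.

```python
from transformers.file_utils import is_sentencepiece_available

# Sketch of the pattern in a model __init__.py; the real file wires this
# dict into a _LazyModule. Entry names here are assumptions.
_import_structure = {
    "configuration_xlm_prophetnet": ["XLMProphetNetConfig"],
}

if is_sentencepiece_available():
    # The kind of entry whose omission this commit fixes:
    _import_structure["tokenization_xlm_prophetnet"] = ["XLMProphetNetTokenizer"]
```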
-
Nicolas Patry authored
* Fill mask pipelines test updates. * Model eval!! * Adding slow test with actual values. * Making all tests pass (skipping quite a bit). * Doc styling. * Better doc cleanup. * Making an explicit test with no pad token tokenizer. * Typo.
-
Yih-Dar authored
* Fix inconsistency of the last element in hidden_states between PyTorch/Flax GPT2(Neo) (#13102) * Fix missing elements in outputs tuple * Apply suggestions from code review Co-authored-by:
Suraj Patil <surajp815@gmail.com> * Fix local variable 'all_hidden_states' referenced before assignment * Fix by returning tuple containing None values * Fix quality Co-authored-by:
ydshieh <ydshieh@users.noreply.github.com> Co-authored-by:
Suraj Patil <surajp815@gmail.com>
-
Will Frey authored
* Create py.typed: this creates a [py.typed marker as per PEP 561](https://www.python.org/dev/peps/pep-0561/#packaging-type-information) that should be distributed to mark that the package includes (inline) type annotations. * Update setup.py: include py.typed as package data * Update setup.py: call `setup(...)` with `zip_safe=False`.
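In sketch form, the packaging side of PEP 561 looks like this (names matched to this repo's src layout; a sketch, not the full setup.py):

```python
from setuptools import find_packages, setup

setup(
    name="transformers",
    package_dir={"": "src"},
    packages=find_packages("src"),
    # Ship the py.typed marker so type checkers trust the inline annotations.
    package_data={"transformers": ["py.typed"]},
    # PEP 561 requires that typed packages not be installed as zipped eggs.
    zip_safe=False,
)
```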
-
Sylvain Gugger authored
-
Gunjan Chhablani authored
* Fix VisualBERT docs * Show example notebooks as lists * Fix style
-
- 12 Aug, 2021 13 commits
-
-
Bill Schnurr authored
* Conditionally declare `TOKENIZER_MAPPING_NAMES` within an `if TYPE_CHECKING` block so that type checkers don't need to evaluate the RHS of the assignment; this improves performance of the pylance/pyright type checkers * Update src/transformers/models/auto/tokenization_auto.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * adding missing import * format Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
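The pattern looks roughly like this (mapping contents elided; the exact value type is an assumption):

```python
from collections import OrderedDict
from typing import TYPE_CHECKING, Optional, Tuple

if TYPE_CHECKING:
    # Type checkers read only this branch: they get the declared type without
    # analyzing the huge literal on the runtime branch below.
    TOKENIZER_MAPPING_NAMES: "OrderedDict[str, Tuple[Optional[str], Optional[str]]]" = OrderedDict()
else:
    TOKENIZER_MAPPING_NAMES = OrderedDict(
        [
            ("bert", ("BertTokenizer", "BertTokenizerFast")),
            # ... many more entries that pylance/pyright no longer evaluate
        ]
    )
```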
-
Sylvain Gugger authored
* Only report failures on failures * Fix typo * Put it everywhere
-
Suraj Patil authored
* allow passing params to image and text feature method * fix for hybrid clip as well
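A usage sketch of the new argument (method names from the Flax CLIP API; passing the model's own params here purely for illustration):

```python
from transformers import CLIPTokenizer, FlaxCLIPModel

model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

inputs = tokenizer(["a photo of a cat"], return_tensors="np")
# Explicit weights can now be threaded through instead of always using
# model.params (useful for pmap-replicated or modified parameters).
text_features = model.get_text_features(inputs["input_ids"], params=model.params)
```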
-
Sylvain Gugger authored
* Remove hf_api module and use huggingface_hub * Style * Fix to test_fetcher * Quality
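Callers that went through `transformers.hf_api` now use the standalone client directly; a sketch of the huggingface_hub API:

```python
from huggingface_hub import HfApi

api = HfApi()
# Hub metadata queries now come from huggingface_hub, not transformers.hf_api.
info = api.model_info("bert-base-uncased")
print(info.sha)
```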
-
Patrick von Platen authored
* up * up * up
-
Yih-Dar authored
* Change FlaxBartForConditionalGeneration.decode() argument: deterministic -> train * Also change the parameter name to train for flax marian and mbart Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
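In use, the renamed flag looks like this (a sketch; the encode/decode call pattern is assumed from the Flax seq2seq API):

```python
import jax.numpy as jnp
from transformers import BartTokenizer, FlaxBartForConditionalGeneration

model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-base")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")

inputs = tokenizer("Hello world", return_tensors="np")
encoder_outputs = model.encode(**inputs)
decoder_input_ids = jnp.full((1, 1), model.config.decoder_start_token_id, dtype="i4")

# `train` replaces the old (inverted) `deterministic` argument:
outputs = model.decode(decoder_input_ids, encoder_outputs, train=False)
```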
-
Sylvain Gugger authored
-
Sylvain Gugger authored
* Reactivate test fetchers on scheduled tests with proper git install * Proper fetch-depth
-
Sylvain Gugger authored
-
Kamal Raj authored
* TFDeberta: moved weights to build and fixed name scope * added missing `,` * bug fixes to enable graph mode execution * updated setup.py * fixing typo * fix imports * embedding mask fix * added layer names, avoid automatic incremental names + XSoftmax cleanup * added names to layers * disable keras_serializable * Disentangled attention output shape, hidden_size==None using symbolic inputs * test for Deberta tf * make style * Update src/transformers/models/deberta/modeling_tf_deberta.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Update src/transformers/models/deberta/modeling_tf_deberta.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Update src/transformers/models/deberta/modeling_tf_deberta.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Update src/transformers/models/deberta/modeling_tf_deberta.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Update src/transformers/models/deberta/modeling_tf_deberta.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Update src/transformers/models/deberta/modeling_tf_deberta.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Update src/transformers/models/deberta/modeling_tf_deberta.py Co-authored-by:
Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * removed tensorflow-probability * removed blank line * removed tf experimental api + torch_gather tf implementation from @Rocketknight1 * layer name DeBERTa --> deberta * copyright fix * added docs for TFDeberta & make style * layer_name change to fix load from pt model * layer_name change as pt model * SequenceClassification layer name change, to same as pt model * switched to keras built-in LayerNormalization * added `TFDeberta` prefix to most layer classes * updated to tf.Tensor in the docstring
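For the gather replacement, one way to mirror `torch.gather` along the last axis in plain TF is `tf.gather` with `batch_dims` (a sketch of the idea, not necessarily the exact implementation credited above):

```python
import tensorflow as tf

def gather_like_torch(params: tf.Tensor, indices: tf.Tensor) -> tf.Tensor:
    # Equivalent of torch.gather(params, -1, indices) for same-rank tensors:
    # out[..., k] = params[..., indices[..., k]]
    return tf.gather(params, indices, axis=-1, batch_dims=indices.shape.rank - 1)

x = tf.constant([[10.0, 20.0, 30.0], [40.0, 50.0, 60.0]])
idx = tf.constant([[2, 0], [1, 1]])
print(gather_like_torch(x, idx))  # [[30. 10.] [50. 50.]]
```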
-
Gunjan Chhablani authored
-
Lysandre Debut authored
* Doctests * Limit to 4 decimals * Try with separate PT/TF tests * Remove test for TF * Ellipsize the predictions * Doctest continue on failure Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
-
Ibraheem Moosa authored
The classification head of AlbertForMultipleChoice uses `hidden_dropout_prob` instead of `classifier_dropout_prob`. This is not desirable, as we cannot change the classifier head's dropout probability without changing the dropout probabilities of the whole model.
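A sketch of the corrected head (config attribute names as in AlbertConfig; the class itself is hypothetical, not the actual transformers module):

```python
import torch.nn as nn

class AlbertMultipleChoiceHead(nn.Module):
    # Hedged sketch: the head's dropout reads classifier_dropout_prob, so it
    # is tunable independently of the body's hidden_dropout_prob.
    def __init__(self, config):
        super().__init__()
        self.dropout = nn.Dropout(config.classifier_dropout_prob)  # was hidden_dropout_prob
        self.classifier = nn.Linear(config.hidden_size, 1)

    def forward(self, pooled_output):
        return self.classifier(self.dropout(pooled_output))
```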
-
- 11 Aug, 2021 3 commits
-
-
Lysandre Debut authored
* Install git * Add TF tests * And last TF test * Add in commented code too Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
-
Gunjan Chhablani authored
* Initialize VisualBERT demo * Update demo * Add commented URL * Update README * Update README
-
Sylvain Gugger authored
* Fix doctests for quicktour * Adapt causal LM example * Remove space * Fix until summarization * End of task summary * Style * With last changes in quicktour
-
- 10 Aug, 2021 5 commits
-
-
Sylvain Gugger authored
-
Ibraheem Moosa authored
* Use original key for label in DataCollatorForTokenClassification. DataCollatorForTokenClassification accepts either `label` or `labels` as the label key in its input. However, after padding the labels it assigns the padded labels to the key `labels`. If `label` was originally used as the key, the original unpadded labels still remain in the batch. Then at line 192, when we try to convert the batch elements to torch tensors, these original unpadded labels cannot be converted because the labels for different samples have different lengths. * Fixed style.
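A sketch of the corrected behavior: detect which key the features actually use and write the padded labels back under that same key, so no unpadded copy survives into the tensor-conversion step (helper name hypothetical; pad id as in transformers):

```python
label_pad_token_id = -100

def pad_labels_in_place(features):
    # Pad under the original key ("label" or "labels"), not always "labels".
    label_name = "label" if "label" in features[0] else "labels"
    labels = [feature[label_name] for feature in features]
    max_len = max(len(l) for l in labels)
    for feature, l in zip(features, labels):
        feature[label_name] = list(l) + [label_pad_token_id] * (max_len - len(l))
    return features

print(pad_labels_in_place([{"label": [1, 2]}, {"label": [3]}]))
# [{'label': [1, 2]}, {'label': [3, -100]}]
```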
-
Sylvain Gugger authored
-
Sylvain Gugger authored
-
Sylvain Gugger authored
-