- 16 Aug, 2021 7 commits
- sararb authored
- Lysandre Debut authored:
  * Continue on error
  * Specific
  * Temporary patch
- Patrick von Platen authored
- Omar Sanseviero authored:
  * Fix frameworks table so it's alphabetical
  * Update index.rst
  * Don't differentiate between upper and lower case when sorting
- Lysandre authored
- Lysandre authored
- weierstrass_walker authored
- 13 Aug, 2021 7 commits
- Omar Sanseviero authored
- Minwoo Lee authored:
  * Fix omitted lazy import for xlm-prophetnet
  * Update src/transformers/models/xlm_prophetnet/__init__.py
  * Fix style using black
  (Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>)
- Nicolas Patry authored:
  * Fill mask pipelines test updates.
  * Model eval!!
  * Adding slow test with actual values.
  * Making all tests pass (skipping quite a bit).
  * Doc styling.
  * Better doc cleanup.
  * Making an explicit test with no pad token tokenizer.
  * Typo.
- Yih-Dar authored:
  * Fix inconsistency of the last element in hidden_states between PyTorch/Flax GPT2(Neo) (#13102)
  * Fix missing elements in outputs tuple
  * Apply suggestions from code review
  * Fix local variable 'all_hidden_states' referenced before assignment
  * Fix by returning a tuple containing None values
  * Fix quality
  (Co-authored-by: ydshieh <ydshieh@users.noreply.github.com> and Suraj Patil <surajp815@gmail.com>)
- Will Frey authored:
  * Create py.typed: this adds a [py.typed marker as per PEP 561](https://www.python.org/dev/peps/pep-0561/#packaging-type-information) that should be distributed to mark that the package includes (inline) type annotations.
  * Update setup.py: include py.typed as package data.
  * Update setup.py: call `setup(...)` with `zip_safe=False`.
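The packaging change above can be sketched as a minimal `setup.py`; the package name `mypkg` is a hypothetical stand-in, not the project's actual layout:

```python
# Minimal sketch of shipping a PEP 561 py.typed marker, assuming a package
# named mypkg that carries inline type annotations.
from setuptools import find_packages, setup

setup(
    name="mypkg",  # hypothetical package name
    packages=find_packages(),
    # Ship the empty py.typed marker file so type checkers know the
    # installed package provides inline annotations.
    package_data={"mypkg": ["py.typed"]},
    # Type checkers cannot read annotations out of a zipped egg, so keep
    # the installed package unzipped.
    zip_safe=False,
)
```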
- Sylvain Gugger authored
- Gunjan Chhablani authored:
  * Fix VisualBERT docs
  * Show example notebooks as lists
  * Fix style
- 12 Aug, 2021 13 commits
- Bill Schnurr authored:
  * Conditionally declare `TOKENIZER_MAPPING_NAMES` within an `if TYPE_CHECKING` block so that type checkers don't need to evaluate the RHS of the assignment; this improves performance of the Pylance/Pyright type checkers
  * Update src/transformers/models/auto/tokenization_auto.py
  * Add missing import
  * Format
  (Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>)
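The `TYPE_CHECKING` pattern described above can be sketched as follows; the two-entry mapping is a hypothetical stand-in for the real, much larger table:

```python
# Sketch of the pattern: declare the name with an explicit type inside an
# `if TYPE_CHECKING` block. Only static type checkers evaluate that block,
# so they can use the declared type instead of inferring one from the
# large literal below; at runtime the declaration is skipped entirely.
from collections import OrderedDict
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen by Pylance/Pyright, never executed at runtime.
    TOKENIZER_MAPPING_NAMES: "OrderedDict[str, str]"

TOKENIZER_MAPPING_NAMES = OrderedDict(
    [
        ("bert", "BertTokenizer"),  # hypothetical entries
        ("gpt2", "GPT2Tokenizer"),
    ]
)
```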
- Sylvain Gugger authored:
  * Only report failures on failures
  * Fix typo
  * Put it everywhere
- Suraj Patil authored:
  * Allow passing params to image and text feature methods
  * Fix for hybrid CLIP as well
- Sylvain Gugger authored:
  * Remove hf_api module and use huggingface_hub
  * Style
  * Fix to test_fetcher
  * Quality
- Patrick von Platen authored:
  * up
  * up
  * up
- Yih-Dar authored:
  * Change FlaxBartForConditionalGeneration.decode() argument: deterministic -> train
  * Also change the parameter name to train for Flax Marian and mBART
  (Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>)
- Sylvain Gugger authored
- Sylvain Gugger authored:
  * Reactivate test fetchers on scheduled tests with proper git install
  * Proper fetch-depth
- Sylvain Gugger authored
- Kamal Raj authored:
  * TFDeberta: moved weights to build and fixed name scope; added missing comma; bug fixes to enable graph-mode execution; updated setup.py; fixed typo; fixed imports; embedding mask fix; added layer names; avoid automatic incremental names; XSoftmax cleanup; added names to layers; disable keras_serializable; disentangled attention output shape; hidden_size==None using symbolic inputs; test for Deberta in TF; make style
  * Update src/transformers/models/deberta/modeling_tf_deberta.py (repeated review suggestions applied)
  * Removed tensorflow-probability; removed blank line
  * Removed TF experimental API; torch_gather TF implementation from @Rocketknight1
  * Layer name DeBERTa --> deberta
  * Copyright fix
  * Added docs for TFDeberta & make style
  * layer_name change to fix loading from a PT model
  * layer_name change to match the PT model
  * SequenceClassification layer name change, to match the PT model
  * Switched to Keras built-in LayerNormalization
  * Added `TFDeberta` prefix to most layer classes
  * Updated to tf.Tensor in the docstring
  (Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>)
- Gunjan Chhablani authored
- Lysandre Debut authored:
  * Doctests
  * Limit to 4 decimals
  * Try with separate PT/TF tests
  * Remove test for TF
  * Ellipsize the predictions
  * Doctest continue on failure
  (Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>)
- Ibraheem Moosa authored:
  The classification head of AlbertForMultipleChoice uses `hidden_dropout_prob` instead of `classifier_dropout_prob`. This is undesirable, as the classifier head's dropout probability cannot be changed without changing the dropout probabilities of the whole model.
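The fix this commit describes amounts to preferring a dedicated classifier dropout value and falling back to the model-wide one. A minimal sketch, with `classifier_dropout` as a hypothetical helper name and `SimpleNamespace` standing in for the Albert config object:

```python
from types import SimpleNamespace

def classifier_dropout(config):
    """Prefer classifier_dropout_prob; fall back to hidden_dropout_prob."""
    p = getattr(config, "classifier_dropout_prob", None)
    return p if p is not None else config.hidden_dropout_prob

# A config that sets a dedicated classifier dropout...
cfg_with = SimpleNamespace(hidden_dropout_prob=0.1, classifier_dropout_prob=0.3)
# ...and one that does not, so the head reuses the model-wide value.
cfg_without = SimpleNamespace(hidden_dropout_prob=0.1)
```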
- 11 Aug, 2021 3 commits
- Lysandre Debut authored:
  * Install git
  * Add TF tests
  * And last TF test
  * Add in commented code too
  (Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>)
- Gunjan Chhablani authored:
  * Initialize VisualBERT demo
  * Update demo
  * Add commented URL
  * Update README
  * Update README
- Sylvain Gugger authored:
  * Fix doctests for quicktour
  * Adapt causal LM example
  * Remove space
  * Fix until summarization
  * End of task summary
  * Style
  * With last changes in quicktour
- 10 Aug, 2021 10 commits
- Sylvain Gugger authored
- Ibraheem Moosa authored:
  * Use the original key for labels in DataCollatorForTokenClassification. The collator accepts either `label` or `labels` as the label key in its input, but after padding it assigns the padded labels to the key `labels`. If `label` was originally used as the key, the original unpadded labels still remain in the batch; then, at line 192, when we try to convert the batch elements to torch tensors, these original unpadded labels cannot be converted, as the labels for different samples have different lengths.
  * Fixed style.
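The bug amounts to writing padded labels under a different key than the one the features came in with. A minimal sketch of the fix, with `pad_labels` as a hypothetical stand-in for the collator's padding step:

```python
def pad_labels(features, pad_token_id=-100):
    """Pad label sequences to a common length, writing them back under the
    same key ('label' or 'labels') the features originally used, so no
    ragged, unpadded labels are left behind in the batch."""
    label_name = "label" if "label" in features[0] else "labels"
    labels = [f[label_name] for f in features]
    max_len = max(len(lab) for lab in labels)
    for f, lab in zip(features, labels):
        # Overwrite the original key instead of always writing to "labels".
        f[label_name] = lab + [pad_token_id] * (max_len - len(lab))
    return features
```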
- Sylvain Gugger authored
- Sylvain Gugger authored
- Sylvain Gugger authored
- Sylvain Gugger authored
- Sylvain Gugger authored
- Sylvain Gugger authored:
  * Use test fetcher for push tests as well
  * Force diff with last commit for CircleCI on master
  * Fix syntax error
  * Style
  * Schedule nightly tests
- Sylvain Gugger authored
- Sylvain Gugger authored:
  * Fix ModelOutput instantiation from dictionaries
  * Style