- 17 Oct, 2022 12 commits
-
Ankur Goyal authored
* Fixes
* update expected values
* style
* fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
ANURAG BHANDARI authored
* added type hints for YOLOS PyTorch model
* make fixup
* Update src/transformers/models/yolos/convert_yolos_to_pytorch.py
* Update src/transformers/models/yolos/convert_yolos_to_pytorch.py
* Update src/transformers/models/yolos/convert_yolos_to_pytorch.py

Co-authored-by: Matt <rocketknight1@gmail.com>
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
-
Matt authored
* Partial TF port for ESM model
* Add ESM-TF tests
* Add the various imports for TF-ESM
* TF weight conversion almost ready
* Stop ignoring the decoder weights in PT
* Add tests and lots of fixes
* fix-copies
* Fix imports, add model docs
* Add get_vocab() to tokenizer
* Fix vocab links for pretrained files
* Allow multiple inputs with a sep
* Use EOS as SEP token because ESM vocab lacks SEP
* Correctly return special tokens mask from ESM tokenizer
* make fixup
* Stop testing unsupported embedding resizing
* Handle TF bias correctly
* Skip all models with slow tokenizers in the token classification test
* Fix the batch/unbatcher of pipelines to accommodate the `None` being passed around
* Fix pipeline bug caused by slow tokenizer being different
* Update src/transformers/models/esm/modeling_tf_esm.py
* Update src/transformers/models/esm/modeling_tf_esm.py
* Update src/transformers/models/esm/modeling_tf_esm.py
* Update set_input_embeddings and the copyright notices

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
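The "Use EOS as SEP token because ESM vocab lacks SEP" point in the entry above can be sketched as follows. This is a hedged illustration, not the actual EsmTokenizer code; the helper name and token ids are hypothetical.

```python
# Hedged sketch, not the actual EsmTokenizer implementation: when a
# vocabulary has no dedicated SEP token, the EOS token can double as the
# separator so that multi-sequence inputs are still delimited.
def build_inputs_with_eos_as_sep(tokens_a, tokens_b=None, eos_id=2):
    # single sequence: <tokens_a> <eos>
    if tokens_b is None:
        return tokens_a + [eos_id]
    # sequence pair: <tokens_a> <eos> <tokens_b> <eos>; EOS plays SEP's role
    return tokens_a + [eos_id] + tokens_b + [eos_id]
```

The same aliasing idea works for any tokenizer whose vocabulary predates the single-vs-pair input convention.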
-
Ryan Chan authored
* add type hints to mctct
* run auto style corrections
* change torch.bool to bool
* Update src/transformers/models/mctct/modeling_mctct.py
* Remove optional tags for attention_mask and head_mask
* fix optional tags
* Update src/transformers/models/mctct/modeling_mctct.py

Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
-
Sivaudha authored
* Remove keyword argument X from pipeline predict and transform methods. Since __call__ of pipeline classes requires one positional argument, passing the input as a keyword argument inside the predict and transform methods caused __call__ to fail; this commit changes the keyword argument to a positional argument.
* Implement basic tests for the scikit-compat pipeline interface
* Separate tests instead of running with parameterized based on framework, as both frameworks will not be active at the same time
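The positional-vs-keyword fix described above can be sketched as follows. This is an illustrative wrapper, not the actual transformers code; the class and function names are hypothetical.

```python
# Illustrative sketch of the scikit-learn-compat fix above (not the
# actual transformers code): the wrapped pipeline's __call__ takes its
# input positionally, so predict/transform must forward it positionally.
# Calling self.pipeline(X=X) would raise TypeError because __call__ has
# no parameter named X.
class ScikitCompat:
    def __init__(self, pipeline):
        self.pipeline = pipeline

    def predict(self, X):
        return self.pipeline(X)  # positional, not X=X

    def transform(self, X):
        return self.pipeline(X)  # positional, not X=X
```

Any callable with a single positional input works as the wrapped pipeline here, which is what makes the keyword form brittle.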
-
Ethan Joseph authored
-
Spacefish authored
-
Arthur authored
-
Thomas authored
* trocr Config for doctest
* ran make style
-
AymenBer99 authored
-
AymenBer99 authored
-
Partho authored
* Data2Vec Text Config for doctest
* typo fix
* made suggested changes
-
- 15 Oct, 2022 1 commit
-
AymenBer99 authored
-
- 14 Oct, 2022 27 commits
-
Sylvain Gugger authored
-
Sujay authored
* initial changes
* update the suggested order of import
-
Sujay authored
* adds vision_encoder_decoder to Doc tests
* keep the initial order
-
Sujay authored
* initial commit
* few suggested changes
-
Arthur authored
* simplify loop
* fix layer map split
* update
* update for special variables
* add rag test
* fixup
* revert change: for next PR
-
Arthur authored
* update feature extractor params
* update attention mask handling
* fix doc and pipeline test
* add warning when skipping test
* add whisper translation and transcription test
* fix build doc test
* Correct whisper processor
* make fix copies
* remove sample docstring as it does not fit whisper model
* Update src/transformers/models/whisper/modeling_whisper.py
* fix, doctests are passing
* Nit
* last nit

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
Partho authored
* ResNet Config for doctest
* added empty lines as suggested
* ran make style
-
Sanchit Gandhi authored
* [Whisper] Fix gradient checkpointing (again!)
* [Whisper] Fix checkpointing (again!)
-
Partho authored
-
Partho authored
-
Nicolas Patry authored
* Improve error messaging for the ASR pipeline:
  - Raise the error early (in `_sanitize`) so users don't waste time trying to run queries with invalid params.
  - Fix a check that ran after using `config.inputs_to_logits_ratio`, so it was masked by the failure of a property that does not exist.
  - Added a manual check on s2t for the error message. No non-CTC model seems to be used by the default runner (they are all skipped).
* Removing pdb.
* Stop the early error; it doesn't really work :(.
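The "raise early" idea in the entry above can be sketched as follows. The helper name and parameters are illustrative (modeled on the ASR pipeline's chunking options), not the actual pipeline code.

```python
# Hedged sketch of raising errors at parameter-sanitization time instead
# of deep inside the forward pass: bad inputs surface immediately, before
# any (possibly long) inference run starts. Names are hypothetical.
def sanitize_asr_params(chunk_length_s=None, stride_length_s=None):
    if stride_length_s is not None and chunk_length_s is None:
        # Fail fast: this combination would only break much later otherwise.
        raise ValueError("stride_length_s requires chunk_length_s to be set")
    return {"chunk_length_s": chunk_length_s, "stride_length_s": stride_length_s}
```

The trade-off the commit ran into is visible here too: an early check must exactly mirror what the model can actually accept, or it rejects valid inputs, which is why the author later backed it out.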
-
0xflotus authored
* fix: small error
* fix: another typo error
-
Jing Hua authored
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
Jing Hua authored
-
Jing Hua authored
-
RamitPahwa authored
* GPT tokenizer dependency removed from the Deberta class; fixup made the fast Deberta tokenizer independent of the GPT-2 tokenizer; "Copied from" annotation added; dependency removal done
* Added some missing copied statements
* Added some copied statements
-
Jing Hua authored
-
Yih-Dar authored
* fix flaubert tokenizer
* update
* update
* Final cleanup

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Yih-Dar authored
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Wang, Yi authored
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
-
Pi Esposito authored
* add support for the non-fast TF BERT tokenizer
* add tests for the non-fast TF BERT tokenizer
* fix fast BERT TF tokenizer flag
* double the tokenizers list in the TF tokenizers test to avoid breaking zip on the test output equivalence
* reformat code with black to comply with code quality checks
* trigger ci
-
Yih-Dar authored
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Nouamane Tazi authored
* fix BLOOM ONNX config: the `value` params have `seq_len` as their 2nd axis, as opposed to other models, which have it as the 3rd

Co-authored-by: lewtun <lewis.c.tunstall@gmail.com>
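The axis difference described above matters when naming dynamic axes for ONNX export. The sketch below is illustrative only (not the actual OnnxConfig code), and follows the commit's description: seq_len on the 2nd axis (0-based index 1) for BLOOM's `value` tensors versus the 3rd (index 2) elsewhere.

```python
# Hypothetical helper, not the transformers OnnxConfig implementation:
# builds the dynamic-axes mapping for past `value` tensors, with the
# sequence axis at whichever index the model's cache layout uses.
def past_value_dynamic_axes(num_layers, seq_axis):
    axes = {}
    for i in range(num_layers):
        axes[f"past_key_values.{i}.value"] = {0: "batch", seq_axis: "past_sequence"}
    return axes

bloom_axes = past_value_dynamic_axes(2, seq_axis=1)    # BLOOM layout per the commit
typical_axes = past_value_dynamic_axes(2, seq_axis=2)  # common (batch, heads, seq, dim) layout
```

Mislabeling the sequence axis does not fail at export time; it fails later, when an inference run with a different sequence length hits a fixed dimension, which is why such bugs tend to surface in downstream tooling first.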
-
NielsRogge authored
* Add doc tests
* Make it more consistent

Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
-
Sanchit Gandhi authored
* [Whisper] Don't return attention mask in feature extractor
* remove attention mask from test
* fix failing tests
* quality
-
amyeroberts authored
* Cast masks to np.uint8 before converting to PIL.Image.Image
* Update tests
* Fixup
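A minimal illustration of the cast described above (assuming numpy and Pillow are installed; this is not the transformers implementation itself): `PIL.Image.fromarray` rejects boolean mask arrays, so the mask is cast to `np.uint8` first.

```python
import numpy as np
from PIL import Image

# A boolean segmentation-style mask; Image.fromarray(mask) would raise a
# TypeError because Pillow has no handler for the bool dtype.
mask = np.array([[True, False], [False, True]])

# Cast to uint8 (scaled to 0/255 for visibility) before conversion.
img = Image.fromarray(mask.astype(np.uint8) * 255)
```

A 2-D uint8 array converts to a single-channel ("L" mode) image, which is the usual representation for masks.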
-
Xabier Lahuerta Vazquez authored
* [Doctest] Add `configuration_bigbird_pegasus.py` and `configuration_big_bird.py`
* [Doctest] Re-style `configuration_big_bird.py`
* [Doctest] One python instruction per line
* [Doctest] Fix styling
* [Doctest] More styling fixes
-