- 03 Nov, 2021 9 commits
-
-
Sylvain Gugger authored
* Pin Keras because they messed their release * Put != instead of < * Try this way * Back to the beginning but more aggressive
-
Nicolas Patry authored
-
Dan Shirron authored
* Fix of issue #13327: Wrong weight initialization for TF T5 model * run black formatter * fix typo * remove my name tag from comments Co-authored-by: Dan Shirron <dan.shirron@intel.com>
-
Nicolas Patry authored
* Adding support for `truncation` parameter on `feature-extraction` pipeline. Fixes #14183 * Fixing tests on ibert, longformer, and roberta. * Rebase fix.
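A minimal, hedged usage sketch of the new call-time `truncation` flag on the `feature-extraction` pipeline; the checkpoint name is illustrative and the exact output nesting is assumed.

```python
# Hedged sketch: `truncation=True` is assumed to be accepted at call time so that
# over-long inputs are cut to the model's maximum length instead of raising an error.
from transformers import pipeline

extractor = pipeline("feature-extraction", model="distilbert-base-uncased")  # illustrative checkpoint
long_text = "a very long document " * 1000
features = extractor(long_text, truncation=True)
print(len(features[0]))  # number of (truncated) tokens for the single input
```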
-
Dean Wyatte authored
minimal fixes to run DataCollatorForWholeWordMask with return_tensors="np" and return_tensors="tf" (#13891) * more consistent implementation for numpy_mask_tokens
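A hedged sketch of the NumPy return path; the BERT checkpoint and masking probability are illustrative, and `return_tensors="tf"` is assumed to behave analogously.

```python
# Hedged sketch: whole-word masking collation returning NumPy arrays instead of torch tensors.
from transformers import BertTokenizerFast, DataCollatorForWholeWordMask

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")  # illustrative checkpoint
collator = DataCollatorForWholeWordMask(
    tokenizer=tokenizer,
    mlm_probability=0.15,
    return_tensors="np",  # "tf" is assumed to work the same way
)
batch = collator([tokenizer("hello world"), tokenizer("whole word masking")])
print(batch["input_ids"].shape, batch["labels"].shape)
```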
-
Mishig Davaadorj authored
* Fix img load rotation * Add `load_image` to `image_utils.py` * Implement LoadImageTester * Use hf-internal-testing dataset * Add img utils comments * Refactor LoadImageTester * Import load_image under is_vision_available
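A short, hedged sketch of the new helper; the path is a placeholder, and the accepted input types (URL, local path, or PIL image) are assumed from the commit description.

```python
# Hedged sketch: load_image is assumed to return a PIL.Image with EXIF rotation applied.
from transformers.image_utils import load_image

image = load_image("path/to/photo_with_exif_rotation.jpg")  # placeholder path; a URL should also work
print(image.size)
```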
-
Patrick von Platen authored
-
Yih-Dar authored
* Add cross attentions to TFGPT2Model * change to is_pt_tf_cross_test * A minor correction to a comment * Remove n_ctx when creating self.crossattention Co-authored-by: ydshieh <ydshieh@users.noreply.github.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
NielsRogge authored
* Add LayoutXLMTokenizer and LayoutXLMTokenizerFast
* Fix styling issues
* Fix more styling issues
* Fix more styling issues
* Fix docstring
* Fix unit tests
* Fix docs
* Fix unit tests
* Fix typos and styling issues
* Fix styling issues
* Fix docstring
* Make all tests of test_tokenization_layoutxlm pass
* Add LayoutXLMProcessor
* Make fixup
* Make all LayoutXLMProcessor tests pass
* Minor fixes
* Leave LayoutLMv2Processor tests unchanged
* Fix code quality
* Move LayoutXLM tokenizers and processor to separate folder
* Fix code quality
* Apply suggestions from code review
* Replace assertions by value errors
* Remove methods from fast tokenizer
Co-authored-by: King Yiu Suen <kingyiusuen@gmail.com>
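A hedged usage sketch of the new processor; the checkpoint name is illustrative, and the OCR step is assumed to mirror LayoutLMv2Processor (pytesseract is needed unless words and boxes are supplied).

```python
# Hedged sketch: LayoutXLMProcessor is assumed to combine the feature extractor (with OCR)
# and the new LayoutXLM tokenizer, as the LayoutLMv2 processor does.
from PIL import Image
from transformers import LayoutXLMProcessor

processor = LayoutXLMProcessor.from_pretrained("microsoft/layoutxlm-base")  # illustrative checkpoint
image = Image.open("path/to/scanned_document.png").convert("RGB")  # placeholder path
encoding = processor(image, return_tensors="pt")
print(encoding.keys())  # expected: input_ids, attention_mask, bbox, image
```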
-
- 02 Nov, 2021 7 commits
-
-
Sylvain Gugger authored
* Update Transformers to huggingface_hub >= 0.1.0 * Forgot to save... * Style * Fix test
-
lumliolum authored
* add Beit model output class * inheriting from BaseModelOutputWithPooling * updated docs if use_mean_pooling is False * added beit specific outputs in model docs * changed the import path * Fix docs Co-authored-by: Niels Rogge <niels.rogge1@gmail.com>
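A hedged sketch of the `use_mean_pooling` switch; the behaviour (CLS token vs. mean of patch tokens) is assumed from the docs update mentioned above, and the model is randomly initialized for brevity.

```python
# Hedged sketch: with use_mean_pooling=False the pooled output is assumed to come from
# the [CLS] token rather than the mean of the patch tokens.
import torch
from transformers import BeitConfig, BeitModel

model = BeitModel(BeitConfig(use_mean_pooling=False))  # random weights, illustration only
outputs = model(pixel_values=torch.randn(1, 3, 224, 224))
print(outputs.pooler_output.shape)  # (1, hidden_size)
```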
-
Sylvain Gugger authored
-
Sylvain Gugger authored
-
Anton Lozhkov authored
* Add audio-classification benchmarking results * fix distilhubert path
-
Yih-Dar authored
* check test_configuration_tie * Fix test_configuration_tie * make test slow again * Remove property and use model.module.bind * revert to slow test Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Li-Huai (Allan) Lin authored
* Fix generation docstring * Style
-
- 01 Nov, 2021 9 commits
-
-
NielsRogge authored
* Add first draft
* Make forward pass work
* Improve conversion script
* Add notebook that checks if it works
* Add BeitForSemanticSegmentation to the tests
* More improvements
* Make BeitForSemanticSegmentation consistent with Segformer
* Small bug fix
* Add BeitForSemanticSegmentation to docs
* Make sure model doesn't output hidden states when the user doesn't want to
* Make it possible to convert the large model
* Fix issue
* Fix conversion script for large model
* Add auxiliary_head option to semantic segmentation model
* Apply suggestions from @sgugger's review
* Apply suggestions from code review
* Fix failing test
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
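A hedged inference sketch for the new head; the ADE20k checkpoint name and the reduced logits resolution are assumptions.

```python
# Hedged sketch: semantic segmentation with BEiT; logits are assumed to be returned per
# ADE20k class at a reduced spatial resolution.
import torch
from PIL import Image
from transformers import BeitFeatureExtractor, BeitForSemanticSegmentation

checkpoint = "microsoft/beit-base-finetuned-ade-640-640"  # assumed checkpoint name
feature_extractor = BeitFeatureExtractor.from_pretrained(checkpoint)
model = BeitForSemanticSegmentation.from_pretrained(checkpoint)

image = Image.open("path/to/scene.jpg").convert("RGB")  # placeholder path
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (batch, num_labels, height, width) at reduced resolution
```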
-
Walter Martin authored
Signed-off-by: Walter Martin <wamartin@microsoft.com>
-
Suraj Patil authored
* enable common tests, small fixes * don't tie word embeds * don't ignore lm_head
-
mathor authored
-
Prabhudatta Das authored
* raising exceptions instead of using assertions for a few models * fixed formatting issues * fixing copy inconsistencies
-
Nicolas Patry authored
in `base.py` not in subclasses.
-
Nicolas Patry authored
-
NielsRogge authored
-
Yih-Dar authored
* Add missing models to models/__init__.py * Fix issues previously undetected * Add UniSpeechSatForPreTraining to all_model_classes * fix unispeech sat * fix * Add check_model_list() to check_repo.py * Remove _ignore_models = ["bort"] Co-authored-by: ydshieh <ydshieh@users.noreply.github.com> Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
-
- 29 Oct, 2021 11 commits
-
-
Lysandre authored
-
Lysandre authored
-
Lysandre Debut authored
* Torch 1.10 * torch scatter for 1.10 * style * Skip tests ok
-
Haram Lee authored
-
Nicolas Patry authored
* Fixing image segmentation for inference mode. * Update src/transformers/pipelines/base.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Sylvain Gugger authored
* Generalize problem_type to all classification models * Missing import * Deberta BC and fix tests * Fix template * Missing imports * Revert change to reformer test * Fix style
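A hedged sketch of the generalized `problem_type` option; the checkpoint and label setup are illustrative, and the multi-label branch is assumed to use a BCE-with-logits loss.

```python
# Hedged sketch: problem_type="multi_label_classification" is assumed to switch the loss
# to BCEWithLogitsLoss, so labels are passed as a multi-hot float tensor.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",  # illustrative checkpoint; the classifier head is freshly initialized
    num_labels=3,
    problem_type="multi_label_classification",  # or "single_label_classification" / "regression"
)
labels = torch.tensor([[1.0, 0.0, 1.0]])
outputs = model(input_ids=torch.tensor([[101, 7592, 2088, 102]]), labels=labels)
print(outputs.loss)
```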
-
Sylvain Gugger authored
* Fix pipeline tests env and fetch * Fix quality
-
Nicolas Patry authored
* Adding `handle_long_generation` parameter for `text-generation` pipeline.
* More error handling
* Fixing tests by dropping tf support on this functionality, it needs `max_new_tokens` to make it possible to understand the user's intent. Otherwise, `max_length` == `tokenizer.model_max_length` < input_ids.shape[0].
* Fixing doc ?
* Doc ?
* Remove link from doc.
* Caught an issue on roberta.
* Damn doc.
* Non BC proposal ?
* Cleaning the fix ?
* Finally using only a test override.
* Don't need to modify this.
* Bad print.
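A hedged sketch of the new option; `"hole"` as the accepted value and the left-trimming behaviour are assumptions based on the description above (PyTorch only, and `max_new_tokens` is required).

```python
# Hedged sketch: handle_long_generation="hole" is assumed to drop tokens from the start of
# the prompt so that prompt + max_new_tokens still fits the model's context window.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # illustrative checkpoint
very_long_prompt = "once upon a time " * 2000  # far longer than GPT-2's 1024-token context
outputs = generator(very_long_prompt, max_new_tokens=20, handle_long_generation="hole")
print(outputs[0]["generated_text"][-200:])
```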
-
Daniel Stancl authored
* Add support for the fast (Rust) implementation of BlenderbotTokenizer
* Fix a converter and a typo in a doc
* Apply patil-suraj's suggestion
* (Nitpick) Fast tokenization -> Fast Tokenization in doc
* Apply SaulLu's suggestion
* Apply Narsil's suggestion to fix test pipelines
* Add encoder_no_repeat_ngram_size according to Narsil's suggestion
* Revert the last (unnecessary) commit
* Override pipeline config for Blenderbot to allow for larger pos. emb.
* make fix-copies
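A hedged sketch of the fast tokenizer; the checkpoint name is illustrative, and conversion from the slow tokenizer's BPE files is assumed to happen transparently.

```python
# Hedged sketch: loading the Rust-backed Blenderbot tokenizer added by this change.
from transformers import BlenderbotTokenizerFast

tokenizer = BlenderbotTokenizerFast.from_pretrained("facebook/blenderbot-400M-distill")  # illustrative
encoding = tokenizer("Hello, how are you?", return_tensors="pt")
print(encoding["input_ids"])
```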
-
Thomas Wang authored
* Remove n_ctx from configs * Fix GPTJ and OpenAIGPT, both are acceptable breaking changes as there are no configs such that it breaks * Remove unnecessary n_positions from TFOpenAIGPT
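A hedged sketch of what the change means for users: the maximum context length is assumed to be governed solely by `n_positions` once `n_ctx` is gone.

```python
# Hedged sketch: configuring the context length via n_positions only (n_ctx assumed removed).
from transformers import GPT2Config

config = GPT2Config(n_positions=1024)
print(config.n_positions)
```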
-
Nicolas Patry authored
* Tentative enabling of `batch_size` for pipelines.
* Add systematic test for pipeline batching.
* Enabling batch_size on almost all pipelines - Not `zero-shot` (it's already passing stuff as batched so trickier) - Not `QA` (preprocess uses squad features, we need to switch to real tensors at this boundary).
* Adding `min_length_for_response` for conversational.
* Making CTC, speech mappings available regardless of framework.
* Attempt at fixing automatic tests (ffmpeg not enabled for fast tests)
* Removing ffmpeg dependency in tests.
* Small fixes.
* Slight cleanup.
* Adding docs and addressing comments.
* Quality.
* Update docs/source/main_classes/pipelines.rst Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/pipelines/question_answering.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/pipelines/zero_shot_classification.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Improving docs.
* Update docs/source/main_classes/pipelines.rst Co-authored-by: Philipp Schmid <32632186+philschmid@users.noreply.github.com>
* N -> observed_batch_size softmax trick.
* Follow `padding_side`.
* Supporting image pipeline batching (and padding).
* Rename `unbatch` -> `loader_batch`.
* unbatch_size forgot.
* Custom padding for offset mappings.
* Attempt to remove librosa.
* Adding require_audio.
* torchaudio.
* Back to using datasets librosa.
* Adding help to set a pad_token on the tokenizer.
* Update src/transformers/pipelines/base.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/pipelines/base.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Update src/transformers/pipelines/base.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* Quality.
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Co-authored-by: Philipp Schmid <32632186+philschmid@users.noreply.github.com>
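A hedged sketch of pipeline batching; the checkpoint is illustrative, and batching text inputs is assumed to require a tokenizer with a pad token (the SST-2 checkpoint below already has one).

```python
# Hedged sketch: batch_size groups the incoming examples before each forward pass.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # illustrative checkpoint
)
texts = ["great movie", "terrible plot", "just fine"] * 8
results = classifier(texts, batch_size=8)  # 24 inputs processed in batches of 8
print(results[:3])
```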
-
- 28 Oct, 2021 4 commits
-
-
David del Río Medina authored
-
Patrick von Platen authored
-
Lysandre authored
-
Lysandre authored
-