1. 13 Jun, 2023 2 commits
    • Stop storing references to bound methods via tf.function (#24146) · 3bd1fe43
      Matt authored
      * Stop storing references to bound methods in tf.functions
      
      * Remove the gc.collect calls now that we resolved the underlying problem
      
      * Remove the default signature from model.serving entirely, big cleanup
      
      * Remove _prune_signature as self.input_signature can prune itself
      
      * Restore serving docstring
      
      * Update int support test to check the input signature
      
      * Make sure other tests also use model.input_signature and not serving.input_signature
      
      * Restore _prune_signature
      
      * Remove the doctest GC now it's no longer needed
      
      * Correct core tests to use the pruned sig
      
      * order lines correctly in core tests
      
      * Add eager_serving back with a deprecation warning
      3bd1fe43
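      A minimal sketch of the problem this PR fixes, with illustrative names rather than the real transformers internals: a tf.function that wraps a bound method keeps a strong reference to self, so storing it on the instance pins the model in memory. Closing over a weakref instead lets the model be garbage-collected without the gc.collect workarounds.

      ```python
      import weakref

      import tensorflow as tf

      class TinyModel(tf.keras.Model):
          def serving(self, inputs):
              return {"doubled": inputs * 2}

          def make_serving_fn(self):
              # Capture a weak reference and the unbound function, not the
              # bound method, so the tf.function does not keep `self` alive.
              model_ref = weakref.ref(self)
              unbound_serving = type(self).serving

              @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
              def serving_fn(inputs):
                  return unbound_serving(model_ref(), inputs)

              return serving_fn
      ```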
  2. 06 Jun, 2023 1 commit
    • Move TF building to an actual build() method (#23760) · 4a55e478
      Matt authored
      * A fun new PR where I break the entire codebase again
      
      * A fun new PR where I break the entire codebase again
      
      * Handle cross-attention
      
      * Move calls to model(model.dummy_inputs) to the new build() method
      
      * Seeing what fails with the build context thing
      
      * make fix-copies
      
      * Let's see what fails with new build methods
      
      * Fix the pytorch crossload build calls
      
      * Fix the overridden build methods in vision_text_dual_encoder
      
      * Make sure all our build methods set self.built or call super().build(), which also sets it
      
      * make fix-copies
      
      * Remove finished TODO
      
      * Tentatively remove unneeded (?) line
      
      * Transpose b in deberta correctly and remove unused threading local
      
      * Get rid of build_with_dummies and all it stands for
      
      * Rollback some changes to TF-PT crossloading
      
      * Correctly call super().build()
      4a55e478
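      A hedged sketch of the pattern this PR moves to (an illustrative layer, not the real transformers code): weights are created in an explicit build() method rather than by calling the model on dummy inputs, and every build() either sets self.built or calls super().build(), which sets it.

      ```python
      import tensorflow as tf

      class SketchMainLayer(tf.keras.layers.Layer):
          def __init__(self, hidden_size=64, **kwargs):
              super().__init__(**kwargs)
              self.hidden_size = hidden_size  # no weight creation here

          def build(self, input_shape=None):
              if self.built:
                  return  # guard against building twice
              self.kernel = self.add_weight(
                  name="kernel", shape=(self.hidden_size, self.hidden_size)
              )
              super().build(input_shape)  # sets self.built = True

          def call(self, hidden_states):
              return tf.matmul(hidden_states, self.kernel)
      ```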
  3. 24 May, 2023 2 commits
    • Overhaul TF serving signatures + dummy inputs (#23234) · 814de8fa
      Matt authored
      * Let's try autodetecting serving sigs
      
      * Don't clobber existing sigs
      
      * Change shapes for multiplechoice models
      
      * Make default dummy inputs smarter too
      
      * Fix missing f-string
      
      * Let's YOLO a serving output too
      
      * Read __class__.__name__ properly
      
      * Don't just pass naked lists in there and expect it to be okay
      
      * Code cleanup
      
      * Update default serving sig
      
      * Clearer error messages
      
      * Further updates to the default serving output
      
      * make fixup
      
      * Update the serving output a bit more
      
      * Cleanups and renames, raise errors appropriately when we can't infer inputs
      
      * More renames
      
      * we're building in a functional context again, yolo
      
      * import DUMMY_INPUTS from the right place
      
      * import DUMMY_INPUTS from the right place
      
      * Support cross-attention in the dummies
      
      * Support cross-attention in the dummies
      
      * Complete removal of dummy/serving overrides in BERT
      
      * Complete removal of dummy/serving overrides in RoBERTa
      
      * Obliterate lots and lots of serving sig and dummy overrides
      
      * merge type hint changes
      
      * Fix for token_type_ids with vocab_size 1
      
      * Add missing property decorator
      
      * Fix T5 and hopefully some models that take conv inputs
      
      * More signature pruning
      
      * Fix T5's signature
      
      * Fix Wav2Vec2 signature
      
      * Fix LongformerForMultipleChoice input signature
      
      * Fix BLIP and LED
      
      * Better default serving output error handling
      
      * Fix BART dummies
      
      * Fix dummies for cross-attention, esp encoder-decoder models
      
      * Fix visionencoderdecoder signature
      
      * Fix BLIP serving output
      
      * Small tweak to BART dummies
      
      * Cleanup the ugly parameter inspection line that I used in a few places
      
      * committed a breakpoint again
      
      * Move the text_dims check
      
      * Remove blip_text serving_output
      
      * Add decoder_input_ids to the default input sig
      
      * Remove all the manual overrides for encoder-decoder model signatures
      
      * Tweak longformer/led input sigs
      
      * Tweak default serving output
      
      * output.keys() -> output
      
      * make fixup
      814de8fa
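      A sketch of what an autodetected serving signature amounts to (the shapes, names, and dtypes below are written out by hand for illustration): a dict of tf.TensorSpec entries inferred from the model's call() arguments, consumed by a tf.function serving entry point.

      ```python
      import tensorflow as tf

      input_signature = [{
          "input_ids": tf.TensorSpec([None, None], tf.int32, name="input_ids"),
          "attention_mask": tf.TensorSpec([None, None], tf.int32, name="attention_mask"),
          "decoder_input_ids": tf.TensorSpec([None, None], tf.int32, name="decoder_input_ids"),
      }]

      @tf.function(input_signature=input_signature)
      def serving(inputs):
          # The real serving forwards to the model and prunes non-tensor
          # outputs; echoing shapes keeps this sketch self-contained.
          return {name: tf.shape(tensor) for name, tensor in inputs.items()}
      ```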
    • Better TF docstring types (#23477) · f8b25744
      Matt authored
      * Rework TF type hints to use | None instead of Optional[] for tf.Tensor
      
      * Rework TF type hints to use | None instead of Optional[] for tf.Tensor
      
      * Don't forget the imports
      
      * Add the imports to tests too
      
      * make fixup
      
      * Refactor tests that depended on get_type_hints
      
      * Better test refactor
      
      * Fix an old hidden bug in the test_keras_fit input creation code
      
      * Fix for the Deit tests
      f8b25744
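      The annotation style this PR standardizes on, sketched with an illustrative function: PEP 604 `X | None` unions, which rely on `from __future__ import annotations` to remain valid on the older Python versions supported at the time.

      ```python
      from __future__ import annotations

      import numpy as np
      import tensorflow as tf

      def call(
          input_ids: tf.Tensor | np.ndarray | None = None,
          attention_mask: tf.Tensor | np.ndarray | None = None,
          training: bool | None = False,
      ) -> tf.Tensor | None:
          ...
      ```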
  4. 28 Apr, 2023 1 commit
  5. 24 Apr, 2023 1 commit
  6. 04 Apr, 2023 2 commits
  7. 09 Mar, 2023 1 commit
  8. 07 Mar, 2023 1 commit
  9. 28 Feb, 2023 1 commit
    • Improve TF weight loading, especially PT crossloading (#21792) · acfb714b
      Matt authored
      * First commit for the improved PT-TF weight loading
      
      * Remove workarounds from TFEncoderDecoder tests
      
      * Allow a custom weight renaming function in from_pretrained and use that to clean up EncoderDecoder
      
      * make fixup
      
      * First attempt at visionencoderdecoder
      
      * Disable tensorfloat32 in tests to get consistent outputs
      
      * Quick fix to tf_vision_encoder_decoder tests
      
      * make fixup
      
      * Update Blenderbot tests
      
      * Remove unused arg in modeling_tf_opt
      
      * load_tf_sharded_weights had strict=True! This meant transfer learning was impossible, so I'm setting it to False.
      
      * Support prefixes when loading sharded TF checkpoints
      
      * make fixup
      
      * Add test to load sharded models with a weight prefix
      
      * Fix sharded weight loading test
      
      * Add a test for transfer from a sharded checkpoint
      
      * make fixup
      
      * Add test to check that crossloading from PT with a prefix works
      
      * Refactor from_pretrained in the encoderdecoder classes
      
      * Refactor from_pretrained in the encoderdecoder classes
      
      * missmatched -> mismatched
      
      * Explicitly check for None
      
      * No comments showing my very impressive and attractive knowledge of Py3.9+
      
      * Disable TF32 across all TF tests
      acfb714b
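      A minimal pure-Python stand-in for the strict=False behavior called out in the load_tf_sharded_weights bullet above (not the real implementation): missing and unexpected keys are collected and reported rather than raised on, so a checkpoint lacking a new head's weights can still be loaded for transfer learning.

      ```python
      def load_weights_lenient(model_weights: dict, checkpoint: dict, strict: bool = False):
          missing = sorted(k for k in model_weights if k not in checkpoint)
          unexpected = sorted(k for k in checkpoint if k not in model_weights)
          if strict and (missing or unexpected):
              raise ValueError(f"Missing keys: {missing}, unexpected keys: {unexpected}")
          for name in model_weights.keys() & checkpoint.keys():
              model_weights[name] = checkpoint[name]  # copy matching weights only
          return missing, unexpected
      ```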
  10. 22 Feb, 2023 1 commit
  11. 06 Feb, 2023 1 commit
    • Update quality tooling for formatting (#21480) · 6f79d264
      Sylvain Gugger authored
      * Result of black 23.1
      
      * Update target to Python 3.7
      
      * Switch flake8 to ruff
      
      * Configure isort
      
      * Configure isort
      
      * Apply isort with line limit
      
      * Put the right black version
      
      * adapt black in check copies
      
      * Fix copies
      6f79d264
  12. 31 Jan, 2023 1 commit
  13. 23 Jan, 2023 1 commit
  14. 18 Jan, 2023 1 commit
  15. 04 Jan, 2023 1 commit
  16. 14 Dec, 2022 1 commit
  17. 05 Dec, 2022 1 commit
  18. 28 Nov, 2022 1 commit
    • More TF int dtype fixes (#20384) · de4159a3
      Matt authored
      * Add a test to ensure int dummy inputs are int64
      
      * Move the test into the existing int64 test and update a lot of existing dummies
      
      * Fix remaining dummies
      
      * Fix remaining dummies
      
      * Test for int64 serving sigs as well
      
      * Update core tests to use tf.int64
      
      * Add better messages to the assertions
      
      * Update all serving sigs to int64
      
      * More sneakily hidden tf.int32s
      
      * Add an optional int32 signature in save_pretrained
      
      * make fixup
      
      * Add Amy's suggestions
      
      * Switch all serving sigs back to tf.int32
      
      * Switch all dummies to tf.int32
      
      * Adjust tests to check for tf.int32 instead of tf.int64
      
      * Fix base dummy_inputs dtype
      
      * Start casting to tf.int32 in input_processing
      
      * Change dtype for unpack_inputs test
      
      * Add proper tf.int32 test
      
      * Make the alternate serving signature int64
      de4159a3
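      A sketch of the convention this PR lands on, as an assumption-flagged stand-in for the input_processing step: incoming integer tensors are normalized to tf.int32 so user inputs, dummies, and the default serving signatures agree.

      ```python
      import tensorflow as tf

      def cast_integer_inputs(inputs: dict) -> dict:
          # Normalize integer tensors to tf.int32, the dtype used by the
          # dummies and default serving signatures after this change.
          return {
              name: tf.cast(t, tf.int32) if t.dtype.is_integer else t
              for name, t in inputs.items()
          }
      ```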
  19. 22 Nov, 2022 1 commit
  20. 17 Nov, 2022 2 commits
  21. 15 Nov, 2022 1 commit
    • Slightly alter Keras dummy loss (#20232) · 26ec7928
      Matt authored
      * Slightly alter Keras dummy loss
      
      * Slightly alter Keras dummy loss
      
      * Add sample weight to test_keras_fit
      
      * Fix test_keras_fit for datasets
      
      * Skip the sample_weight stuff for models where the model tester has no batch_size
      26ec7928
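      A hedged sketch of the "dummy loss" idea (the exact reduction the PR settles on is an assumption here): these models compute their loss inside call(), so the Keras-level loss function only has to pass that precomputed value through, collapsing any extra dimensions so Keras can aggregate it.

      ```python
      import tensorflow as tf

      def dummy_loss(y_true, y_pred):
          # y_pred is already the loss computed inside the model; collapse any
          # trailing dimensions so Keras can aggregate it per sample.
          if y_pred.shape.rank <= 1:
              return y_pred
          return tf.reduce_mean(y_pred, axis=list(range(1, y_pred.shape.rank)))
      ```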
  22. 09 Nov, 2022 1 commit
  23. 07 Nov, 2022 1 commit
  24. 27 Oct, 2022 1 commit
  25. 18 Oct, 2022 1 commit
    • Clean up deprecation warnings (#19654) · a23819ed
      David Yang authored
      * Clean up deprecation warnings
      
      Notes:
      Changed some strings in tests to raw strings, which changes the literal content of the strings as they are consumed downstream.
      Test cases for past in the past/past_key_values switch were changed or removed because of the impending-removal warning.
      
      * Add PILImageResampling abstraction for PIL.Image.Resampling
      a23819ed
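      A sketch of the PILImageResampling abstraction this commit adds (the attribute check stands in for the real version gate): newer Pillow releases moved the resampling constants under PIL.Image.Resampling and deprecated the module-level names, so a shim keeps both spellings working.

      ```python
      import PIL.Image

      # Pillow >= 9.1 exposes PIL.Image.Resampling; older versions keep the
      # constants (BILINEAR, BICUBIC, ...) directly on PIL.Image.
      if hasattr(PIL.Image, "Resampling"):
          PILImageResampling = PIL.Image.Resampling
      else:
          PILImageResampling = PIL.Image

      # Usage: image.resize(size, resample=PILImageResampling.BILINEAR)
      ```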
  26. 14 Oct, 2022 1 commit
  27. 11 Oct, 2022 1 commit
  28. 10 Oct, 2022 1 commit
    • Add TF whisper (#19378) · e3f028f3
      amyeroberts authored
      
      
      * simplify loop
      
      * add feature extractor
      
      * add model
      
      * start conversion
      
      * add dropout
      
      * initial commit of test files
      
      * conversion for all models
      
      * update processor for correct padding
      
      * update feature extraction
      
      * update integration test logits match
      
      * fmt: off for the logits
      
      * on the fly mel bank
      
      * small nit
      
      * update test
      
      * update tokenizer
      
      * nit feature extraction
      
      * update
      
      * update tokenizer test
      
      * add logits processor and update tokenizer to get suppress tokens
      
      * style
      
      * clean convert
      
      * revert to original modeling tf utils
      
      * Update
      
      * update
      
      * nit
      
      * clean convert file
      
      * update tests and nits
      
      * quality
      
      * slow generation test
      
      * ffn_dim to allow customization
      
      * update readme
      
      * add to toctree
      
      * start fixing integration tests
      
      * update tests and code
      
      * fix feature extractor
      
      * fix config tests common
      
      * update code to fix tests
      
      * fix feature extractor
      
      * nit feature extraction
      
      * update test for new feature extractor
      
      * style
      
      * add abstract
      
      * large logits with custom decoder input ids
      
      * wrap around is_torch_available
      
      * fix feature extractor
      
      * correct logits for whisper small.en
      
      * nit
      
      * fix encoder_attention_mask
      
      * some fixes
      
      * remove unnecessary inputs
      
      * nits
      
      * add normalizer file
      
      * update test tokenization
      
      * fix attention mask not defined
      
      * fix generate
      
      * remove useless encoder attention mask
      
      * update test modeling whisper
      
      * update config to add a second set of non-suppress tokens
      
      * nits on feature extractor
      
      * nit for test tokenizers
      
      * update tests
      
      * update tests
      
      * update tokenization test
      
      * fixup
      
      * invalidated hf token. Clean convert openai to whisper
      
      * fix logit tests
      
      * fixup
      
      * Add model to README
      
      * Fix doc tests
      
      * clean merge
      
      * revert toc_tree changes
      
      * remove useless LogitProcessor
      
      * Update whisper .mdx
      
      * update config file doc
      
      * update configuration docstring
      
      * update test tokenization
      
      * update test tokenization
      
      * update tokenization whisper
      Added copied from where needed
      
      * update feature extraction
      
      * nit test name
      
      * style
      
      * quality
      
      * remove get suppress tokens and update non_speech tokens global variables
      
      * Update src/transformers/models/whisper/feature_extraction_whisper.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * clean modeling whisper and test
      Removed the attention mask arguments that are deprecated
      
      * fix large test
      
      * Add multilingual audio test, and translate test
      
      * style
      
      * fix large multilingual test
      
      * nits
      
      * add copied from for attention layer
      
      * remove attention masks in doc
      
      * add english normalizer
      
      * Update docs/source/en/model_doc/whisper.mdx
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * update tokenization test
      
      * remove copied from in whisper attention: no bias in k_proj only
      
      * wrap around dependencies in english normalizer
      
      * style
      
      * correct import generation logits
      
      * for now, wrap feature extractor with torch
      
      * remove torch depencies for feature extraction and style
      
      * Update src/transformers/models/whisper/convert_openai_whisper_to_tfms.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/whisper/configuration_whisper.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/whisper.mdx
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * fixup
      
      * nit
      
      * update logits
      
      * style
      
      * nit
      
      * nits and fix final tests
      
      * add `is_more_itertools_available` to utils
      
      * quality
      
      * add begin suppress tokens, suppress tokens to generate args and config
      
      * clean suppressTokensLogitProcessor in generation logits
      
      * Nit naming
      
      * add suppressTokensAtBegin
      
      * update tests, suppress tokens to None or correct values
      
      * nit and style
      
      * update RAG to fit test and generate_logit
      
      * add copy-pasted statement on english normalizer
      
      * add arguments to config_common_kwargs
      
      * Update src/transformers/generation_utils.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/generation_logits_process.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * revert changes based on reviews
      
      * update doc and nits
      
      * Update src/transformers/models/whisper/configuration_whisper.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * more nits
      
      * last nits
      
      * update test configuration common
      
      * add BART name in decoder attention mask documentation
      
      * Update src/transformers/models/whisper/modeling_whisper.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * style
      
      * nit
      
      * nit
      
      * add english.json file to git
      
      * nits on documentation
      
      * nit
      
      * nits
      
      * last styling
      
      * add main toctree file
      
      * remove sentence piece dependency
      
      * clean init file
      
      * fix tokenizer that has no dependencies on sentencepiece
      
      * update whisper init file, nit
      
      * remove english.json file
      
      * add get decoder prompt id
      
      * All weights loading
      
      * Remove hanging pdb
      
      * Fixup and tidy up
      
      * Use same copied from as PT model
      
      * Remove whitespace changes
      
      * Remove torch references
      
      * Tie embeddings
      
      * Remove logits processor input to generate
      
      * Update logit values
      
      * revert changes and add forced logit processor
      
      * nit
      
      * clean normalizer
      
      * remove protected
      
      * Add logit processors and update generation code & tests
      
      * Some tidy up
      
      * Update docstring
      
      * update
      
      * update based on review
      
      * Update src/transformers/models/whisper/configuration_whisper.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/whisper/configuration_whisper.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update to reflect changes on the PT model branch
      
      * Tidy up
      
      * Remove extra whitespace
      
      * Fix test - make input ids small enough we can append
      
      * Include upstream changes on main
      
      * PR comments - add batch tests, remove comments & defaults
      
      * Fix model output imports
      
      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Update src/transformers/generation_tf_logits_process.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Update tests/models/whisper/test_modeling_tf_whisper.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Update docstring example
      
      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * Remove changes to adjust_logits_during_generation function
      
      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Tidy up imports that don't require TF
      
      * Update tests - skip and no more skip
      
      * Update tests/generation/test_generation_tf_logits_process.py
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      
      * Update src/transformers/models/whisper/modeling_tf_whisper.py
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * Add training flags
      
      * Add (skipped) XLA generation tests
      
      * Add embedding correctness test
      
      * Add constant ids for generation tests
      
      * Make logits finding a bit tidier
      
      * Remove unused args
      
      * xla generation enabled
      
      * Don't skip XLA tests anymore
      
      * Fix tests - add position ids to expected signature and update rag generation
      
      * Undo method reorder
      
      * Remove added whitespace
      
      * Remove copy-paste gradient checkpoint ref
      
      * Remove
      
      * Trigger CI - (issue with refs when pulling)
      Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: NielsRogge <niels.rogge1@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      Co-authored-by: Joao Gante <joao@huggingface.co>
      e3f028f3
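      A hedged sketch of the "suppress tokens" logits processing this PR wires into generation (a bare function, not the real TF processor class): scores for the listed token ids are forced to the dtype minimum so those tokens can never be sampled.

      ```python
      import tensorflow as tf

      def suppress_tokens(scores: tf.Tensor, suppress_ids: list) -> tf.Tensor:
          ids = tf.constant(suppress_ids, dtype=tf.int32)
          # Boolean mask over the vocab: True where a token must be suppressed.
          mask = tf.reduce_any(
              tf.equal(tf.range(scores.shape[-1])[None, :], ids[:, None]), axis=0
          )
          return tf.where(mask[None, :], scores.dtype.min, scores)
      ```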
  29. 29 Sep, 2022 1 commit
    • [TensorFlow] Adding GroupViT (#18020) · 0dc7b3a7
      Aritra Roy Gosthipaty authored
      
      
      * chore: initial commit
      
      * chore: adding util methods
      
      yet to work on the nn.functional.interpolate port with align_corners=True
      
      * chore: refactor the utils
      
      * used tf.compat.v1.image.resize to align the F.interpolate function
      * added type hints to the method signatures
      * added references to the gists where one-to-one alignment of torch and tf has been shown
      
      * chore: adding the layers
      
      * chore: porting all the layers from torch to tf
      
      This is the initial draft, nothing is tested yet.
      
      * chore: aligning the layers with reference to tf clip
      
      * chore: aligning the modules
      
      * added demarcation comments
      * added copied and adapted from comments
      
      * chore: aligning with CLIP
      
      * chore: wrangling the layers to keep it tf compatible
      
      * chore: aligning the names of the layers for porting
      
      * chore: style changes
      
      * chore: adding docs and inits
      
      * chore: adding tfp dependencies
      
      the code is taken from TAPAS
      
      * chore: initial commit for testing
      
      * chore: aligning the vision embeddings with the vit implementation
      
      * chore: changing model prefix
      
      * chore: fixing the name of the model and the layer normalization test case
      
      * chore: every test passes but the slow ones
      
      * chore: fix style and integration test
      
      * chore: moving comments below decorators
      
      * chore: make fixup and fix-copies changes
      
      * chore: adding the Vision and Text Model to check_repo
      
      * chore: modifying the prefix name to align it with the torch implementation
      
      * chore: fix typo in configuration
      
      * chore: changing the name of the model variable
      
      * chore: adding segmentation flag
      
      * chore: gante's review
      
      * chore: style refactor
      
      * chore: amy review
      
      * chore: adding shape_list to parts that have been copied from other snippets
      
      * chore: init batchnorm with torch defaults
      
      * chore: adding shape_list to pass the tests
      
      * test fix: adding seed as 0
      
      * set seed
      
      * chore: changing the straight through trick to fix -ve dimensions
      
      * chore: adding a dimension to the loss
      
      * chore: adding reviewers and contributors names to the docs
      
      * chore: added changes after review
      
      * chore: code quality fixup
      
      * chore: fixing the segmentation snippet
      
      * chore: adding  to the layer calls
      
      * chore: changing int32 to int64 for inputs of serving
      
      * chore: review changes
      
      * chore: style changes
      
      * chore: remove from_pt=True
      
      * fix: repo consistency
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      0dc7b3a7
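      The "straight through trick" mentioned above, sketched in isolation (how GroupViT applies it exactly is not shown): the forward pass uses the hard one-hot assignment while gradients flow through the soft attention map.

      ```python
      import tensorflow as tf

      def straight_through(soft_assignment: tf.Tensor) -> tf.Tensor:
          hard = tf.one_hot(
              tf.argmax(soft_assignment, axis=-1),
              depth=soft_assignment.shape[-1],
              dtype=soft_assignment.dtype,
          )
          # Forward value equals `hard`; the gradient is that of the soft map.
          return soft_assignment + tf.stop_gradient(hard - soft_assignment)
      ```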
  30. 16 Sep, 2022 2 commits
  31. 15 Sep, 2022 1 commit
    • Update serving signatures and make sure we actually use them (#19034) · 2322eb8e
      Matt authored
      * Override save() to use the serving signature as the default
      
      * Replace int32 with int64 in all our serving signatures
      
      * Remember one very important line so as not to break every test at once
      
      * Dtype fix for TFLED
      
      * dtype fix for shift_tokens_right in general
      
      * Dtype fixes in mBART and RAG
      
      * Fix dtypes for test_unpack_inputs
      
      * More dtype fixes
      
      * Yet more mBART + RAG dtype fixes
      
      * Yet more mBART + RAG dtype fixes
      
      * Add a check that the model actually has a serving method
      2322eb8e
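      A sketch of the save() override described in the first bullet (an illustrative subclass, not the transformers implementation): the serving tf.function becomes the SavedModel's default signature instead of whatever Keras would trace on its own.

      ```python
      import tensorflow as tf

      class SketchModel(tf.keras.Model):
          @tf.function(
              input_signature=[tf.TensorSpec([None, None], tf.int32, name="input_ids")]
          )
          def serving(self, input_ids):
              return {"logits": tf.cast(input_ids, tf.float32)}

          def save(self, filepath, **kwargs):
              # Export the serving signature by default, unless the caller
              # passes their own signatures.
              kwargs.setdefault("signatures", self.serving)
              super().save(filepath, **kwargs)
      ```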
  32. 10 Sep, 2022 1 commit
  33. 09 Sep, 2022 1 commit
    • Fix train_step, test_step and tests for CLIP (#18684) · 660e0b97
      Matt authored
      
      
      * Fix train_step and test_step, correctly enable CLIP fit test
      
      * Stop using get_args on older Python versions
      
      * Don't use get_origin either
      
      * UnionType is actually even newer, don't use that either
      
      * Apply the same fix to test_loss_computation
      
      * Just realized I was accidentally skipping a bunch of tests!
      
      * Fix test_loss_computation for models without separable labels
      
      * Fix scalar losses in test_step and train_step
      
      * Stop committing your breakpoints
      
      * Fix Swin loss shape
      
      * Fix Tapas loss shape
      
      * Shape fixes for TAPAS, DeIT, HuBERT and ViTMAE
      
      * Add loss computation to TFMobileBertForPreTraining
      
      * make fixup and move copied from statement
      
      * make fixup and move copied from statement
      
      * Correct copied from
      
      * Add labels and next_sentence_label inputs to TFMobileBERT
      
      * Make sure total_loss is always defined
      
      * Update tests/test_modeling_tf_common.py
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Fix copied from
      
      * Ensure CTC models get labels in tests
      
      * Ensure CTC models get labels in tests
      
      * Fix tests for vit_mae
      
      * Fix tests for vit_mae
      
      * Fix tests for vit_mae
      
      * Reduce batch size for wav2vec2 testing because it was causing OOM
      
      * Skip some TAPAS tests that are failing
      
      * Skip a failing HuBERT test
      
      * make style
      
      * Fix mobilebertforpretraining test
      
      * Skip Wav2Vec2 tests that use huge amounts of mem
      
      * Skip keras_fit for Wav2Vec2 as well
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      660e0b97
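      The Python-version issue behind the get_args/get_origin bullets above, sketched with an illustrative helper: typing.get_args only exists on Python 3.8+, so older interpreters need a fallback to the raw dunder attribute.

      ```python
      import sys
      import typing

      def get_union_args(annotation):
          if sys.version_info >= (3, 8):
              return typing.get_args(annotation)
          # Python 3.7 fallback: read __args__ directly.
          return getattr(annotation, "__args__", ())
      ```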
  34. 07 Sep, 2022 1 commit
    • Add DocumentQuestionAnswering pipeline (#18414) · 2ef77421
      Ankur Goyal authored
      
      
      * [WIP] Skeleton of VisualQuestionAnweringPipeline extended to support LayoutLM-like models
      
      * Fixup
      
      * Use the full encoding
      
      * Basic refactoring to DocumentQuestionAnsweringPipeline
      
      * Cleanup
      
      * Improve args, docs, and implement preprocessing
      
      * Integrate OCR
      
      * Refactor question_answering pipeline
      
      * Use refactored QA code in the document qa pipeline
      
      * Fix tests
      
      * Some small cleanups
      
      * Use a string type annotation for Image.Image
      
      * Update encoding with image features
      
      * Wire through the basic docs
      
      * Handle invalid response
      
      * Handle empty word_boxes properly
      
      * Docstring fix
      
      * Integrate Donut model
      
      * Fixup
      
      * Incorporate comments
      
      * Address comments
      
      * Initial incorporation of tests
      
      * Address Comments
      
      * Change assert to ValueError
      
      * Comments
      
      * Wrap `score` in float to make it JSON serializable
      
      * Incorporate AutoModelForDocumentQuestionAnswering changes
      
      * Fixup
      
      * Rename postprocess function
      
      * Fix auto import
      
      * Applying comments
      
      * Improve docs
      
      * Remove extra assets and add copyright
      
      * Address comments
      Co-authored-by: Ankur Goyal <ankur@impira.com>
      2ef77421
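      A hedged usage sketch of the new pipeline (the checkpoint name and file path are illustrative):

      ```python
      from transformers import pipeline

      doc_qa = pipeline("document-question-answering", model="impira/layoutlm-document-qa")
      result = doc_qa(image="invoice.png", question="What is the invoice number?")
      print(result)  # e.g. [{"score": ..., "answer": ..., "start": ..., "end": ...}]
      ```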
  35. 02 Sep, 2022 1 commit