1. 09 Sep, 2022 2 commits
  2. 08 Sep, 2022 1 commit
    • Add X-CLIP (#18852) · bb6f6d53
      NielsRogge authored
      * First draft
      
      * Improve conversion script
      
      * Make vision encoder work
      
      * More improvements
      
      * Improve conversion script
      
      * Fix quality
      
      * Add MultiframeIntegrationTransformer
      
      * More improvements
      
      * Make MiT output work
      
      * Fix quality
      
      * Add prompts generator
      
      * Add tests
      
      * Fix some tests
      
      * Fix some more tests
      
      * Fix more tests
      
      * Improve conversion script
      
      * Fix model outputs
      
      * Fix more tests
      
      * Add XClipProcessor
      
      * Use processor in conversion script
      
      * Fix integration test
      
      * Update README, fix docs
      
      * Fix all tests
      
      * Add MIT output to XClipOutput
      
      * Create better variable names
      
      * Rename XClip to XCLIP
      
      * Extend conversion script
      
      * Add support for large models
      
      * Add support for 16 frame models
      
      * Add another model
      
      * Fix module issue
      
      * Apply suggestions from code review
      
      * Add figure to docs
      
      * Fix CLIPProcessor issue
      
      * Apply suggestions from code review
      
      * Delete file
      
      * Convert more checkpoints
      
      * Convert last checkpoint
      
      * Update nielsr to microsoft
      bb6f6d53
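
A hedged usage sketch for the X-CLIP entry above: zero-shot video classification with `XCLIPProcessor` and `XCLIPModel`. The `microsoft/xclip-base-patch32` checkpoint name and the random 8-frame clip are illustrative assumptions, not taken from the commit messages.

```python
import numpy as np
import torch
from transformers import XCLIPProcessor, XCLIPModel

# checkpoint name is an assumption (the PR moves the weights to the microsoft org)
processor = XCLIPProcessor.from_pretrained("microsoft/xclip-base-patch32")
model = XCLIPModel.from_pretrained("microsoft/xclip-base-patch32")

# eight random frames stand in for a real video clip
video = list(np.random.randint(0, 256, (8, 224, 224, 3), dtype=np.uint8))
inputs = processor(
    text=["playing guitar", "cooking"], videos=video, return_tensors="pt", padding=True
)

with torch.no_grad():
    outputs = model(**inputs)

# one row per video, one column per text prompt
probs = outputs.logits_per_video.softmax(dim=1)
print(probs)
```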
  3. 07 Sep, 2022 1 commit
    • Add DocumentQuestionAnswering pipeline (#18414) · 2ef77421
      Ankur Goyal authored
      
      
      * [WIP] Skeleton of VisualQuestionAnsweringPipeline extended to support LayoutLM-like models
      
      * Fixup
      
      * Use the full encoding
      
      * Basic refactoring to DocumentQuestionAnsweringPipeline
      
      * Cleanup
      
      * Improve args, docs, and implement preprocessing
      
      * Integrate OCR
      
      * Refactor question_answering pipeline
      
      * Use refactored QA code in the document qa pipeline
      
      * Fix tests
      
      * Some small cleanups
      
      * Use a string type annotation for Image.Image
      
      * Update encoding with image features
      
      * Wire through the basic docs
      
      * Handle invalid response
      
      * Handle empty word_boxes properly
      
      * Docstring fix
      
      * Integrate Donut model
      
      * Fixup
      
      * Incorporate comments
      
      * Address comments
      
      * Initial incorporation of tests
      
      * Address Comments
      
      * Change assert to ValueError
      
      * Comments
      
      * Wrap `score` in float to make it JSON serializable
      
      * Incorporate AutoModelForDocumentQuestionAnswering changes
      
      * Fixup
      
      * Rename postprocess function
      
      * Fix auto import
      
      * Applying comments
      
      * Improve docs
      
      * Remove extra assets and add copyright
      
      * Address comments
      Co-authored-by: Ankur Goyal <ankur@impira.com>
      2ef77421
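
A minimal sketch of the pipeline added above. The `impira/layoutlm-document-qa` model id and `invoice.png` path are examples only, and OCR via pytesseract is assumed to be available when no word boxes are passed.

```python
from transformers import pipeline

# LayoutLM-based checkpoint used as an example; Donut checkpoints also work with this pipeline
doc_qa = pipeline("document-question-answering", model="impira/layoutlm-document-qa")

# the pipeline runs OCR (pytesseract) on the image unless word_boxes are supplied
result = doc_qa(image="invoice.png", question="What is the invoice number?")
print(result)  # e.g. [{"score": 0.97, "answer": "...", "start": ..., "end": ...}]
```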
  4. 02 Sep, 2022 2 commits
  5. 01 Sep, 2022 2 commits
  6. 31 Aug, 2022 1 commit
    • Add LayoutLMForQuestionAnswering model (#18407) · 5c4c8690
      Ankur Goyal authored
      
      
      * Add LayoutLMForQuestionAnswering model
      
      * Fix output
      
      * Remove TF TODOs
      
      * Add test cases
      
      * Add docs
      
      * TF implementation
      
      * Fix PT/TF equivalence
      
      * Fix loss
      
      * make fixup
      
      * Fix up documentation code examples
      
      * Fix up documentation examples + test them
      
      * Remove LayoutLMForQuestionAnswering from the auto mapping
      
      * Docstrings
      
      * Add better docstrings
      
      * Undo whitespace changes
      
      * Update tokenizers in comments
      
      * Fixup code and remove `from_pt=True`
      
      * Fix tests
      
      * Revert some unexpected docstring changes
      
      * Fix tests by overriding _prepare_for_class
      Co-authored-by: Ankur Goyal <ankur@impira.com>
      5c4c8690
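
A forward-pass sketch for the new LayoutLMForQuestionAnswering head. The `microsoft/layoutlm-base-uncased` checkpoint and the all-zero bounding boxes are placeholders; the QA head loaded this way is randomly initialized and needs fine-tuning before its answers mean anything.

```python
import torch
from transformers import LayoutLMTokenizer, LayoutLMForQuestionAnswering

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
# the extractive QA head on top of the base checkpoint starts from random weights
model = LayoutLMForQuestionAnswering.from_pretrained("microsoft/layoutlm-base-uncased")

encoding = tokenizer("what is the total?", "Total: $42.00", return_tensors="pt")
# LayoutLM expects one normalized (0-1000) box per token; zeros act as placeholders here
bbox = torch.zeros(encoding["input_ids"].shape + (4,), dtype=torch.long)

outputs = model(**encoding, bbox=bbox)
start = outputs.start_logits.argmax(-1)
end = outputs.end_logits.argmax(-1)
print(start.item(), end.item())
```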
  7. 30 Aug, 2022 1 commit
  8. 24 Aug, 2022 1 commit
  9. 12 Aug, 2022 2 commits
    • Add Donut (#18488) · 2ab790e8
      NielsRogge authored
      
      
      * First draft
      
      * Improve script
      
      * Update script
      
      * Make conversion work
      
      * Add final_layer_norm attribute to Swin's config
      
      * Add DonutProcessor
      
      * Convert more models
      
      * Improve feature extractor and convert base models
      
      * Fix bug
      
      * Improve integration tests
      
      * Improve integration tests and add model to README
      
      * Add doc test
      
      * Add feature extractor to docs
      
      * Fix integration tests
      
      * Remove register_buffer
      
      * Fix toctree and add missing attribute
      
      * Add DonutSwin
      
      * Make conversion script work
      
      * Improve conversion script
      
      * Address comment
      
      * Fix bug
      
      * Fix another bug
      
      * Remove deprecated method from docs
      
      * Make Swin and Swinv2 untouched
      
      * Fix code examples
      
      * Fix processor
      
      * Update model_type to donut-swin
      
      * Add feature extractor tests, add token2json method, improve feature extractor
      
      * Fix failing tests, remove integration test
      
      * Add do_thumbnail for consistency
      
      * Improve code examples
      
      * Add code example for document parsing
      
      * Add DonutSwin to MODEL_NAMES_MAPPING
      
      * Add model to appropriate place in toctree
      
      * Update namespace to appropriate organization
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      2ab790e8
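
A hedged sketch of Donut-style document VQA with the classes touched in this PR (DonutProcessor with its new token2json helper, plus VisionEncoderDecoderModel). The `naver-clova-ix/donut-base-finetuned-docvqa` checkpoint, the prompt format, and `receipt.png` are illustrative assumptions.

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")

image = Image.open("receipt.png").convert("RGB")  # any local document scan
pixel_values = processor(image, return_tensors="pt").pixel_values

# DocVQA-style task prompt; the question goes between the <s_question> tags
task_prompt = "<s_docvqa><s_question>What is the total?</s_question><s_answer>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)

# token2json (added in this PR) turns the generated token sequence into structured output
print(processor.token2json(processor.batch_decode(outputs)[0]))
```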
  10. 10 Aug, 2022 1 commit
    • `bitsandbytes` - `Linear8bitLt` integration into `transformers` models (#17901) · 4a51075a
      Younes Belkada authored
      
      
      * first commit
      
      * correct replace function
      
      * add final changes
      
      - works like a charm!
      - cannot implement tests yet
      - tested
      
      * clean up a bit
      
      * add bitsandbytes dependencies
      
      * working version
      
      - added import function
      - added bitsandbytes utils file
      
      * small fix
      
      * small fix
      
      - fix import issue
      
      * fix import issues
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * refactor a bit
      
      - move bitsandbytes utils to utils
      - change comments on functions
      
      * reformat docstring
      
      - reformat docstring on init_empty_weights_8bit
      
      * Update src/transformers/__init__.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * revert bad formatting
      
      * change to bitsandbytes
      
      * refactor a bit
      
      - remove init8bit since it is useless
      
      * more refactoring
      
      - fixed init empty weights issue
      - added threshold param
      
      * small hack to make it work
      
      * Update src/transformers/modeling_utils.py
      
      * Update src/transformers/modeling_utils.py
      
      * remove the small hack
      
      * modify utils file
      
      * make style + refactor a bit
      
      * correctly create device map
      
      * add correct dtype for device map creation
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * apply suggestions
      
      - remove with torch.grad
      - do not rely on Python bool magic!
      
      * add docstring
      
       - add docstring for new kwargs
      
      * add docstring
      
      - comment `replace_8bit_linear` function
      - fix weird formatting
      
      * - added more documentation
      - added new utility function for memory footprint tracking
      - colab demo to add
      
      * few modifs
      
      - typo doc
      - force cast into float16 when load_in_8bit is enabled
      
      * added colab link
      
      * add test architecture + docstring a bit
      
      * refactor a bit testing class
      
      * make style + refactor a bit
      
      * enhance checks
      
      - add more checks
      - start writing saving test
      
      * clean up a bit
      
      * make style
      
      * add more details on doc
      
      * add more tests
      
      - still needs to fix 2 tests
      
      * replace by "or"
      
      - could not fix it from GitHub GUI
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * refactor a bit testing code + add readme
      
      * make style
      
      * fix import issue
      
      * Update src/transformers/modeling_utils.py
      Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
      
      * add few comments
      
      * add more doctring + make style
      
      * more docstring
      
      * raise error when loaded in 8bit
      
      * make style
      
      * add warning if loaded on CPU
      
      * add small sanity check
      
      * fix small comment
      
      * add bitsandbytes on dockerfile
      
      * Improve documentation
      
      - improve documentation from comments
      
      * add few comments
      
      * slow tests pass on the VM but not on the CI VM
      
      * Fix merge conflict
      
      * make style
      
      * another test should pass on a multi gpu setup
      
      * fix bad import in testing file
      
      * Fix slow tests
      
      - remove dummy batches
      - no more CUDA illegal memory errors
      
      * Modify dockerfile
      
      * Update docs/source/en/main_classes/model.mdx
      
      * Update Dockerfile
      
      * Update model.mdx
      
      * Update Dockerfile
      
      * Apply suggestions from code review
      
      * few modifications
      
      - lm head can stay on disk/cpu
      - change model name so that test pass
      
      * change test value
      
      - change test value to the correct output
      - torch bmm changed to baddmm in bloom modeling when merging
      
      * modify installation guidelines
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * replace `n` by `name`
      
      * merge `load_in_8bit` and `low_cpu_mem_usage`
      
      * first try - keep the lm head in full precision
      
      * better check
      
      - check the attribute `base_model_prefix` instead of computing the number of parameters
      
      * added more tests
      
      * Update src/transformers/utils/bitsandbytes.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Merge branch 'integration-8bit' of https://github.com/younesbelkada/transformers into integration-8bit
      
      * improve documentation
      
      - fix typos for installation
      - change title in the documentation
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
      4a51075a
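
A minimal sketch of the `load_in_8bit` path this PR wires in. It assumes a CUDA GPU plus `pip install bitsandbytes accelerate`; the BLOOM checkpoint is only an example, and `get_memory_footprint` is the memory-tracking utility the commits mention.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# weights are loaded in int8 via bitsandbytes' Linear8bitLt; device_map="auto" dispatches layers
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-1b7", device_map="auto", load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-1b7")

# memory-footprint utility added alongside the integration
print(f"footprint: {model.get_memory_footprint() / 1024**3:.2f} GiB")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```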
  11. 08 Aug, 2022 1 commit
  12. 04 Aug, 2022 2 commits
    • Add VideoMAE (#17821) · f9a0008d
      NielsRogge authored
      
      
      * First draft
      
      * Add VideoMAEForVideoClassification
      
      * Improve conversion script
      
      * Add VideoMAEForPreTraining
      
      * Add VideoMAEFeatureExtractor
      
      * Improve VideoMAEFeatureExtractor
      
      * Improve docs
      
      * Add first draft of model tests
      
      * Improve VideoMAEForPreTraining
      
      * Fix base_model_prefix
      
      * Make model take pixel_values of shape (B, T, C, H, W)
      
      * Add loss computation of VideoMAEForPreTraining
      
      * Improve tests
      
      * Improve model tests
      
      * Make all tests pass
      
      * Add VideoMAE to main README
      
      * Add tests for VideoMAEFeatureExtractor
      
      * Add integration test
      
      * Improve conversion script
      
      * Rename patch embedding class
      
      * Remove VideoMAELayer from init
      
      * Update design of patch embeddings
      
      * Improve comments
      
      * Improve conversion script
      
      * Improve conversion script
      
      * Add conversion of pretrained model
      
      * Add loss verification of pretrained model
      
      * Add loss verification of unnormalized targets
      
      * Add integration test for pretraining model
      
      * Apply suggestions from code review
      
      * Fix bug to make feature extractor resize only shorter edge
      
      * Address more comments
      
      * Improve normalization of videos
      
      * Add doc examples
      
      * Move constants to dedicated script
      
      * Remove scripts
      
      * Transfer checkpoints, fix docs
      
      * Update script
      
      * Update image mean and std
      
      * Fix doc tests
      
      * Set return_tensors to NumPy by default
      
      * Revert the previous change
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      f9a0008d
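
A usage sketch for the VideoMAE classes added above. The `MCG-NJU/videomae-base-finetuned-kinetics` checkpoint and the random 16-frame clip are assumptions; the point is the (batch, time, channels, height, width) pixel_values layout the commits mention.

```python
import numpy as np
import torch
from transformers import VideoMAEFeatureExtractor, VideoMAEForVideoClassification

ckpt = "MCG-NJU/videomae-base-finetuned-kinetics"  # Kinetics-400 fine-tuned example
extractor = VideoMAEFeatureExtractor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

# sixteen random frames stand in for a real clip
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))
inputs = extractor(video, return_tensors="pt")  # pixel_values: (1, 16, 3, 224, 224)

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```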
  13. 01 Aug, 2022 1 commit
  14. 27 Jul, 2022 2 commits
    • Add swin transformer v2 (#17469) · e87ac9d1
      Ritik Nandwal authored
      
      
      * Add files generated using transformer-cli add-new-model-like command
      
      * Add changes for swinv2 attention and forward method
      
      * Add fixes
      
      * Add modifications for weight conversion and remaining args in swin model
      
      * Add changes for patchmerging
      
      * Add changes for SwinV2selfattention
      
      * Update conversion script
      
      * Add final fixes for the swin_v2 model
      
      * Add changes for conversion script for pretrained window size case
      
      * Add pretrained window size value from config in SwinV2Encoder class
      
      * Make fixup
      
      * Add swinv2 to models_not_in_readme in utils/check_copies.py
      
      * Modify Swinv2 to Swin Transformer V2
      
      * Remove copied from, to run make fixup command
      
      * Add updates to swinv2tf from main branch
      
      * Add pretrained_window_size to config, to make tests pass
      
      * Add modified weights from nandwalritik profile for swinv2
      
      * Update model weights for swinv2 from nandwalritik profile
      
      * Add fix for build_pr_documentation CI fix
      
      * Add fixes for weight conversion
      
      * Add change to make input with padding work
      
      * Add fixes for test cases
      
      * Add few changes from swin to swinv2 to pass test cases
      
      * Remove tests for tensorflow as swinv2 for TF is not added yet
      
      * Override test_pt_tf_model_equivalence function as TF implementation for swinv2 is not added yet
      
      * Add modeling_tf_swinv2 to _ignore_modules as test file is removed for this one right now.
      
      * Update docs url for swinv2 in README.md
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Undo changes for check_repo
      
      * Update url in readme.md
      
      * Remove overridden function to test pt_tf_model_equivalence
      
      * Remove TF model imports for Swinv2 as its not implemented in this PR
      
      * Add changes for index.mdx
      
      * Add swinv2 papers link,abstract and contributors details
      
      * Rename cpb_mlp to continous_position_bias_mlp
      
      * Add tips for swinv2 model
      
      * Update src/transformers/models/swinv2/configuration_swinv2.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/swinv2/configuration_swinv2.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Fix indentation for docstring example in src/transformers/models/swinv2/configuration_swinv2.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update import order in src/transformers/models/swinv2/configuration_swinv2.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Add copyright statements in weights conversion script.
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Remove Swinv2 from models_not_in_readme
      
      * Reformat code
      
      * Remove TF implementation file for swinv2
      
      * Update start docstring.
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Add changes for docstring
      
      * Update orgname for weights to microsoft
      
      * Remove to_2tuple function
      
      * Add copied from statements wherever applicable
      
      * Add copied from to Swinv2ForMaskedImageModelling class
      
      * Reformat code.
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Add unittest.skip(with reason.) for test_inputs_embeds test case.
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Add updates for test_modeling_swinv2.py
      
      * Add @unittest.skip() annotation for clarity to create_and_test_config_common_properties function
      
      * Add continuous_position_bias_mlp parameter to conversion script
      
      * Add test for masked_image_modelling for swinv2
      
      * Update Swinv2 to Swin Transformer v2 in docs/source/en/model_doc/swinv2.mdx
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update Swinv2 to Swin Transformer v2 in docs/source/en/model_doc/swinv2.mdx
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/swinv2.mdx
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/swinv2.mdx
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Add suggested changes
      
      * Add copied from to forward methods of Swinv2Stage and Swinv2Encoder
      
      * Add push_to_hub flag to weight conversion script
      
      * Change order of Swinv2DropPath class
      
      * Add id2label mapping for imagenet 21k
      
      * Add updated url for SwinV2 functions and classes used in implementation
      
      * Update input_feature dimensions format, mentioned in comments.
      Co-authored-by: Alara Dirik <8944735+alaradirik@users.noreply.github.com>
      
      * Add suggested changes for modeling_swinv2.py
      
      * Update docs
      
      * Remove create_and_test_config_common_properties function, as test_model_common_attributes is sufficient.
      
      * Fix indentation.
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Add changes for making Nit objects in code style
      
      * Add suggested changes
      
      * Add suggested changes for test_modelling_swinv2
      
      * make fix-copies
      
      * Update docs/source/en/model_doc/swinv2.mdx
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: Alara Dirik <8944735+alaradirik@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      e87ac9d1
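
An image-classification sketch for the Swin Transformer V2 port above. The `microsoft/swinv2-tiny-patch4-window8-256` checkpoint (the commits note the weights moved to the microsoft org) and `cat.jpg` are examples.

```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, Swinv2ForImageClassification

ckpt = "microsoft/swinv2-tiny-patch4-window8-256"  # example checkpoint
feature_extractor = AutoFeatureExtractor.from_pretrained(ckpt)
model = Swinv2ForImageClassification.from_pretrained(ckpt)

image = Image.open("cat.jpg").convert("RGB")  # any local image
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```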
    • Dev version · c89a592e
      Lysandre authored
      c89a592e
  15. 26 Jul, 2022 1 commit
  16. 22 Jul, 2022 1 commit
    • Add OWL-ViT model for zero-shot object detection (#17938) · 12d66b47
      Alara Dirik authored
      * add owlvit model skeleton
      
      * add class and box predictor heads
      
      * convert modified flax clip to pytorch
      
      * fix box and class predictors
      
      * add OwlViTImageTextEmbedder
      
      * convert class and box head checkpoints
      
      * convert image text embedder checkpoints
      
      * add object detection head
      
      * fix bugs
      
      * update conversion script
      
      * update conversion script
      
      * fix q,v,k,out weight conversion
      
      * add owlvit object detection output
      
      * fix bug in image embedder
      
      * fix bugs in text embedder
      
      * fix positional embeddings
      
      * fix bug in inference mode vision pooling
      
      * update docs, init tokenizer and processor files
      
      * support batch processing
      
      * add OwlViTProcessor
      
      * remove merge conflicts
      
      * readd owlvit imports
      
      * fix bug in OwlViTProcessor imports
      
      * fix bugs in processor
      
      * update docs
      
      * fix bugs in processor
      
      * update owlvit docs
      
      * add OwlViTFeatureExtractor
      
      * style changes, add postprocess method to feature extractor
      
      * add feature extractor and processor tests
      
      * add object detection tests
      
      * update conversion script
      
      * update config paths
      
      * update config paths
      
      * fix configuration paths and bugs
      
      * fix bugs in OwlViT tests
      
      * add import checks to processor
      
      * fix docs and minor issues
      
      * fix docs and minor issues
      
      * fix bugs and issues
      
      * fix bugs and issues
      
      * fix bugs and issues
      
      * fix bugs and issues
      
      * update docs and examples
      
      * fix bugs and issues
      
      * update conversion script, fix positional embeddings
      
      * process 2D input ids, update tests
      
      * fix style and quality issues
      
      * update docs
      
      * update docs and imports
      
      * update OWL-ViT index.md
      
      * fix bug in OwlViT feature ext tests
      
      * fix code examples, return_dict by default
      
      * return_dict by default
      
      * minor fixes, add tests to processor
      
      * small fixes
      
      * add output_attentions arg to main model
      
      * fix bugs
      
      * remove output_hidden_states arg from main model
      
      * update self.config variables
      
      * add option to return last_hidden_states
      
      * fix bug in config variables
      
      * fix copied from statements
      
      * fix small issues and bugs
      
      * fix bugs
      
      * fix bugs, support greyscale images
      
      * run fixup
      
      * update repo name
      
      * merge OwlViTImageTextEmbedder with obj detection head
      
      * fix merge conflict
      
      * fix merge conflict
      
      * make fixup
      
      * fix bugs
      
      * fix bugs
      
      * add additional processor test
      12d66b47
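
A hedged sketch of zero-shot, text-conditioned object detection with the new OWL-ViT classes. The `google/owlvit-base-patch32` checkpoint, the query texts, and `street.jpg` are assumptions; the post-process helper the commits add to the feature extractor can turn the raw outputs into absolute boxes.

```python
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open("street.jpg").convert("RGB")  # any local photo
texts = [["a photo of a dog", "a photo of a bicycle"]]  # one list of queries per image
inputs = processor(text=texts, images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# per-query classification logits and normalized box predictions
print(outputs.logits.shape, outputs.pred_boxes.shape)
```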
  17. 21 Jul, 2022 1 commit
    • [SegFormer] TensorFlow port (#17910) · 561b9a8c
      Sayak Paul authored
      
      
      * add: segformer utils and img. classification.
      
      * add: segmentation layer.
      
      * feat: working implementation of segformer.
      
      * chore: remove unused variable.
      
      * add test, remaining modifications.
      
      * remove: unnecessary files.
      
      * add: rest of the files.
      Co-authored-by: matt <rocketknight1@gmail.com>
      
      * chore: remove ModuleList comment.
      
      * chore: apply make style.
      
      * chore: apply make fixup-copies.
      
      * add  to check_repo.py
      
      * add decode head to IGNORE_NON_TESTED
      
      * chore: run make style.
      
      * chore: PR comments.
      
      * chore: minor changes to model doc.
      
      * tests: reduction across samples.
      
      * add a note on the space.
      
      * sort imports.
      
      * fix: reduction in loss computation.
      
      * chore: align loss function with that of NER.
      
      * chore: correct utils/documentation_tests.txt
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * chore: simplify the interpolation of logits in loss computation.
      
      * chore: return transposed logits when return_dict=False.
      
      * chore: add link to the tf fine-tuning repo.
      
      * address pr comments.
      
      * address niels's comments.
      
      * remove from_pt=True since tf weights are in.
      
      * remove comment from pt model.
      
      * address niels's comments.
      Co-authored-by: matt <rocketknight1@gmail.com>
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      561b9a8c
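
A semantic-segmentation sketch for the TensorFlow SegFormer port. The `nvidia/segformer-b0-finetuned-ade-512-512` checkpoint and `scene.jpg` are examples; as the commits note, TF weights are already on the Hub, so no `from_pt=True` is needed.

```python
import tensorflow as tf
from PIL import Image
from transformers import SegformerFeatureExtractor, TFSegformerForSemanticSegmentation

ckpt = "nvidia/segformer-b0-finetuned-ade-512-512"  # ADE20k example checkpoint
feature_extractor = SegformerFeatureExtractor.from_pretrained(ckpt)
model = TFSegformerForSemanticSegmentation.from_pretrained(ckpt)

image = Image.open("scene.jpg").convert("RGB")  # any local image
inputs = feature_extractor(images=image, return_tensors="tf")

logits = model(**inputs).logits           # (batch, num_labels, height / 4, width / 4)
segmentation = tf.argmax(logits, axis=1)  # per-pixel class ids at quarter resolution
print(segmentation.shape)
```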
  18. 20 Jul, 2022 1 commit
    • Adding OPTForSeqClassification class (#18123) · dcec4c43
      Raghavan authored
      * Adding OPTForSeqClassification class
      
      * Fix import issues
      
      * Add documentation for optforseqclassification
      
      * Remove checkout
      
      * fix failing tests
      
      * fix typo
      
      * Fix code formatting
      
      * Incorporating the PR feedbacks
      
      * Incorporate PR Feedbacks
      
      * Fix failing test and add new test for multi label setup
      
      * Fix formatting issue
      
      * Fix failing tests
      
      * Fix formatting issues
      
      * Fix failing tests
      
      * Fix failing tests
      
      * Fix failing tests
      
      * Fix failing tests
      
      * PR feedback
      dcec4c43
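
A sketch for the new OPT sequence-classification head (the class ships as OPTForSequenceClassification). The `facebook/opt-125m` checkpoint and the two-label setup are assumptions, and the classification head loaded this way is randomly initialized, so it needs fine-tuning.

```python
import torch
from transformers import AutoTokenizer, OPTForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
# head starts from random weights on a base checkpoint; fine-tune before trusting the scores
model = OPTForSequenceClassification.from_pretrained("facebook/opt-125m", num_labels=2)

inputs = tokenizer("This movie was great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))
```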
  19. 18 Jul, 2022 1 commit
  20. 13 Jul, 2022 1 commit
  21. 04 Jul, 2022 1 commit
  22. 29 Jun, 2022 4 commits
  23. 28 Jun, 2022 1 commit
  24. 27 Jun, 2022 1 commit
    • Add a TF in-graph tokenizer for BERT (#17701) · ee0d001d
      Matt authored
      * Add a TF in-graph tokenizer for BERT
      
      * Add from_pretrained
      
      * Add proper truncation, option handling to match other tokenizers
      
      * Add proper imports and guards
      
      * Add test, fix all the bugs exposed by said test
      
      * Fix truncation of paired texts in graph mode, more test updates
      
      * Small fixes, add a (very careful) test for savedmodel
      
      * Add tensorflow-text dependency, make fixup
      
      * Update documentation
      
      * Update documentation
      
      * make fixup
      
      * Slight changes to tests
      
      * Add some docstring examples
      
      * Update tests
      
      * Update tests and add proper lowercasing/normalization
      
      * make fixup
      
      * Add docstring for padding!
      
      * Mark slow tests
      
      * make fixup
      
      * Fall back to BertTokenizerFast if BertTokenizer is unavailable
      
      * Fall back to BertTokenizerFast if BertTokenizer is unavailable
      
      * make fixup
      
      * Properly handle tensorflow-text dummies
      ee0d001d
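
A sketch of the in-graph tokenizer: because tokenization runs as TF ops, the whole text-to-embeddings function can be traced and exported as a SavedModel. It assumes the tensorflow-text package is installed; the `bert-base-uncased` checkpoint is an example.

```python
import tensorflow as tf
from transformers import TFAutoModel, TFBertTokenizer

tokenizer = TFBertTokenizer.from_pretrained("bert-base-uncased")  # needs tensorflow-text
model = TFAutoModel.from_pretrained("bert-base-uncased")

@tf.function
def embed(texts):
    # tokenization happens inside the graph, so no Python preprocessing step at serving time
    tokenized = tokenizer(texts)
    return model(**tokenized).last_hidden_state

print(embed(tf.constant(["Hello world!", "In-graph tokenization"])).shape)
```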
  25. 24 Jun, 2022 1 commit
    • Add CodeGen model (#17443) · d6b6fb99
      rooa authored
      
      
      * Add CodeGen model
      
      * Add missing key and switch order of super()
      
      * Fix torch.ones init with uint8 instead of bool
      
      * Address comments: copy statements and doc
      
      * update tests
      
      * remove old model parallel
      
      * fix batch gen tests
      
      * fix batch gen test
      
      * update test_gpt2_sample_max_time
      
      * fix codegen test and revert gpt2 test change
      
      * Fix incorrect tie_word_embedding value, typo, URL
      
      * Fix model order in README and styling
      
      * Reorder model list alphabetically
      
      * Set tie_word_embedding to False by default
      
      * Apply suggestions from code review
      
      * Better attn mask name & remove attn masked_bias
      
      * add tokenizer for codegen
      
      * quality
      
      * doc tokenizer
      
      * fix-copies
      
      * add CodeGenTokenizer in converter
      
      * make truncation optional
      
      * add test for truncation
      
      * add copyright
      
      * fix-copies
      
      * fix fast tokenizer decode
      
      * Update src/transformers/models/codegen/tokenization_codegen.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * increase vocab_size in tests
      Co-authored-by: patil-suraj <surajp815@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      d6b6fb99
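
A code-completion sketch for the new CodeGen model. The `Salesforce/codegen-350M-mono` checkpoint and the prompt are examples.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Salesforce/codegen-350M-mono"  # one of the smaller Python-only checkpoints
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("def hello_world():", return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```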
  26. 23 Jun, 2022 1 commit
  27. 16 Jun, 2022 1 commit
  28. 14 Jun, 2022 1 commit
  29. 13 Jun, 2022 2 commits
    • Add `LongT5` model (#16792) · a72f1c9f
      Daniel Stancl authored
      
      
      * Initial commit
      
      * Make some fixes
      
      * Make PT model full forward pass
      
      * Drop TF & Flax implementation, fix copies etc
      
      * Add Flax model and update some corresponding stuff
      
      * Drop some TF things
      
      * Update config and flax local attn
      
      * Add encoder_attention_type to config
      
      * .
      
      * Update docs
      
      * Do some cleansing
      
      * Fix some issues -> make style; add some docs
      
      * Fix position_bias + mask addition + Update tests
      
      * Fix repo consistency
      
      * Fix model consistency by removing flax operation over attn_mask
      
      * [WIP] Add PT TGlobal LongT5
      
      * .
      
      * [WIP] Add flax tglobal model
      
      * [WIP] Update flax model to use the right attention type in the encoder
      
      * Fix flax tglobal model forward pass
      
      * Make use of global_relative_attention_bias
      
      * Add test suites for TGlobal model
      
      * Fix minor bugs, clean code
      
      * Fix pt-flax equivalence though not convinced with correctness
      
      * Fix LocalAttn implementation to match the original impl. + update READMEs
      
      * Few updates
      
      * Update: [Flax] improve large model init and loading #16148
      
      * Add ckpt conversion script according to #16853 + handle torch device placement
      
      * Minor updates to conversion script.
      
      * Typo: AutoModelForSeq2SeqLM -> FlaxAutoModelForSeq2SeqLM
      
      * gpu support + dtype fix
      
      * Apply some suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * * Remove (de)parallelize stuff
      * Edit shape comments
      * Update README.md
      * make fix-copies
      
      * Remove caching logic for local & tglobal attention
      
      * Apply another batch of suggestions from code review
      
      * Add missing checkpoints
      * Format converting scripts
      * Drop (de)parallelize links from longT5 mdx
      
      * Fix converting script + revert config file change
      
      * Revert "Remove caching logic for local & tglobal attention"
      
      This reverts commit 2a619828f6ddc3e65bd9bb1725a12b77fa883a46.
      
      * Stash caching logic in Flax model
      
      * Make side relative bias used always
      
      * Drop caching logic in PT model
      
      * Return side bias as it was
      
      * Drop all remaining model parallel logic
      
      * Remove clamp statements
      
      * Move test files to the proper place
      
      * Update docs with new version of hf-doc-builder
      
      * Fix test imports
      
      * Make some minor improvements
      
      * Add missing checkpoints to docs
      * Make TGlobal model compatible with torch.onnx.export
      * Replace some np.ndarray with jnp.ndarray
      
      * Fix TGlobal for ONNX conversion + update docs
      
      * fix _make_global_fixed_block_ids and masked neg value
      
      * update flax model
      
      * style and quality
      
      * fix imports
      
      * remove load_tf_weights_in_longt5 from init and fix copies
      
      * add slow test for TGlobal model
      
      * typo fix
      
      * Drop obsolete is_parallelizable and one warning
      
      * Update __init__ files to fix repo-consistency
      
      * fix pipeline test
      
      * Fix some device placements
      
      * [wip]: Update tests -- need to generate summaries to update expected_summary
      
      * Fix quality
      
      * Update LongT5 model card
      
      * Update (slow) summarization tests
      
      * make style
      
      * rename checkpoints
      
      * finish
      
      * fix flax tests
      Co-authored-by: phungvanduy <pvduy23@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: patil-suraj <surajp815@gmail.com>
      a72f1c9f
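
A long-input sketch for LongT5. The `google/long-t5-tglobal-base` checkpoint is an example and is not fine-tuned for any downstream task, so the generated text only demonstrates the API; the synthetic "document" simply exceeds a normal 512-token window.

```python
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-base")

long_document = "A very long report paragraph. " * 500  # far beyond a 512-token window
inputs = tokenizer(long_document, return_tensors="pt", truncation=True, max_length=4096)

summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```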
    • Add Visual Question Answering (VQA) pipeline (#17286) · 66336dc1
      Sijun He authored
      
      
      * wip
      
      * rebase
      
      * all tests pass
      
      * rebase
      
      * ready for PR
      
      * address comments
      
      * fix styles
      
      * add require_torch to pipeline test
      
      * remove remote image to improve CI consistency
      
      * address comments; fix tf/flax tests
      
      * address comments; fix tf/flax tests
      
      * fix tests; add alias
      
      * repo consistency tests
      
      * Update src/transformers/pipelines/visual_question_answering.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * address comments
      
      * Update src/transformers/pipelines/visual_question_answering.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * merge
      
      * Update src/transformers/models/auto/modeling_auto.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * merge
      Co-authored-by: Sijun He <sijunhe@Sijuns-MacBook-Pro.local>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      66336dc1
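
A minimal sketch of the new pipeline; the `dandelin/vilt-b32-finetuned-vqa` model id and `cats.jpg` are examples (the commits also mention adding a task alias, so a shorter task string is expected to work as well).

```python
from transformers import pipeline

vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")
result = vqa(image="cats.jpg", question="How many cats are there?")
print(result)  # e.g. [{"answer": "2", "score": 0.98}, ...]
```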
  30. 09 Jun, 2022 1 commit