1. 27 Jul, 2023 1 commit
    • Add bloom flax (#25094) · e9310363
      Sanchit Gandhi authored
      
      
      * First commit
      
      * step 1 working
      
      * add alibi
      
      * placeholder for `scan`
      
      * add matrix mult alibi
      
      * beta scaling factor for bmm
      
      * working v1 - simple forward pass
      
      * move layer_number from attribute to arg in call
      
      * partial functioning scan
      
      * hacky working scan
      
      * add more modifs
      
      * add test
      
      * update scan for new kwarg order
      
      * fix position_ids problem
      
      * fix bug in attention layer
      
      * small fix
      
      - do the alibi broadcasting only once
      
      * prelim refactor
      
      * finish refactor
      
      * alibi shifting
      
      * incorporate dropout_add to attention module
      
      * make style
      
      * make padding work again
      
      * update
      
      * remove bogus file
      
      * up
      
      * get generation to work
      
      * clean code a bit
      
      * added small tests
      
      * adding alibi test
      
      * make CI tests pass:
      
      - change init weight
      - add correct tuple for output attention
      - add scan test
      - make CI tests work
      
      * fix few nits
      
      * fix nit onnx
      
      * fix onnx nit
      
      * add missing dtype args to nn.Modules
      
      * remove debugging statements
      
      * fix scan generate
      
      * Update modeling_flax_bloom.py
      
      * Update test_modeling_flax_bloom.py
      
      * Update test_modeling_flax_bloom.py
      
      * Update test_modeling_flax_bloom.py
      
      * fix small test issue + make style
      
      * clean up
      
      * Update tests/models/bloom/test_modeling_flax_bloom.py
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * fix function name
      
      * small fix test
      
      * forward contrib credits from PR17761
      
      * Fix failing test
      
      * fix small typo documentation
      
      * fix non passing test
      
      - remove device from build alibi
      
      * refactor call
      
      - refactor `FlaxBloomBlockCollection` module
      
      * make style
      
      * upcast to fp32
      
      * cleaner way to upcast
      
      * remove unused args
      
      * remove layer number
      
      * fix scan test
      
      * make style
      
      * fix i4 casting
      
      * fix slow test
      
      * Update src/transformers/models/bloom/modeling_flax_bloom.py
      Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * remove `layer_past`
      
      * refactor a bit
      
      * fix `scan` slow test
      
      * remove useless import
      
      * major changes
      
      - remove unused code
      - refactor a bit
      - revert import `torch`
      
      * major refactoring
      
      - change build alibi
      
      * remove scan
      
      * fix tests
      
      * make style
      
      * clean-up alibi
      
      * add integration tests
      
      * up
      
      * fix batch norm conversion
      
      * style
      
      * style
      
      * update pt-fx cross tests
      
      * update copyright
      
      * Update src/transformers/modeling_flax_pytorch_utils.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * per-weight check
      
      * style
      
      * line formats
      
      ---------
      Co-authored-by: younesbelkada <younesbelkada@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
      Co-authored-by: haileyschoelkopf <haileyschoelkopf@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      e9310363
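      The new Flax port is used like any other Flax causal LM in the library. A minimal sketch (the bigscience/bloom-560m checkpoint and the availability of Flax weights there are assumptions; pass from_pt=True to convert from PyTorch weights if needed):

          from transformers import AutoTokenizer, FlaxBloomForCausalLM

          tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
          # assumed checkpoint; add from_pt=True if only PyTorch weights are hosted
          model = FlaxBloomForCausalLM.from_pretrained("bigscience/bloom-560m")

          inputs = tokenizer("BLOOM in Flax is", return_tensors="np")
          # Flax generate returns an output object whose .sequences holds the token ids
          out = model.generate(inputs["input_ids"], max_new_tokens=20)
          print(tokenizer.batch_decode(out.sequences, skip_special_tokens=True))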
  2. 03 Jul, 2023 1 commit
    • [`Umt5`] Add google's umt5 to `transformers` (#24477) · 799df10a
      Arthur authored
      
      
      * add tokenization template
      
      * update conversion script
      
      * update modeling code
      
      * update
      
      * update convert checkpoint
      
      * update modeling
      
      * revert changes on convert script
      
      * new conversion script for new format
      
      * correct position bias
      
      * cleaning a bit
      
      * Credit co authors
      Co-authored-by: agemagician <ahmed.elnaggar@tum.de>
      
      Co-authored-by: stefan-it <>
      
      * styling
      
      * Add docs
      
      * fix copies
      
      * add co author
      
      * Other Author
      
      * Merge branch 'main' of https://github.com/huggingface/transformers into add-umt5
      
      * add testing
      
      * nit
      
      * Update docs/source/en/model_doc/umt5.mdx
      Co-authored-by: Stefan Schweter <stefan@schweter.it>
      
      * fix t5
      
      * actual fix?
      
      * revert wrong changes
      
      * remove
      
      * update test
      
      * more fixes
      
      * revert some changes
      
      * add SPIECE_UNDERLINE
      
      * add a common example
      
      * update
      
      * fix copies
      
      * revert changes on t5 conversion script
      
      * revert bytefallback changes since there was no addition yet
      
      * fixup
      
      * fixup
      
      * ignore umt5 custom testing folder
      
      * fix readmes
      
      * revert T5 changes
      
      * same outputs
      
      * fixup
      
      * update example
      
      * Apply suggestions from code review
      
      * style
      
      * draft addition of all new files
      
      * current update
      
      * fix attention and stuff
      
      * finish refactoring
      
      * auto config
      
      * fixup
      
      * more nits
      
      * add umt5 to init
      
      * use md format
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * revert changes on mt5
      
      * revert mt5 changes
      
      * update test
      
      * more fixes
      
      * add to mapping
      
      * fix-copies
      
      * fix copies
      
      * fix retain grad
      
      * fix some tests
      
      * nits
      
      * done
      
      * Update src/transformers/models/umt5/modeling_umt5.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/umt5.md
      
      * Update src/transformers/models/umt5/__init__.py
      
      * Update docs/source/en/model_doc/umt5.md
      Co-authored-by: Stefan Schweter <stefan@schweter.it>
      
      * Update src/transformers/models/umt5/modeling_umt5.py
      
      * update conversion script + use google checkpoints
      
      * nits
      
      * update test and modelling
      
      * stash slow convert
      
      * update fixup
      
      * don't change slow
      
      ---------
      
      Co-authored-by: stefan-it <>
      Co-authored-by: Stefan Schweter <stefan@schweter.it>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      799df10a
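      UMT5 follows the familiar T5 seq2seq API. A minimal sketch, assuming the google/umt5-small checkpoint (a raw pre-trained model, so the output is only an API illustration):

          from transformers import AutoTokenizer, UMT5ForConditionalGeneration

          tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
          model = UMT5ForConditionalGeneration.from_pretrained("google/umt5-small")

          inputs = tokenizer("The capital of France is", return_tensors="pt")
          out = model.generate(**inputs, max_new_tokens=10)
          print(tokenizer.decode(out[0], skip_special_tokens=True))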
  3. 20 Jun, 2023 1 commit
  4. 04 Apr, 2023 1 commit
    • Flax Regnet (#21867) · 90067748
      Shubhamai authored
      * initial commit
      
      * review changes
      
      * post model PR merge
      
      * updating doc
      90067748
  5. 24 Mar, 2023 1 commit
    • Resnet flax (#21472) · a0cbbba3
      Shubhamai authored
      
      
      * [WIP] flax resnet
      
      * added pretrained flax models, results reproducible
      
      * Added pretrained flax models, results reproducible
      
      * working on tests
      
      * no real code change, just some comments
      
      * [flax] adding support for batch norm layers
      
      * fixing bugs related to pt+flax integration
      
      * removing loss from modeling flax output class
      
      * fixing classifier tests
      
      * fixing comments, model output
      
      * cleaning comments
      
      * review changes
      
      * review changes
      
      * Apply suggestions from code review
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * renaming Flax to PyTorch
      
      ---------
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      a0cbbba3
  6. 14 Mar, 2023 1 commit
  7. 01 Mar, 2023 1 commit
    • Add ALIGN to transformers (#21741) · 269b0549
      Alara Dirik authored
      Adds the ALIGN model to transformers. ALIGN is introduced in "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision" by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.
      269b0549
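      ALIGN is a CLIP-style dual encoder, so zero-shot image classification follows the usual processor/model pattern. A minimal sketch, assuming the kakaobrain/align-base checkpoint:

          import requests
          import torch
          from PIL import Image
          from transformers import AlignProcessor, AlignModel

          processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
          model = AlignModel.from_pretrained("kakaobrain/align-base")

          image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
          candidates = ["a photo of a cat", "a photo of a dog"]

          inputs = processor(text=candidates, images=image, return_tensors="pt", padding=True)
          with torch.no_grad():
              outputs = model(**inputs)
          # image-text similarity scores turned into probabilities over the candidate texts
          probs = outputs.logits_per_image.softmax(dim=1)
          print(dict(zip(candidates, probs[0].tolist())))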
  8. 20 Feb, 2023 2 commits
    • Add EfficientNet (#21563) · 49ab1623
      Alara Dirik authored
      * Add EfficientNet to transformers
      49ab1623
    • add GPTSAN model (reopen) (#21291) · f56174ac
      tanreinama authored
      * add GPTSAN-Japanese
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN (update for review)
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * fix typo in comment text
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * add GPTSAN
      
      * fix document and comments
      
      * fix class name GPTSAN->GPTSan
      
      * fix import and test for tokenizer
      f56174ac
  9. 07 Feb, 2023 1 commit
    • Add XLM-V to Model Doc (#21498) · 7e51a441
      Stefan Schweter authored
      * doc: introduce new section for XLM-V model
      
      * doc: mention more details for XLM-V integration
      
      * docs: paper abstract in italics, model identifier for base model added
      
      * doc: mention new XLM-V support
      
      * auto: add XLM-V mapping
      
      * doc: run make fix-copies ;)
      7e51a441
  10. 19 Jan, 2023 1 commit
    • Add OneFormer Model (#20577) · 5b949623
      Jitesh Jain authored
      * Add Oneformer Model
      
      * Add OneFormer Tests
      
      * Add UNIVERSAL_SEGMENTATION_MAPPING
      
      * Fix config
      
      * 🐛 Fix error encountered while writing tests
      
      * 🔨 Fix instance segmentation post processing
      
      * Format Files and Add Documentation
      
      * Add Documentation mdx file
      
      * Run make fixup
      
      * Run make fix-copies
      
      * Remove unnecessary code
      
      * Format modeling_oneformer.py
      
      * Add OneFormer to ImageSegmentationPipeline
      
      * Format files
      
      * Add Demo link to Readme
      
      * Fix formatting errors
      
      * Fix test failures
      
      * Update Table in index.mdx
      
      * Fix version
      
      * Fix style
      
      * Remove OneFormer from TF
      
      * Fix Imports
      
      * Fix dummy objects
      
      * Fix tests
      
      * Add newline
      
      * Remove OneFormerFeatureExtractor
      
      * Remove CUDA Kernels
      
      * Use AutoBackbone for Swin
      
      * Fix description
      
      * Use Image Processor
      
      * Fix copies
      
      * Fix formatting
      
      * Fix import order
      
      * Fix flake8 errors
      
      * Fix doc errors
      
      * Add Hindi Readme entry
      
      * Update supported backbones
      
      * Update supported backbones
      
      * Undo Changes
      
      * Fix type of config
      
      * Fix isort
      
      * Fix auto.mdx
      
      * Fix swin config
      
      * Replace DinatBackbone with AutoBackbone
      
      * Use SwinBackbone
      
      * Use SwinBackbone
      
      * Fix conversion script
      
      * Fix arguments
      
      * Add argument description
      
      * Fix style
      
      * Add OneFormerProcessor
      
      * Fix OneFormerProcessor Tests
      
      * Fix mapping
      
      * Fix imports
      
      * Fix inits
      
      * Fix style
      
      * Fix comment
      
      * Fix docstring
      
      * Move OneFormer to MultiModal
      
      * Fix Copies
      
      * Remove size divisor
      
      * Fix check_repo.py
      
      * Fix copies
      
      * Add Processor for Testing Pipeline
      
      * Fix padding for tokens
      
      * Fix variables
      
      * Fix formatting with correct black version
      
      * Add Image Processor Test
      
      * Apply suggestions
      
      * Revert common modeling
      
      * Add check for task
      
      * Fix conversion script
      
      * Fix initialization order
      
      * Fix tests
      
      * Undo Pipeline Changes
      
      * Fix layers in MLP
      
      * Fix copies
      
      * Update image paths
      
      * Fix copies
      
      * Apply suggestions
      5b949623
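      OneFormer is task-conditioned: the processor takes a task_inputs prompt alongside the image, and the same checkpoint serves semantic, instance and panoptic segmentation. A minimal semantic-segmentation sketch, assuming the shi-labs/oneformer_ade20k_swin_tiny checkpoint:

          import requests
          import torch
          from PIL import Image
          from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

          processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
          model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")

          image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

          # the task prompt selects the segmentation task for this forward pass
          inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
          with torch.no_grad():
              outputs = model(**inputs)

          # (height, width) map of class ids at the original image resolution
          semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
          print(semantic_map.shape)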
  11. 16 Jan, 2023 1 commit
  12. 21 Nov, 2022 1 commit
  13. 15 Nov, 2022 1 commit
  14. 08 Sep, 2022 1 commit
  15. 11 Aug, 2022 1 commit
    • german docs translation (#18544) · 5d3f0374
      flozi00 authored
      * Create _config.py
      
      * Create _toctree.yml
      
      * Create index.mdx
      
      not sure about "du / ihr" or "sie"
      
      * Create quicktour.mdx
      
      * Update _toctree.yml
      
      * Update build_documentation.yml
      
      * Update build_pr_documentation.yml
      
      * fix build
      
      * Update index.mdx
      
      * Update quicktour.mdx
      
      * Create installation.mdx
      
      * Update _toctree.yml
      5d3f0374
  16. 04 Aug, 2022 1 commit
    • Add VideoMAE (#17821) · f9a0008d
      NielsRogge authored
      
      
      * First draft
      
      * Add VideoMAEForVideoClassification
      
      * Improve conversion script
      
      * Add VideoMAEForPreTraining
      
      * Add VideoMAEFeatureExtractor
      
      * Improve VideoMAEFeatureExtractor
      
      * Improve docs
      
      * Add first draft of model tests
      
      * Improve VideoMAEForPreTraining
      
      * Fix base_model_prefix
      
      * Make model take pixel_values of shape (B, T, C, H, W)
      
      * Add loss computation of VideoMAEForPreTraining
      
      * Improve tests
      
      * Improve model tests
      
      * Make all tests pass
      
      * Add VideoMAE to main README
      
      * Add tests for VideoMAEFeatureExtractor
      
      * Add integration test
      
      * Improve conversion script
      
      * Rename patch embedding class
      
      * Remove VideoMAELayer from init
      
      * Update design of patch embeddings
      
      * Improve comments
      
      * Improve conversion script
      
      * Improve conversion script
      
      * Add conversion of pretrained model
      
      * Add loss verification of pretrained model
      
      * Add loss verification of unnormalized targets
      
      * Add integration test for pretraining model
      
      * Apply suggestions from code review
      
      * Fix bug to make feature extractor resize only shorter edge
      
      * Address more comments
      
      * Improve normalization of videos
      
      * Add doc examples
      
      * Move constants to dedicated script
      
      * Remove scripts
      
      * Transfer checkpoints, fix docs
      
      * Update script
      
      * Update image mean and std
      
      * Fix doc tests
      
      * Set return_tensors to NumPy by default
      
      * Revert the previous change
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      f9a0008d
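      Since the model takes pixel_values of shape (B, T, C, H, W), a classification forward pass looks as follows. A minimal sketch with a random clip, assuming the MCG-NJU/videomae-base-finetuned-kinetics checkpoint:

          import torch
          from transformers import VideoMAEForVideoClassification

          model = VideoMAEForVideoClassification.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")

          # (batch, num_frames, channels, height, width); 16 frames matches the pre-training setup
          pixel_values = torch.randn(1, 16, 3, 224, 224)
          with torch.no_grad():
              logits = model(pixel_values=pixel_values).logits
          print(model.config.id2label[logits.argmax(-1).item()])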
  17. 27 Jul, 2022 1 commit
    • Add swin transformer v2 (#17469) · e87ac9d1
      Ritik Nandwal authored
      
      
      * Add files generated using transformer-cli add-new-model-like command
      
      * Add changes for swinv2 attention and forward method
      
      * Add fixes
      
      * Add modifications for weight conversion and remaining args in swin model
      
      * Add changes for patchmerging
      
      * Add changes for SwinV2selfattention
      
      * Update conversion script
      
      * Add final fixes for the swin_v2 model
      
      * Add changes for conversion script for pretrained window size case
      
      * Add pretrained window size value from config in SwinV2Encoder class
      
      * Make fixup
      
      * Add swinv2 to models_not_in_readme to utils/check_copies.py
      
      * Modify Swinv2 to Swin Transformer V2
      
      * Remove copied from, to run make fixup command
      
      * Add updates to swinv2tf from main branch
      
      * Add pretrained_window_size to config, to make tests pass
      
      * Add modified weights from nandwalritik profile for swinv2
      
      * Update model weights from swinv2 from nandwalritik profile
      
      * Add fix for build_pr_documentation CI fix
      
      * Add fixes for weight conversion
      
      * Add change to make input with padding work
      
      * Add fixes for test cases
      
      * Add few changes from swin to swinv2 to pass test cases
      
      * Remove tests for tensorflow as swinv2 for TF is not added yet
      
      * Override test_pt_tf_model_equivalence function as TF implementation for swinv2 is not added yet
      
      * Add modeling_tf_swinv2 to _ignore_modules as test file is removed for this one right now.
      
      * Update docs url for swinv2 in README.md
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Undo changes for check_repo
      
      * Update url in readme.md
      
      * Remove overrided function to test pt_tf_model_equivalence
      
      * Remove TF model imports for Swinv2 as its not implemented in this PR
      
      * Add changes for index.mdx
      
      * Add swinv2 papers link,abstract and contributors details
      
      * Rename cpb_mlp to continous_position_bias_mlp
      
      * Add tips for swinv2 model
      
      * Update src/transformers/models/swinv2/configuration_swinv2.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/swinv2/configuration_swinv2.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Fix indentation for docstring example in src/transformers/models/swinv2/configuration_swinv2.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update import order in src/transformers/models/swinv2/configuration_swinv2.py
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Add copyright statements in weights conversion script.
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Remove Swinv2 from models_not_in_readme
      
      * Reformat code
      
      * Remove TF implementation file for swinv2
      
      * Update start docstring.
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Add changes for docstring
      
      * Update orgname for weights to microsoft
      
      * Remove to_2tuple function
      
      * Add copied from statements wherever applicable
      
      * Add copied from to Swinv2ForMaskedImageModelling class
      
      * Reformat code.
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Add unittest.skip(with reason.) for test_inputs_embeds test case.
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Add updates for test_modeling_swinv2.py
      
      * Add @unittest.skip() annotation for clarity to create_and_test_config_common_properties function
      
      * Add continuous_position_bias_mlp parameter to conversion script
      
      * Add test for testing masked_image_modelling for swinv2
      
      * Update Swinv2 to Swin Transformer v2 in docs/source/en/model_doc/swinv2.mdx
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update Swinv2 to Swin Transformer v2 in docs/source/en/model_doc/swinv2.mdx
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/swinv2.mdx
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/swinv2.mdx
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Add suggested changes
      
      * Add copied from to forward methods of Swinv2Stage and Swinv2Encoder
      
      * Add push_to_hub flag to weight conversion script
      
      * Change order or Swinv2DropPath class
      
      * Add id2label mapping for imagenet 21k
      
      * Add updated url for SwinV2 functions and classes used in implementation
      
      * Update input_feature dimensions format, mentioned in comments.
      Co-authored-by: Alara Dirik <8944735+alaradirik@users.noreply.github.com>
      
      * Add suggested changes for modeling_swin2.py
      
      * Update docs
      
      * Remove create_and_test_config_common_properties function, as test_model_common_attributes is sufficient.
      
      * Fix indentation.
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Add changes for making Nit objects in code style
      
      * Add suggested changes
      
      * Add suggested changes for test_modelling_swinv2
      
      * make fix-copies
      
      * Update docs/source/en/model_doc/swinv2.mdx
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: Alara Dirik <8944735+alaradirik@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      e87ac9d1
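      The converted checkpoints plug into the standard image-classification API. A minimal sketch, assuming the microsoft/swinv2-tiny-patch4-window8-256 checkpoint:

          import requests
          import torch
          from PIL import Image
          from transformers import AutoImageProcessor, Swinv2ForImageClassification

          processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
          model = Swinv2ForImageClassification.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")

          image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
          inputs = processor(images=image, return_tensors="pt")
          with torch.no_grad():
              logits = model(**inputs).logits
          print(model.config.id2label[logits.argmax(-1).item()])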
  18. 22 Jul, 2022 1 commit
    • Add OWL-ViT model for zero-shot object detection (#17938) · 12d66b47
      Alara Dirik authored
      * add owlvit model skeleton
      
      * add class and box predictor heads
      
      * convert modified flax clip to pytorch
      
      * fix box and class predictors
      
      * add OwlViTImageTextEmbedder
      
      * convert class and box head checkpoints
      
      * convert image text embedder checkpoints
      
      * add object detection head
      
      * fix bugs
      
      * update conversion script
      
      * update conversion script
      
      * fix q,v,k,out weight conversion
      
      * add owlvit object detection output
      
      * fix bug in image embedder
      
      * fix bugs in text embedder
      
      * fix positional embeddings
      
      * fix bug in inference mode vision pooling
      
      * update docs, init tokenizer and processor files
      
      * support batch processing
      
      * add OwlViTProcessor
      
      * remove merge conflicts
      
      * readd owlvit imports
      
      * fix bug in OwlViTProcessor imports
      
      * fix bugs in processor
      
      * update docs
      
      * fix bugs in processor
      
      * update owlvit docs
      
      * add OwlViTFeatureExtractor
      
      * style changes, add postprocess method to feature extractor
      
      * add feature extractor and processor tests
      
      * add object detection tests
      
      * update conversion script
      
      * update config paths
      
      * update config paths
      
      * fix configuration paths and bugs
      
      * fix bugs in OwlViT tests
      
      * add import checks to processor
      
      * fix docs and minor issues
      
      * fix docs and minor issues
      
      * fix bugs and issues
      
      * fix bugs and issues
      
      * fix bugs and issues
      
      * fix bugs and issues
      
      * update docs and examples
      
      * fix bugs and issues
      
      * update conversion script, fix positional embeddings
      
      * process 2D input ids, update tests
      
      * fix style and quality issues
      
      * update docs
      
      * update docs and imports
      
      * update OWL-ViT index.md
      
      * fix bug in OwlViT feature ext tests
      
      * fix code examples, return_dict by default
      
      * return_dict by default
      
      * minor fixes, add tests to processor
      
      * small fixes
      
      * add output_attentions arg to main model
      
      * fix bugs
      
      * remove output_hidden_states arg from main model
      
      * update self.config variables
      
      * add option to return last_hidden_states
      
      * fix bug in config variables
      
      * fix copied from statements
      
      * fix small issues and bugs
      
      * fix bugs
      
      * fix bugs, support greyscale images
      
      * run fixup
      
      * update repo name
      
      * merge OwlViTImageTextEmbedder with obj detection head
      
      * fix merge conflict
      
      * fix merge conflict
      
      * make fixup
      
      * fix bugs
      
      * fix bugs
      
      * add additional processor test
      12d66b47
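      Detection queries are free text and go through OwlViTProcessor together with the image. A minimal sketch, assuming the google/owlvit-base-patch32 checkpoint:

          import requests
          import torch
          from PIL import Image
          from transformers import OwlViTProcessor, OwlViTForObjectDetection

          processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
          model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

          image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
          queries = [["a photo of a cat", "a remote control"]]  # one list of text queries per image

          inputs = processor(text=queries, images=image, return_tensors="pt")
          with torch.no_grad():
              outputs = model(**inputs)
          # per-query classification logits and predicted boxes for every image patch
          print(outputs.logits.shape, outputs.pred_boxes.shape)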
  19. 21 Jul, 2022 1 commit
    • [SegFormer] TensorFlow port (#17910) · 561b9a8c
      Sayak Paul authored
      
      
      * add: segformer utils and img. classification.
      
      * add: segmentation layer.
      
      * feat: working implementation of segformer.
      
      * chore: remove unused variable.
      
      * add test, remaining modifications.
      
      * remove: unnecessary files.
      
      * add: rest of the files.
      Co-authored-by: matt <rocketknight1@gmail.com>
      
      * chore: remove ModuleList comment.
      
      * chore: apply make style.
      
      * chore: apply make fixup-copies.
      
      * add  to check_repo.py
      
      * add decode head to IGNORE_NON_TESTED
      
      * chore: run make style.
      
      * chore: PR comments.
      
      * chore: minor changes to model doc.
      
      * tests: reduction across samples.
      
      * add a note on the space.
      
      * sort imports.
      
      * fix: reduction in loss computation.
      
      * chore: align loss function with that of NER.
      
      * chore: correct utils/documentation_tests.txt
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * chore: simplify the interpolation of logits in loss computation.
      
      * chore: return transposed logits when return_dict=False.
      
      * chore: add link to the tf fine-tuning repo.
      
      * address pr comments.
      
      * address niels's comments.
      
      * remove from_pt=True since tf weights are in.
      
      * remove comment from pt model.
      
      * address niels's comments.
      Co-authored-by: matt <rocketknight1@gmail.com>
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      561b9a8c
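      With the TensorFlow port, semantic segmentation mirrors the PyTorch usage. A minimal sketch, assuming the nvidia/segformer-b0-finetuned-ade-512-512 checkpoint:

          import requests
          from PIL import Image
          from transformers import AutoImageProcessor, TFSegformerForSemanticSegmentation

          processor = AutoImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
          model = TFSegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")

          image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
          inputs = processor(images=image, return_tensors="tf")
          outputs = model(**inputs)
          # logits come out at 1/4 of the input resolution: (batch, num_labels, height/4, width/4)
          print(outputs.logits.shape)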
  20. 18 Jul, 2022 1 commit
  21. 13 Jul, 2022 1 commit
  22. 04 Jul, 2022 1 commit
  23. 29 Jun, 2022 3 commits
  24. 28 Jun, 2022 1 commit
  25. 24 Jun, 2022 1 commit
    • Add CodeGen model (#17443) · d6b6fb99
      rooa authored
      
      
      * Add CodeGen model
      
      * Add missing key and switch order of super()
      
      * Fix torch.ones init with uint8 instead of bool
      
      * Address comments: copy statements and doc
      
      * update tests
      
      * remove old model parallel
      
      * fix batch gen tests
      
      * fix batch gen test
      
      * update test_gpt2_sample_max_time
      
      * fix codgen test and revert gpt2 test change
      
      * Fix incorrect tie_word_embedding value, typo, URL
      
      * Fix model order in README and styling
      
      * Reorder model list alphabetically
      
      * Set tie_word_embedding to False by default
      
      * Apply suggestions from code review
      
      * Better attn mask name & remove attn masked_bias
      
      * add tokenizer for codegen
      
      * quality
      
      * doc tokenizer
      
      * fix-copies
      
      * add CodeGenTokenizer in converter
      
      * make truncation optional
      
      * add test for truncation
      
      * add copyright
      
      * fix-copies
      
      * fix fast tokenizer decode
      
      * Update src/transformers/models/codegen/tokenization_codegen.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * increase vocab_size in tests
      Co-authored-by: patil-suraj <surajp815@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      d6b6fb99
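      CodeGen plugs into the causal-LM API for program synthesis. A minimal sketch, assuming the Salesforce/codegen-350M-mono checkpoint:

          from transformers import AutoTokenizer, CodeGenForCausalLM

          tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
          model = CodeGenForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")

          inputs = tokenizer("def hello_world():", return_tensors="pt")
          # greedy completion of the function body
          out = model.generate(**inputs, max_new_tokens=32)
          print(tokenizer.decode(out[0], skip_special_tokens=True))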
  26. 23 Jun, 2022 1 commit
  27. 21 Jun, 2022 1 commit
  28. 15 Jun, 2022 1 commit
  29. 13 Jun, 2022 1 commit
    • Add `LongT5` model (#16792) · a72f1c9f
      Daniel Stancl authored
      
      
      * Initial commit
      
      * Make some fixes
      
      * Make PT model full forward pass
      
      * Drop TF & Flax implementation, fix copies etc
      
      * Add Flax model and update some corresponding stuff
      
      * Drop some TF things
      
      * Update config and flax local attn
      
      * Add encoder_attention_type to config
      
      * .
      
      * Update docs
      
      * Do some cleansing
      
      * Fix some issues -> make style; add some docs
      
      * Fix position_bias + mask addition + Update tests
      
      * Fix repo consistency
      
      * Fix model consistency by removing flax operation over attn_mask
      
      * [WIP] Add PT TGlobal LongT5
      
      * .
      
      * [WIP] Add flax tglobal model
      
      * [WIP] Update flax model to use the right attention type in the encoder
      
      * Fix flax tglobal model forward pass
      
      * Make use of global_relative_attention_bias
      
      * Add test suites for TGlobal model
      
      * Fix minor bugs, clean code
      
      * Fix pt-flax equivalence though not convinced with correctness
      
      * Fix LocalAttn implementation to match the original impl. + update READMEs
      
      * Few updates
      
      * Update: [Flax] improve large model init and loading #16148
      
      * Add ckpt conversion script according to #16853 + handle torch device placement
      
      * Minor updates to conversion script.
      
      * Typo: AutoModelForSeq2SeqLM -> FlaxAutoModelForSeq2SeqLM
      
      * gpu support + dtype fix
      
      * Apply some suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * * Remove (de)parallelize stuff
      * Edit shape comments
      * Update README.md
      * make fix-copies
      
      * Remove caching logic for local & tglobal attention
      
      * Apply another batch of suggestions from code review
      
      * Add missing checkpoints
      * Format converting scripts
      * Drop (de)parallelize links from longT5 mdx
      
      * Fix converting script + revert config file change
      
      * Revert "Remove caching logic for local & tglobal attention"
      
      This reverts commit 2a619828f6ddc3e65bd9bb1725a12b77fa883a46.
      
      * Stash caching logic in Flax model
      
      * Make side relative bias used always
      
      * Drop caching logic in PT model
      
      * Return side bias as it was
      
      * Drop all remaining model parallel logic
      
      * Remove clamp statements
      
      * Move test files to the proper place
      
      * Update docs with new version of hf-doc-builder
      
      * Fix test imports
      
      * Make some minor improvements
      
      * Add missing checkpoints to docs
      * Make TGlobal model compatible with torch.onnx.export
      * Replace some np.ndarray with jnp.ndarray
      
      * Fix TGlobal for ONNX conversion + update docs
      
      * fix _make_global_fixed_block_ids and masked neg  value
      
      * update flax model
      
      * style and quality
      
      * fix imports
      
      * remove load_tf_weights_in_longt5 from init and fix copies
      
      * add slow test for TGlobal model
      
      * typo fix
      
      * Drop obsolete is_parallelizable and one warning
      
      * Update __init__ files to fix repo-consistency
      
      * fix pipeline test
      
      * Fix some device placements
      
      * [wip]: Update tests -- need to generate summaries to update expected_summary
      
      * Fix quality
      
      * Update LongT5 model card
      
      * Update (slow) summarization tests
      
      * make style
      
      * rename checkpoints
      
      * finish
      
      * fix flax tests
      Co-authored-by: phungvanduy <pvduy23@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: patil-suraj <surajp815@gmail.com>
      a72f1c9f
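      LongT5 drops in wherever T5 is used, with local or transient-global attention for long inputs. A minimal sketch, assuming the google/long-t5-tglobal-base checkpoint (not fine-tuned, so the output is only an API illustration):

          from transformers import AutoTokenizer, LongT5ForConditionalGeneration

          tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
          model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-base")

          long_document = "summarize: " + "Transformers provides thousands of pretrained models. " * 200
          inputs = tokenizer(long_document, return_tensors="pt", truncation=True, max_length=4096)
          out = model.generate(**inputs, max_new_tokens=64)
          print(tokenizer.decode(out[0], skip_special_tokens=True))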
  30. 09 Jun, 2022 1 commit
  31. 07 Jun, 2022 1 commit
    • M-CTC-T Model (#16402) · 119e3c0f
      Chan Woo Kim authored
      
      
      * added cbs to notebooks, made copy-paste error fix in generation_utils
      
      * initial push for mctc model
      
      * mctc feature extractor done
      
      * added processor, tokenizer and their tests for MCTC. Have added an MCTC modeling test, adjusting model code accordingly.
      
      * added processor, tokenizer and their tests for MCTC. Have added an MCTC modeling test, adjusting model code accordingly.
      
      * passing attention, now struggling to figure out how attention masks make sense here
      
      * works when excluding attention masks. ask later how one would integrate attention masks here
      
      * bizarre configuration error (model prefix comes first in config dict json and messes up the order)
      
      * all passing but bizarre config dict ordering issue when to_dict
      
      * passing all major tests
      
      * feature extraction, processor, tokenizer added & tests passing
      
      * style & consistency & other logistical fixes
      
      * copy paste fix
      
      * model after feature extraction working
      
      * committing final feature extraction results; need to fix normalization
      
      * feature extraction passing tests; probably should add tests on the specific flashlight-copied functions?
      
      * delete print ; format code a bit
      
      * fixing tests
      
      * passing major tests
      
      * fixing styles
      
      * completed tokenization test with real example; not sure if these values are entirely correct.
      
      * last test fixes from local
      
      * reverting accidentally included custom setup configs
      
      * remove load tf weights; fix config error
      
      * testing couldn't import feature extractor
      
      * fix docs
      
      * fix docs
      
      * resolving comments
      
      * style fixes
      
      * style fixes
      
      * Update to MCTCConv1dSubSampler
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * relposemb fixes
      
      * conv1d name issue; expecting config fail with parentheses
      
      * fix config issue
      
      * fix config issue
      
      * fix config issue
      
      * change everything to MCTCT
      
      * fixing naming change errors
      
      * archive list
      
      * copyrights and docs
      
      * copyrights and docs
      
      * copyrights and docs
      
      * merge resolution
      
      * move tests, fix to changed optional dependency structure
      
      * test directories changed
      
      * fixing tests
      
      * how to avoid tf tests?
      
      * how to avoid tf tests?
      
      * tests passing locally
      
      * allow MCTCTProcessor to be imported in any env
      
      * allow MCTCTProcessor to be imported in any env
      
      * fixed second round of feedback, need to fix docs
      
      * doc changes not being applied
      
      * all fixed
      
      * style fix
      
      * feedback fixes
      
      * fix copies and feature extraction style fix
      
      * Update tests/models/visual_bert/test_modeling_visual_bert.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * copy paste huggingface:main visual bert
      
      * added eof newline to visual bert; all tests are passing otherwise
      
      * fix slow tests by adding attention mask
      
      * change model id to speechbrain
      
      * make fix-copies
      
      * fix readme unwanted deletes
      
      * fixing readmes, make fix-copies
      
      * consistent M-CTC-T naming
      
      * Update src/transformers/models/mctct/__init__.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * all fixed but variable naming
      
      * adjust double quotes
      
      * fixed variable names
      
      * copyright and mr quilter
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * correct slow tests
      
      * make fix-copies
      
      * Update src/transformers/models/mctct/configuration_mctct.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/mctct/configuration_mctct.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * m-ctc-t not mctct
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      119e3c0f
  32. 03 Jun, 2022 1 commit
    • Update index.mdx (#17547) · 72f5b949
      Britney Muller authored
      This PR updates our Expert Acceleration Program image with a new image featuring our experts.
      
      This is similar to our Transformers/README.md image update that has proven to be successful.
      72f5b949
  33. 02 Jun, 2022 1 commit
  34. 01 Jun, 2022 1 commit
  35. 31 May, 2022 1 commit
    • Opt in flax and tf (#17388) · 7822a9b7
      Arthur authored
      
      
      * initial commit
      
      * add init file
      
      * update global init
      
      * update index and dummy objects
      
      * style
      
      * update modelling auto
      
      * fix init typo in src/transformers
      
      * fix typo in modeling tf auto, opt was in wrong mapping name
      
      * fixed a slow test: saved_model
      
      * style
      
      * fix positional embedding if no position id is provided
      
      * update tf test
      
      * update test flax requirements
      
      * fixed serialization
      
      * update
      
      * update tf name to allow smooth conversion
      
      * update flax tests
      
      * style
      
      * fix test typo
      
      * fix tf typo test
      
      * add xla for generate support in causal LM
      
      * fixed bug
      
      * cleaned tf tests
      
      * style
      
      * removed from PT for slow tests
      
      * fix typo
      
      * opt test as slow
      
      * trying to fix GPT2 undefined
      
      * correct documentation and add to test doc
      
      * update tf doc
      
      * fix doc
      
      * fake commit
      
      * Apply suggestions from code review
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      
      * update test based on review
      
      * merged main layer for functioning test
      
      * fixup + quality
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * update long comment
      
      * make fix copies
      Co-authored-by: Arthur <arthur@huggingface.co>
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      7822a9b7
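      The new Flax and TF classes mirror the existing PyTorch OPT. A minimal Flax forward-pass sketch, assuming the facebook/opt-350m checkpoint (from_pt=True converts the PyTorch weights in case no Flax weights are hosted):

          from transformers import AutoTokenizer, FlaxOPTForCausalLM

          tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
          # convert from the PyTorch checkpoint if Flax weights are not available
          model = FlaxOPTForCausalLM.from_pretrained("facebook/opt-350m", from_pt=True)

          inputs = tokenizer("Hello, my name is", return_tensors="np")
          outputs = model(**inputs)
          # next-token logits taken from the last position
          print(outputs.logits[:, -1, :].shape)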
  36. 24 May, 2022 2 commits
    • [WIP] Adding GPT-NeoX-20B (#16659) · 71e60272
      Jason Phang authored
      
      
      * initial
      
      * first try
      
      * working 20B
      
      * 20B tokenizers
      
      * Docs
      
      * Import fixes for missing classes
      
      * Update docs, fixup
      
      * black formatting
      
      * isort
      
      * flake
      
      * dummy objects
      
      * documentation
      
      * Documentation yml
      
      * more docs
      
      * tweaks for tests
      
      * tokenization auto
      
      * fix neox tests
      
      * test
      
      * test
      
      * einsum
      
      * address PR feedback
      
      * Documentation
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/gpt_neox/__init__.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/gpt_neox/configuration_gpt_neox.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Remove undefined LaTeX syntax
      
      * Update to full url to avoid confusion about if that's supposed to refer to the Hub
      
      * fix auto
      
      * move tests
      
      * documentation fix
      
      * more doc fixes
      
      * test refactor
      
      * fix import
      
      * fix import
      
      * fix import
      
      * fix import
      
      * fix import
      
      * style fixes
      
      * More modeling fixes
      Co-authored-by: Jason Phang <zp489@gr057.hpc.nyu.edu>
      Co-authored-by: Stella Biderman <stellabiderman@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      71e60272
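      GPT-NeoX-20B uses the standard causal-LM interface; the full checkpoint needs tens of GB of memory, so the following is only an API sketch assuming the EleutherAI/gpt-neox-20b checkpoint:

          import torch
          from transformers import AutoTokenizer, GPTNeoXForCausalLM

          tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
          # fp16 halves the memory footprint compared to fp32
          model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b", torch_dtype=torch.float16)

          inputs = tokenizer("GPT-NeoX-20B is", return_tensors="pt")
          out = model.generate(**inputs, max_new_tokens=20)
          print(tokenizer.decode(out[0], skip_special_tokens=True))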
    • Add LayoutLMv3 (#17060) · 31ee80d5
      NielsRogge authored
      
      
      * Make forward pass work
      
      * More improvements
      
      * Remove unused imports
      
      * Remove timm dependency
      
      * Improve loss calculation of token classifier
      
      * Fix most tests
      
      * Add docs
      
      * Add model integration test
      
      * Make all tests pass
      
      * Add LayoutLMv3FeatureExtractor
      
      * Improve integration test + make fixup
      
      * Add example script
      
      * Fix style
      
      * Add LayoutLMv3Processor
      
      * Fix style
      
      * Add option to add visual labels
      
      * Make more tokenizer tests pass
      
      * Fix more tests
      
      * Make more tests pass
      
      * Fix bug and improve docs
      
      * Fix import of processors
      
      * Improve docstrings
      
      * Fix toctree and improve docs
      
      * Fix auto tokenizer
      
      * Move tests to model folder
      
      * Move tests to model folder
      
      * change default behavior add_prefix_space
      
      * add prefix space for fast
      
      * add_prefix_space set to True for Fast
      
      * no space before `unique_no_split` token
      
      * add test to highlight special treatment of added tokens
      
      * fix `test_batch_encode_dynamic_overflowing` by building a long enough example
      
      * fix `test_full_tokenizer` with add_prefix_token
      
      * Fix tokenizer integration test
      
      * Make the code more readable
      
      * Add tests for LayoutLMv3Processor
      
      * Fix style
      
      * Add model to README and update init
      
      * Apply suggestions from code review
      
      * Replace asserts by value errors
      
      * Add suggestion by @ducviet00
      
      * Add model to doc tests
      
      * Simplify script
      
      * Improve README
      
      * a step ahead to fix
      
      * Update pair_input_test
      
      * Make all tokenizer tests pass - phew
      
      * Make style
      
      * Add LayoutLMv3 to CI job
      
      * Fix auto mapping
      
      * Fix CI job name
      
      * Make all processor tests pass
      
      * Make tests of LayoutLMv2 and LayoutXLM consistent
      
      * Add copied from statements to fast tokenizer
      
      * Add copied from statements to slow tokenizer
      
      * Remove add_visual_labels attribute
      
      * Fix tests
      
      * Add link to notebooks
      
      * Improve docs of LayoutLMv3Processor
      
      * Fix reference to section
      Co-authored-by: SaulLu <lucilesaul.com@gmail.com>
      Co-authored-by: Niels Rogge <nielsrogge@Nielss-MacBook-Pro.local>
      31ee80d5
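      LayoutLMv3Processor wraps the image processor and tokenizer; with apply_ocr=False you supply your own words and 0-1000 normalized boxes. A minimal token-classification sketch, assuming the microsoft/layoutlmv3-base checkpoint and made-up OCR results (the words, boxes and num_labels below are purely illustrative):

          import torch
          from PIL import Image
          from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification

          processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
          model = LayoutLMv3ForTokenClassification.from_pretrained("microsoft/layoutlmv3-base", num_labels=7)

          image = Image.new("RGB", (224, 224), "white")  # stand-in for a document scan
          words = ["Invoice", "Total:", "42.00"]          # hypothetical OCR words
          boxes = [[10, 10, 80, 30], [10, 40, 60, 60], [70, 40, 120, 60]]  # 0-1000 normalized boxes

          encoding = processor(image, words, boxes=boxes, return_tensors="pt")
          with torch.no_grad():
              logits = model(**encoding).logits  # (batch, seq_len, num_labels)
          print(logits.shape)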