1. 30 Nov, 2021 3 commits
    • VisionTextDualEncoder (#13511) · fc1d97f2
      Suraj Patil authored
      
      
      * init vision_text_dual_encoder
      
      * fix merge
      
      * remove extra heads
      
      * fix tests
      
      * remove VISION_TEXT_DUAL_ENCODER_PRETRAINED_CONFIG_ARCHIVE_MAP
      
      * remove archive map
      
      * fix imports
      
      * fix more imports
      
      * fix init
      
      * delete tokenizers
      
      * fix imports
      
      * clean
      
      * support clip's vision model
      
      * handle None config
      
      * begin tests
      
      * more test and few fixes
      
      * warn about newly init weights
      
      * more tests
      
      * add loss to model
      
      * remove extra classes from doc
      
      * add processor
      
      * doc and small fixes
      
      * add start docstr
      
      * update flax model
      
      * flax tests
      
      * more flax tests
      
      * doc
      
      * quality
      
      * doc and quality
      
      * fix doc
      
      * doc
      
      * remove comments
      
      * update warning
      
      * quality
      
      * fix docs
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * replace asserts, fix imports
      
      * update imports
      
      * fix import
      
      * address some review comments
      
      * fix check
      
      * reduce tolerance
      
      * fix test
      
      * add flax integration test
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * address Sylvain's comments
      
      * fix style
      
      * add pt_flax_equivalence test in PT tests
      
      * add pt integration test
      
      * update test
      
      * use pre-trained checkpoint in examples
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
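
      A minimal usage sketch of the dual encoder added here, assuming CLIP's vision tower and RoBERTa as the text encoder; the checkpoint names and the placeholder image are illustrative, not taken from this PR:

        from PIL import Image
        from transformers import (
            VisionTextDualEncoderModel,
            VisionTextDualEncoderProcessor,
            AutoFeatureExtractor,
            AutoTokenizer,
        )

        # Wrap a pre-trained vision encoder and text encoder; the projection layers are newly initialized.
        model = VisionTextDualEncoderModel.from_vision_text_pretrained(
            "openai/clip-vit-base-patch32", "roberta-base"
        )
        processor = VisionTextDualEncoderProcessor(
            AutoFeatureExtractor.from_pretrained("openai/clip-vit-base-patch32"),
            AutoTokenizer.from_pretrained("roberta-base"),
        )

        image = Image.new("RGB", (224, 224))  # placeholder image
        inputs = processor(text=["a photo of a cat"], images=image, return_tensors="pt", padding=True)
        outputs = model(**inputs, return_loss=True)  # CLIP-style contrastive loss added in this PR
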
    • [Flax] Add FlaxBlenderbot (#13633) · faacd747
      Daniel Stancl authored
      
      
      * Init Flax implementation for Blenderbot
      
      * Add a majority of stuff except for tests
      
      * make style quality
      
      * Add tests and fix some bugs
      
      * Add tests
      
      * Clean source code and fix some bugs
      
      * Fix copies and docs
      
      * Fix jax device condition for tests
      
      * Fix layer norm in the encoder
      
      * Fix a few typos in the test file
      
      * make fix-copies
      
      * make fix-copies
      
      * fix layer norm
      
      * Fix Flax params dtype (#13090)
      
      * Fix PR reference (#13098)
      
      * make fix-copies
      
      * Update tests/test_modeling_flax_blenderbot.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
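
      A hedged generation sketch for the new Flax port; the checkpoint name is illustrative, and from_pt=True assumes only PyTorch weights are available and converts them on the fly:

        from transformers import BlenderbotTokenizer, FlaxBlenderbotForConditionalGeneration

        tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
        model = FlaxBlenderbotForConditionalGeneration.from_pretrained(
            "facebook/blenderbot-400M-distill", from_pt=True
        )

        inputs = tokenizer(["My friends are cool but they eat too many carbs."], return_tensors="np")
        reply_ids = model.generate(inputs.input_ids, max_length=32).sequences
        print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
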
    • Tapas tf (#13393) · c468a87a
      Kamal Raj authored
      * TF Tapas first commit
      
      * updated docs
      
      * updated logger message
      
      * updated pytorch weight conversion
      script to support scalar array
      
      * added use_cache to tapas model config to
      work properly with tf input_processing
      
      * 1. rm embeddings_sum
      2. added # Copied
      3. + TFTapasMLMHead
      4. and lot other small fixes
      
      * updated docs
      
      * + test for tapas
      
      * updated testing_utils to check
      is_tensorflow_probability_available
      
      * converted model logits post processing using
      numpy to work with both PT and TF models
      
      * + TFAutoModelForTableQuestionAnswering
      
      * added TF support
      
      * added test for
      TFAutoModelForTableQuestionAnswering
      
      * added test for
      TFAutoModelForTableQuestionAnswering pipeline
      
      * updated auto model docs
      
      * fixed typo in import
      
      * added tensorflow_probability to run tests
      
      * updated MLM head
      
      * updated tapas.rst with TF model docs
      
      * fixed optimizer import in docs
      
      * updated convert to np
      data from pt model is not
      `transformers.tokenization_utils_base.BatchEncoding`
      after pipeline upgrade
      
      * updated pipeline:
      1. with torch.no_grad removed, pipeline forward handles
      2. token_type_ids converted to numpy
      
      * updated docs.
      
      * removed `use_cache` from config
      
      * removed floats_tensor
      
      * updated code comment
      
      * updated Copyright Year and
      logits_aggregation Optional
      
      * updated docs and comments
      
      * updated docstring
      
      * fixed model weight loading
      
      * make fixup
      
      * fix indentation
      
      * added tf slow pipeline test
      
      * pip upgrade
      
      * upgrade python to 3.7
      
      * removed from_pt from tests
      
      * revert commit f18cfa9
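
      A hedged sketch of table question answering with the new TF classes; it assumes tensorflow_probability is installed, and the checkpoint name is illustrative:

        import pandas as pd
        from transformers import TapasTokenizer, TFTapasForQuestionAnswering

        tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-finetuned-wtq")
        model = TFTapasForQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")

        table = pd.DataFrame({"Actor": ["Brad Pitt", "Leonardo Di Caprio"], "Age": ["56", "45"]})
        inputs = tokenizer(table=table, queries=["How old is Brad Pitt?"],
                           padding="max_length", return_tensors="tf")
        outputs = model(**inputs)

        # Post-processing was converted to numpy in this PR, so it accepts TF outputs as well.
        coords, agg = tokenizer.convert_logits_to_predictions(
            inputs, outputs.logits, outputs.logits_aggregation
        )
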
  2. 19 Nov, 2021 2 commits
  3. 18 Nov, 2021 1 commit
    • Add ImageGPT (#14240) · da36c557
      NielsRogge authored
      * First draft
      
      * More improvements
      
      * Improve conversion script
      
      * Fix init weights for layer norm
      
      * Fix correct model for conversion script
      
      * Don't tie input and output embeddings
      
      * Add print statements for debugging
      
      * Add print statements for debugging
      
      * Fix vocab size of model
      
      * Improve documentation, remove fast tokenizer
      
      * Add ImageGPTForImageClassification, improve docs
      
      * Fix docs issue
      
      * Set verbosity level back to info
      
      * Improve tests
      
      * Fix tests and add figure
      
      * Delete tokenizer file
      
      * Remove ImageGPTTokenizer from init files
      
      * Remove ImageGPTLayer from init files
      
      * Remove ImageGPT tokenizer from docs
      
      * First draft of ImageGPTFeatureExtractor
      
      * Fix typo
      
      * Fix bug
      
      * More improvements
      
      * Apply suggestions from code review, add tests for feature extractor
      
      * Fix layernorm
      
      * Update save_pretrained method
      
      * Fix issue
      
      * Make all tests of ImageGPTFeatureExtractor pass
      
      * Update code examples
      
      * Rename model inputs to pixel_values
      
      * Improve code examples
      
      * Update init_weights to post_init
      
      * Fix post_init
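
      A hedged sketch of the new feature extractor and classification head; the checkpoint name and placeholder image are illustrative, and the classification head is newly initialized for a base checkpoint:

        from PIL import Image
        from transformers import ImageGPTFeatureExtractor, ImageGPTForImageClassification

        feature_extractor = ImageGPTFeatureExtractor.from_pretrained("openai/imagegpt-small")
        model = ImageGPTForImageClassification.from_pretrained("openai/imagegpt-small")

        image = Image.new("RGB", (32, 32))  # placeholder image
        # The feature extractor maps pixels to color-cluster tokens rather than raw pixel values.
        inputs = feature_extractor(images=image, return_tensors="pt")
        outputs = model(**inputs)
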
  4. 09 Nov, 2021 2 commits
    • Add TFViTModel (#13778) · be4a6c64
      Yih-Dar authored
      
      
      * Start the work for TFViTModel
      
      * Convert to TF code - need to check in the follow up commits
      
      * Clean up model code
      
      * Expose TFViTModel
      
      * make style
      
      * make quality
      
      * Add test
      
      * make style & quality
      
      * Fix some imports
      
      * fix wrong usage - *kwargs => **kwargs
      
      * Fix Conv2D weight loading (PT->TF) issue
      
      * Add tests for images with different sizes + fix model
      
      * Fix some common tests for TFViTModel
      
      * Use inputs instead of input_ids in test_compile_tf_model
      
      * Add a comment about transpose and Conv2D in convert_tf_weight_name_to_pt_weight_name
      
      * Avoid transpose in TFViT call
      
      * Fix Conv2D issue in load_tf2_weights_in_pytorch_model
      
      * Use tf.keras.layers.Conv2D instead of tf.nn.conv2d
      
      * Using simpler heuristic to detect Conv2D layer
      
      * Change convert_tf_weight_name_to_pt_weight_name to return TransposeType
      
      * Check tf_weight_shape is not None before using it
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * fix missing comma
      
      * fix input dtype
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
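
      A hedged sketch of the new TF ViT encoder; the checkpoint name and placeholder image are illustrative, and from_pt=True exercises the PT->TF Conv2D weight conversion this PR fixes:

        from PIL import Image
        from transformers import ViTFeatureExtractor, TFViTModel

        feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
        model = TFViTModel.from_pretrained("google/vit-base-patch16-224-in21k", from_pt=True)

        image = Image.new("RGB", (224, 224))  # placeholder image
        inputs = feature_extractor(images=image, return_tensors="tf")
        outputs = model(**inputs)
        last_hidden_state = outputs.last_hidden_state  # (batch, 197, 768) for a 224x224 input
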
    • Add FlaxVisionEncoderDecoderModel (#13359) · 95b3ec3b
      Yih-Dar authored
      
      
      * Start the work on FlaxVisionEncoderDecoderModel
      
      * Add FlaxVisionEncoderDecoderModel
      
      * Add VisionEncoderDecoderConfig
      
      * Make FlaxVisionEncoderDecoderModel visible to transformers
      
      * Add test
      
      * Fix wrong getattr usage
      
      * Fix tests
      
      * Add FlaxAutoModelForVision2Seq
      
      * Expose FLAX_MODEL_FOR_VISION_2_SEQ_MAPPING
      
      * clean-up
      
      * add integration test
      
      * update expected logits
      
      * update expected scores
      
      * Add ViT2GPT2ModelIntegrationTest + some cleaning
      
      * Add projection layer + PT/Flax equivalence tests
      
      * Fix import
      
      * minor changes
      
      * make test slow again
      
      * Apply suggestions
      
      * Add modeling_flax_vision_encoder_decoder to _ignore_modules in get_model_modules()
      
      * fix copies
      
      * Apply suggestions from code review
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * split long strings in multiple lines
      
      * decoder_input_ids can't be None
      
      * Add back test_configuration_tie
      
      * Remove attention_mask parameter
      
      * fix test - encoder_last_hidden_state should be encoder_outputs.last_hidden_state instead of the projected vector
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Remove more encoder_attention_mask
      
      * remove encoder_attention_mask when calling self.decode (in FlaxVisionEncoderDecoderModule)
      
      * Fix style + pass 1s instead of None as encoder_attention_mask
      
      * fix init_weights
      
      * pass None for encoder_attention_mask
      
      * pass 1s instead of None as encoder_attention_mask
      
      * Fix doc style
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
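
      A hedged sketch pairing a ViT encoder with a GPT2 decoder, as in the ViT2GPT2 integration test; the checkpoint names and placeholder image are illustrative, and the cross-attention weights are newly initialized:

        from PIL import Image
        from transformers import FlaxVisionEncoderDecoderModel, ViTFeatureExtractor, GPT2Tokenizer

        model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
            "google/vit-base-patch16-224-in21k", "gpt2"
        )
        feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
        tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

        pixel_values = feature_extractor(images=Image.new("RGB", (224, 224)), return_tensors="np").pixel_values
        decoder_input_ids = tokenizer("an image of", return_tensors="np").input_ids  # can't be None
        outputs = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids)
        logits = outputs.logits
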
  5. 03 Nov, 2021 1 commit
  6. 29 Oct, 2021 1 commit
    • Add `BlenderbotTokenizerFast` (#13720) · d37f1fb8
      Daniel Stancl authored
      * Add support for the fast (Rust) implementation of BlenderbotTokenizer
      
      * Fix a converter and a typo in a doc
      
      * Apply patil-suraj's suggestion
      
      * (Nitpick) Fast tokenization -> Fast Tokenization in doc
      
      * Apply SaulLu's suggestion
      
      * Apply Narsil's suggestion to fix test pipelines
      
      * Add encoder_no_repeat_ngram_size according to Narsil's suggestion
      
      * Revert the last (unnecessary) commit
      
      * Override pipeline config for Blenderbot to allow for larger pos. emb.
      
      * make fix-copies
  7. 28 Oct, 2021 2 commits
    • Release v4.12.0 · 62bf5366
      Lysandre authored
    • Add SegFormer (#14019) · 1dc96a76
      NielsRogge authored
      
      
      * First draft
      
      * Make style & quality
      
      * Improve conversion script
      
      * Add print statement to see actual slice
      
      * Make absolute tolerance smaller
      
      * Fix image classification models
      
      * Add post_process_semantic method
      
      * Disable padding
      
      * Improve conversion script
      
      * Rename to ForSemanticSegmentation, add integration test, remove post_process methods
      
      * Improve docs
      
      * Fix code quality
      
      * Fix feature extractor tests
      
      * Fix tests for image classification model
      
      * Delete file
      
      * Add is_torch_available to feature extractor
      
      * Improve documentation of feature extractor methods
      
      * Apply suggestions from @sgugger's code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Apply some more suggestions of code review
      
      * Rebase with master
      
      * Fix rebase issues
      
      * Make sure model only outputs hidden states when the user wants to
      
      * Apply suggestions from code review
      
      * Add pad method
      
      * Support padding of 2d images
      
      * Add print statement
      
      * Add print statement
      
      * Move padding method to SegformerFeatureExtractor
      
      * Fix issue
      
      * Add casting of segmentation maps
      
      * Add test for padding
      
      * Add small note about padding
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
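
      A hedged semantic-segmentation sketch with the new classes; the checkpoint name and placeholder image are illustrative:

        from PIL import Image
        from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

        feature_extractor = SegformerFeatureExtractor.from_pretrained(
            "nvidia/segformer-b0-finetuned-ade-512-512"
        )
        model = SegformerForSemanticSegmentation.from_pretrained(
            "nvidia/segformer-b0-finetuned-ade-512-512"
        )

        image = Image.new("RGB", (512, 512))  # placeholder image
        inputs = feature_extractor(images=image, return_tensors="pt")
        outputs = model(**inputs)
        logits = outputs.logits  # (batch, num_labels, height/4, width/4)
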
  8. 26 Oct, 2021 1 commit
  9. 18 Oct, 2021 1 commit
  10. 15 Oct, 2021 1 commit
  11. 13 Oct, 2021 1 commit
    • Add TrOCR + VisionEncoderDecoderModel (#13874) · 408b2d2b
      NielsRogge authored
      * First draft
      
      * Update self-attention of RoBERTa as proposition
      
      * Improve conversion script
      
      * Add TrOCR decoder-only model
      
      * More improvements
      
      * Make forward pass with pretrained weights work
      
      * More improvements
      
      * Some more improvements
      
      * More improvements
      
      * Make conversion work
      
      * Clean up print statements
      
      * Add documentation, processor
      
      * Add test files
      
      * Small improvements
      
      * Some more improvements
      
      * Make fix-copies, improve docs
      
      * Make all vision encoder decoder model tests pass
      
      * Make conversion script support other models
      
      * Update URL for OCR image
      
      * Update conversion script
      
      * Fix style & quality
      
      * Add support for the large-printed model
      
      * Fix some issues
      
      * Add print statement for debugging
      
      * Add print statements for debugging
      
      * Make possible fix for sinusoidal embedding
      
      * Further debugging
      
      * Potential fix v2
      
      * Add more print statements for debugging
      
      * Add more print statements for debugging
      
      * Debug more
      
      * Comment out print statements
      
      * Make conversion of large printed model possible, address review comments
      
      * Make it possible to convert the stage1 checkpoints
      
      * Clean up code, apply suggestions from code review
      
      * Apply suggestions from code review, use Microsoft models in tests
      
      * Rename encoder_hidden_size to cross_attention_hidden_size
      
      * Improve docs
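
      A hedged OCR sketch with TrOCR wrapped in the new VisionEncoderDecoderModel; the checkpoint name is illustrative, and the blank placeholder image stands in for a scanned line of handwritten text:

        from PIL import Image
        from transformers import TrOCRProcessor, VisionEncoderDecoderModel

        processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
        model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

        image = Image.new("RGB", (384, 384))  # placeholder; use an image of a handwritten text line
        pixel_values = processor(images=image, return_tensors="pt").pixel_values
        generated_ids = model.generate(pixel_values)
        text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
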
  12. 12 Oct, 2021 1 commit
    • Add TFEncoderDecoderModel + Add cross-attention to some TF models (#13222) · 8b240a06
      Yih-Dar authored
      
      
      * Add cross attentions to TFGPT2Model
      
      * Add TFEncoderDecoderModel
      
      * Add TFBaseModelOutputWithPoolingAndCrossAttentions
      
      * Add cross attentions to TFBertModel
      
      * Fix past or past_key_values argument issue
      
      * Fix generation
      
      * Fix save and load
      
      * Add some checks and comments
      
      * Clean the code that deals with past keys/values
      
      * Add kwargs to processing_inputs
      
      * Add serving_output to TFEncoderDecoderModel
      
      * Some cleaning + fix use_cache value issue
      
      * Fix tests + add bert2bert/bert2gpt2 tests
      
      * Fix more tests
      
      * Ignore crossattention.bias when loading GPT2 weights into TFGPT2
      
      * Fix return_dict_in_generate in tf generation
      
      * Fix is_token_logit_eos_token bug in tf generation
      
      * Finalize the tests after fixing some bugs
      
      * Fix another is_token_logit_eos_token bug in tf generation
      
      * Add/Update docs
      
      * Add TFBertEncoderDecoderModelTest
      
      * Clean test script
      
      * Add TFEncoderDecoderModel to the library
      
      * Add cross attentions to TFRobertaModel
      
      * Add TFRobertaEncoderDecoderModelTest
      
      * make style
      
      * Change the way of position_ids computation
      
      * bug fix
      
      * Fix copies in tf_albert
      
      * Remove some copied from and apply some fix-copies
      
      * Remove some copied
      
      * Add cross attentions to some other TF models
      
      * Remove encoder_hidden_states from TFLayoutLMModel.call for now
      
      * Make style
      
      * Fix TFRemBertForCausalLM
      
      * Revert the change to longformer + Remove copies
      
      * Revert the change to albert and convbert + Remove copies
      
      * make quality
      
      * make style
      
      * Add TFRembertEncoderDecoderModelTest
      
      * make quality and fix-copies
      
      * test TFRobertaForCausalLM
      
      * Fixes for failed tests
      
      * Fixes for failed tests
      
      * fix more tests
      
      * Fixes for failed tests
      
      * Fix Auto mapping order
      
      * Fix TFRemBertEncoder return value
      
      * fix tf_rembert
      
      * Check copies are OK
      
      * Fix missing TFBaseModelOutputWithPastAndCrossAttentions is not defined
      
      * Add TFEncoderDecoderModelSaveLoadTests
      
      * fix tf weight loading
      
      * check the change of use_cache
      
      * Revert the change
      
      * Add missing test_for_causal_lm for TFRobertaModelTest
      
      * Try cleaning past
      
      * fix _reorder_cache
      
      * Revert some files to original versions
      
      * Keep as many copies as possible
      
      * Apply suggested changes - Use raise ValueError instead of assert
      
      * Move import to top
      
      * Fix wrong require_torch
      
      * Replace more assert by raise ValueError
      
      * Add test_pt_tf_model_equivalence (the test won't pass for now)
      
      * add test for loading/saving
      
      * finish
      
      * finish
      
      * Remove test_pt_tf_model_equivalence
      
      * Update tf modeling template
      
      * Remove pooling, added in the prev. commit, from MainLayer
      
      * Update tf modeling test template
      
      * Move inputs["use_cache"] = False to modeling_tf_utils.py
      
      * Fix torch.Tensor in the comment
      
      * fix use_cache
      
      * Fix missing use_cache in ElectraConfig
      
      * Add a note to from_pretrained
      
      * Fix style
      
      * Change test_encoder_decoder_save_load_from_encoder_decoder_from_pt
      
      * Fix TFMLP (in TFGPT2) activation issue
      
      * Fix None past_key_values value in serving_output
      
      * Don't call get_encoderdecoder_model in TFEncoderDecoderModelTest.test_configuration_tie until we have a TF checkpoint on Hub
      
      * Apply review suggestions - style for cross_attns in serving_output
      
      * Apply review suggestions - change assert + docstrings
      
      * break the error message to respect the char limit
      
      * deprecate the argument past
      
      * fix docstring style
      
      * Update the encoder-decoder rst file
      
      * fix Unknown interpreted text role "method"
      
      * fix typo
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
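
      A hedged sketch of assembling a TF bert2gpt2 model from separate checkpoints, as in the new tests; the checkpoint names and input sentence are illustrative, and the cross-attention weights are newly initialized:

        from transformers import AutoTokenizer, TFEncoderDecoderModel

        model = TFEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2")
        tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

        inputs = tokenizer("The tower is 324 metres tall.", return_tensors="tf")
        outputs = model(input_ids=inputs.input_ids, decoder_input_ids=inputs.input_ids)
        logits = outputs.logits
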
  13. 29 Sep, 2021 1 commit
  14. 27 Sep, 2021 1 commit
  15. 22 Sep, 2021 1 commit
  16. 21 Sep, 2021 1 commit
    • beit-flax (#13515) · a2dec768
      Kamal Raj authored
      * beit-flax
      
      * updated FLAX_BEIT_MLM_DOCSTRING
      
      * removed bool_masked_pos from classification
      
      * updated Copyright
      
      * code refactoring: x -> embeddings
      
      * updated test: rm from_pt
      
      * Update docs/source/model_doc/beit.rst
      
      * model code dtype updates and
      other changes according to review
      
      * relative_position_bias
      revert back to pytorch design
  17. 20 Sep, 2021 1 commit
    • Add FNet (#13045) · d8049331
      Gunjan Chhablani authored
      
      
      * Init FNet
      
      * Update config
      
      * Fix config
      
      * Update model classes
      
      * Update tokenizers to use sentencepiece
      
      * Fix errors in model
      
      * Fix defaults in config
      
      * Remove position embedding type completely
      
      * Fix typo and take only real numbers
      
      * Fix type vocab size in configuration
      
      * Add projection layer to embeddings
      
      * Fix position ids bug in embeddings
      
      * Add minor changes
      
      * Add conversion script and remove CausalLM vestiges
      
      * Fix conversion script
      
      * Fix conversion script
      
      * Remove CausalLM Test
      
      * Update checkpoint names to dummy checkpoints
      
      * Add tokenizer mapping
      
      * Fix modeling file and corresponding tests
      
      * Add tokenization test file
      
      * Add PreTraining model test
      
      * Make style and quality
      
      * Make tokenization base tests work
      
      * Update docs
      
      * Add FastTokenizer tests
      
      * Fix fast tokenizer special tokens
      
      * Fix style and quality
      
      * Remove load_tf_weights vestiges
      
      * Add FNet to  main README
      
      * Fix configuration example indentation
      
      * Comment tokenization slow test
      
      * Fix style
      
      * Add changes from review
      
      * Fix style
      
      * Remove bos and eos tokens from tokenizers
      
      * Add tokenizer slow test, TPU transforms, NSP
      
      * Add scipy check
      
      * Add scipy availabilty check to test
      
      * Fix tokenizer and use correct inputs
      
      * Remove remaining TODOs
      
      * Fix tests
      
      * Fix tests
      
      * Comment Fourier Test
      
      * Uncomment Fourier Test
      
      * Change to google checkpoint
      
      * Add changes from review
      
      * Fix activation function
      
      * Fix model integration test
      
      * Add more integration tests
      
      * Add comparison steps to MLM integration test
      
      * Fix style
      
      * Add masked tokenization fix
      
      * Improve mask tokenization fix
      
      * Fix index docs
      
      * Add changes from review
      
      * Fix issue
      
      * Fix failing import in test
      
      * some more fixes
      
      * correct fast tokenizer
      
      * finalize
      
      * make style
      
      * Remove additional tokenization logic
      
      * Set do_lower_case to False
      
      * Allow keeping accents
      
      * Fix tokenization test
      
      * Fix FNet Tokenizer Fast
      
      * fix tests
      
      * make style
      
      * Add tips to FNet docs
      Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
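
      A hedged masked-LM inference sketch with the new FNet classes; the checkpoint name and example sentence are illustrative:

        import torch
        from transformers import FNetTokenizer, FNetForMaskedLM

        tokenizer = FNetTokenizer.from_pretrained("google/fnet-base")
        model = FNetForMaskedLM.from_pretrained("google/fnet-base")

        inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits

        mask_index = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
        print(tokenizer.decode(logits[0, mask_index].argmax(-1)))
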
  18. 14 Sep, 2021 1 commit
    • [Flax] Addition of FlaxPegasus (#13420) · c1e47bf4
      Bhadresh Savani authored
      
      
      * added initial files
      
      * fixes pipeline
      
      * fixes style and quality
      
      * fixes doc issue and positional encoding
      
      * fixes layer norm and test
      
      * fixes quality issue
      
      * fixes code quality
      
      * removed extra layer norm
      
      * added layer norm back in encoder and decoder
      
      * added more code copy quality checks
      
      * update tests
      
      * Apply suggestions from code review
      
      * fix import
      
      * fix test
      Co-authored-by: patil-suraj <surajp815@gmail.com>
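
      A hedged summarization sketch for the new Flax port; the checkpoint name and input text are illustrative, and from_pt=True assumes only PyTorch weights are available:

        from transformers import PegasusTokenizer, FlaxPegasusForConditionalGeneration

        tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")
        model = FlaxPegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum", from_pt=True)

        text = "PG&E scheduled the blackouts in response to forecasts for high winds."
        inputs = tokenizer([text], return_tensors="np")
        summary_ids = model.generate(inputs.input_ids, num_beams=2, max_length=32).sequences
        print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
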
  19. 10 Sep, 2021 1 commit
    • [Large PR] Entire rework of pipelines. (#13308) · c63fcabf
      Nicolas Patry authored
      
      
      * Enabling dataset iteration on pipelines.
      
      Enabling dataset iteration on pipelines.
      
      Unifying parameters under `set_parameters` function.
      
      Small fix.
      
      Last fixes after rebase
      
      Remove print.
      
      Fixing text2text `generate_kwargs`
      
      No more `self.max_length`.
      
      Fixing tf only conversational.
      
      Consistency in start/stop index over TF/PT.
      
       Speeding up drastically on TF (nasty bug where max_length would
       increase a ton).
      
       Adding test for support for non-fast tokenizers.
       
       Fixing GPU usage on zero-shot.
       
       Fix working on TF.
      
      Update src/transformers/pipelines/base.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      Update src/transformers/pipelines/base.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      Small cleanup.
      
      Remove all asserts + simple format.
      
      * Fixing audio-classification for large PR.
      
      * Overly explicity null checking.
      
      * Encapsulating GPU/CPU pytorch manipulation directly within `base.py`.
      
      * Removed internal state for parameters of the  pipeline.
      
      Instead of overriding implicitly internal state, we moved
      to real named arguments on every `preprocess`, `_forward`,
      `postprocess` function.
      
      Instead `_sanitize_parameters` will be used to split all kwargs
      of both __init__ and __call__ into the 3 kinds of named parameters.
      
      * Move import warnings.
      
      * Small fixes.
      
      * Quality.
      
      * Another small fix, using the CI to debug faster.
      
      * Last fixes.
      
      * Last fix.
      
      * Small cleanup of tensor moving.
      
      * is not None.
      
      * Adding a bunch of docs + a iteration test.
      
      * Fixing doc style.
      
      * KeyDataset = None guard.
      
      * Removing the CUDA test for pipelines (was testing).
      
      * Even more simple iteration test.
      
      * Correct import.
      
      * Long day.
      
      * Fixes in docs.
      
      * [WIP] migrating object detection.
      
      * Fixed the target_size bug.
      
      * Fixup.
      
      * Bad variable name.
      
      * Fixing `ensure_on_device` respects original ModelOutput.
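
      A minimal sketch of the parameter-splitting contract described above, assuming a PyTorch backend; the class name and the top_k parameter are illustrative, not part of this PR:

        from transformers import Pipeline

        class MyPipeline(Pipeline):
            # Split __init__/__call__ kwargs into the three kinds of named parameters.
            def _sanitize_parameters(self, top_k=None, **kwargs):
                postprocess_kwargs = {}
                if top_k is not None:
                    postprocess_kwargs["top_k"] = top_k
                return {}, {}, postprocess_kwargs

            def preprocess(self, inputs):
                return self.tokenizer(inputs, return_tensors=self.framework)

            def _forward(self, model_inputs):
                return self.model(**model_inputs)

            def postprocess(self, model_outputs, top_k=5):
                best = model_outputs.logits.softmax(-1)[0].topk(top_k)
                return [{"label": self.model.config.id2label[i.item()], "score": s.item()}
                        for s, i in zip(best.values, best.indices)]

      Because preprocess, _forward and postprocess only see named arguments, the pipeline no longer mutates internal state between calls and can iterate over datasets safely.
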
  20. 08 Sep, 2021 1 commit
  21. 01 Sep, 2021 2 commits
  22. 31 Aug, 2021 3 commits
  23. 30 Aug, 2021 3 commits
    • albert flax (#13294) · 98e409ab
      Kamal Raj authored
      * albert flax
      
      * year -> 2021
      
      * docstring updated for flax
      
      * removed head_mask
      
      * removed from_pt
      
      * removed passing attention_mask to embedding layer
    • distilbert-flax (#13324) · 774760e6
      Kamal Raj authored
      * distilbert-flax
      
      * added missing self
      
      * docs fix
      
      * removed tied kernel extra init
      
      * updated docs
      
      * x -> hidden states
      
      * removed head_mask
      
      * removed from_pt, +FLAX
      
      * updated year
    • Add LayoutLMv2 + LayoutXLM (#12604) · b6ddb08a
      NielsRogge authored
      
      
      * First commit
      
      * Make style
      
      * Fix dummy objects
      
      * Add Detectron2 config
      
      * Add LayoutLMv2 pooler
      
      * More improvements, add documentation
      
      * More improvements
      
      * Add model tests
      
      * Add clarification regarding image input
      
      * Improve integration test
      
      * Fix bug
      
      * Fix another bug
      
      * Fix another bug
      
      * Fix another bug
      
      * More improvements
      
      * Make more tests pass
      
      * Make more tests pass
      
      * Improve integration test
      
      * Remove gradient checkpointing and add head masking
      
      * Add integration test
      
      * Add LayoutLMv2ForSequenceClassification to the tests
      
      * Add LayoutLMv2ForQuestionAnswering
      
      * More improvements
      
      * More improvements
      
      * Small improvements
      
      * Fix _LazyModule
      
      * Fix fast tokenizer
      
      * Move sync_batch_norm to a separate method
      
      * Replace dummies by requires_backends
      
      * Move calculation of visual bounding boxes to separate method + update README
      
      * Add models to main init
      
      * First draft
      
      * More improvements
      
      * More improvements
      
      * More improvements
      
      * More improvements
      
      * More improvements
      
      * Remove is_split_into_words
      
      * More improvements
      
      * Simplify tesseract - no use of pandas anymore
      
      * Add LayoutLMv2Processor
      
      * Update is_pytesseract_available
      
      * Fix bugs
      
      * Improve feature extractor
      
      * Fix bug
      
      * Add print statement
      
      * Add truncation of bounding boxes
      
      * Add tests for LayoutLMv2FeatureExtractor and LayoutLMv2Tokenizer
      
      * Improve tokenizer tests
      
      * Make more tokenizer tests pass
      
      * Make more tests pass, add integration tests
      
      * Finish integration tests
      
      * More improvements
      
      * More improvements - update API of the tokenizer
      
      * More improvements
      
      * Remove support for VQA training
      
      * Remove some files
      
      * Improve feature extractor
      
      * Improve documentation and one more tokenizer test
      
      * Make quality and small docs improvements
      
      * Add batched tests for LayoutLMv2Processor, remove fast tokenizer
      
      * Add truncation of labels
      
      * Apply suggestions from code review
      
      * Improve processor tests
      
      * Fix failing tests and add suggestion from code review
      
      * Fix tokenizer test
      
      * Add detectron2 CI job
      
      * Simplify CI job
      
      * Comment out non-detectron2 jobs and specify number of processes
      
      * Add pip install torchvision
      
      * Add durations to see which tests are slow
      
      * Fix tokenizer test and make model tests smaller
      
      * First draft
      
      * Use setattr
      
      * Possible fix
      
      * Proposal with configuration
      
      * First draft of fast tokenizer
      
      * More improvements
      
      * Enable fast tokenizer tests
      
      * Make more tests pass
      
      * Make more tests pass
      
      * More improvements
      
      * Add padding to fast tokenizer
      
      * Make more tests pass
      
      * Make more tests pass
      
      * Make all tests pass for fast tokenizer
      
      * Make fast tokenizer support overflowing boxes and labels
      
      * Add support for overflowing_labels to slow tokenizer
      
      * Add support for fast tokenizer to the processor
      
      * Update processor tests for both slow and fast tokenizers
      
      * Add head models to model mappings
      
      * Make style & quality
      
      * Remove Detectron2 config file
      
      * Add configurable option to label all subwords
      
      * Fix test
      
      * Skip visual segment embeddings in test
      
      * Use ResNet-18 backbone in tests instead of ResNet-101
      
      * Proposal
      
      * Re-enable all jobs on CI
      
      * Fix installation of tesseract
      
      * Fix failing test
      
      * Fix index table
      
      * Add LayoutXLM doc page, first draft of code examples
      
      * Improve documentation a lot
      
      * Update expected boxes for Tesseract 4.0.0 beta
      
      * Use offsets to create labels instead of checking if they start with ##
      
      * Update expected boxes for Tesseract 4.1.1
      
      * Fix conflict
      
      * Make variable names cleaner, add docstring, add link to notebooks
      
      * Revert "Fix conflict"
      
      This reverts commit a9b46ce9afe47ebfcfe7b45e6a121d49e74ef2c5.
      
      * Revert to make integration test pass
      
      * Apply suggestions from @LysandreJik's review
      
      * Address @patrickvonplaten's comments
      
      * Remove fixtures DocVQA in favor of dataset on the hub
      Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
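
      A hedged sketch of the new processor, which runs Tesseract OCR via the feature extractor and then tokenizes words and boxes; it assumes detectron2, torchvision and pytesseract are installed, and the checkpoint name and file path are placeholders:

        from PIL import Image
        from transformers import LayoutLMv2Processor, LayoutLMv2ForSequenceClassification

        processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
        model = LayoutLMv2ForSequenceClassification.from_pretrained("microsoft/layoutlmv2-base-uncased")

        image = Image.open("scanned_document.png").convert("RGB")  # placeholder path
        encoding = processor(image, return_tensors="pt")  # runs OCR, builds words + boxes + image
        outputs = model(**encoding)
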
  24. 24 Aug, 2021 1 commit
  25. 23 Aug, 2021 1 commit
    • Make Flax GPT2 working with cross attention (#13008) · 2e20c0f3
      Yih-Dar authored
      
      
      * make flax gpt2 working with cross attention
      
      * Remove encoder->decoder projection layer
      
      * A draft (incomplete) for FlaxEncoderDecoderModel
      
      * Add the method from_encoder_decoder_pretrained + the docstrings
      
      * Fix the mistakes of using EncoderDecoderModel
      
      * Fix style
      
      * Add FlaxEncoderDecoderModel to the library
      
      * Fix cyclic imports
      
      * Add FlaxEncoderDecoderModel to modeling_flax_auto.py
      
      * Remove question comments
      
      * add tests for FlaxEncoderDecoderModel
      
      * add flax_encoder_decoder to the lists of ignored entries in check_repo.py
      
      * fix missing required positional arguments
      
      * Remove **kwargs when creating FlaxEncoderDecoderModel in from_encoder_decoder_pretrained()
      
      Also fix generation eos/pad tokens issue
      
      * Fix: Use sequences from the generated_output
      
      * Change a check from assert to raise ValueError
      
      * Fix examples and token ids issues
      
      * Fix missing all_cross_attentions when outputting tuple in modeling_gpt2
      
      * Remove the changes in configuration docstrings.
      
      * allow for bert 2 gpt2
      
      * make fix-copies
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Change remaining examples to bert2gpt2
      
      * Change the test to Bert2GPT2
      
      * Fix examples
      
      * Fix import
      
      * Fix unpack bug
      
      * Rename to FlaxEncoderDecoderModelTest and change the test to bert2gpt2
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Fix: NotImplentedError -> NotImplementedError
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * up
      
      * finalize
      Co-authored-by: ydshieh <ydshieh@user.noreply>
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
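
      A hedged sketch of the bert2gpt2 pairing this PR enables in Flax; the checkpoint names and input strings are illustrative, and the cross-attention weights are newly initialized:

        from transformers import AutoTokenizer, FlaxEncoderDecoderModel

        model = FlaxEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2")
        encoder_tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
        decoder_tokenizer = AutoTokenizer.from_pretrained("gpt2")

        input_ids = encoder_tokenizer("A long article to summarize.", return_tensors="np").input_ids
        decoder_input_ids = decoder_tokenizer("Summary:", return_tensors="np").input_ids
        outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
        logits = outputs.logits
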
  26. 17 Aug, 2021 1 commit
  27. 16 Aug, 2021 1 commit
  28. 12 Aug, 2021 1 commit
  29. 04 Aug, 2021 2 commits
    • Add BEiT (#12994) · 83e5a106
      NielsRogge authored
      
      
      * First pass
      
      * Make conversion script work
      
      * Improve conversion script
      
      * Fix bug, conversion script working
      
      * Improve conversion script, implement BEiTFeatureExtractor
      
      * Make conversion script work based on URL
      
      * Improve conversion script
      
      * Add tests, add documentation
      
      * Fix bug in conversion script
      
      * Fix another bug
      
      * Add support for converting masked image modeling model
      
      * Add support for converting masked image modeling
      
      * Fix bug
      
      * Add print statement for debugging
      
      * Fix another bug
      
      * Make conversion script finally work for masked image modeling models
      
      * Move id2label for datasets to JSON files on the hub
      
      * Make sure id's are read in as integers
      
      * Add integration tests
      
      * Make style & quality
      
      * Fix test, add BEiT to README
      
      * Apply suggestions from @sgugger's review
      
      * Apply suggestions from code review
      
      * Make quality
      
      * Replace nielsr by microsoft in tests, add docs
      
      * Rename BEiT to Beit
      
      * Minor fix
      
      * Fix docs of BeitForMaskedImageModeling
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
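
      A hedged image-classification sketch with the new classes; the checkpoint name and placeholder image are illustrative, and id2label is read from the JSON mapping this PR moved to the hub:

        from PIL import Image
        from transformers import BeitFeatureExtractor, BeitForImageClassification

        feature_extractor = BeitFeatureExtractor.from_pretrained("microsoft/beit-base-patch16-224")
        model = BeitForImageClassification.from_pretrained("microsoft/beit-base-patch16-224")

        image = Image.new("RGB", (224, 224))  # placeholder image
        inputs = feature_extractor(images=image, return_tensors="pt")
        logits = model(**inputs).logits
        print(model.config.id2label[int(logits.argmax(-1))])
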
    • [Flax] Correctly Add MT5 (#12988) · a317e6c3
      Patrick von Platen authored
      
      
      * finish PR
      
      * finish mt5
      
      * push
      
      * up
      
      * Update tests/test_modeling_flax_mt5.py
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: Suraj Patil <surajp815@gmail.com>