  1. 26 Oct, 2021 1 commit
  2. 18 Oct, 2021 3 commits
  3. 15 Oct, 2021 1 commit
  4. 14 Oct, 2021 1 commit
  5. 13 Oct, 2021 1 commit
    • Add TrOCR + VisionEncoderDecoderModel (#13874) · 408b2d2b
      NielsRogge authored
      * First draft
      
      * Update self-attention of RoBERTa as proposition
      
      * Improve conversion script
      
      * Add TrOCR decoder-only model
      
      * More improvements
      
      * Make forward pass with pretrained weights work
      
      * More improvements
      
      * Some more improvements
      
      * More improvements
      
      * Make conversion work
      
      * Clean up print statements
      
      * Add documentation, processor
      
      * Add test files
      
      * Small improvements
      
      * Some more improvements
      
      * Make fix-copies, improve docs
      
      * Make all vision encoder decoder model tests pass
      
      * Make conversion script support other models
      
      * Update URL for OCR image
      
      * Update conversion script
      
      * Fix style & quality
      
      * Add support for the large-printed model
      
      * Fix some issues
      
      * Add print statement for debugging
      
      * Add print statements for debugging
      
      * Make possible fix for sinusoidal embedding
      
      * Further debugging
      
      * Potential fix v2
      
      * Add more print statements for debugging
      
      * Add more print statements for debugging
      
      * Debug more
      
      * Comment out print statements
      
      * Make conversion of large printed model possible, address review comments
      
      * Make it possible to convert the stage1 checkpoints
      
      * Clean up code, apply suggestions from code review
      
      * Apply suggestions from code review, use Microsoft models in tests
      
      * Rename encoder_hidden_size to cross_attention_hidden_size
      
      * Improve docs
      408b2d2b
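The commits above pair a ViT image encoder with the new TrOCR text decoder through `VisionEncoderDecoderModel`. A minimal sketch with tiny, randomly initialized configs (the sizes below are illustrative only; the released checkpoints such as the "large-printed" model mentioned above are much larger):

```python
import torch
from transformers import (TrOCRConfig, ViTConfig,
                          VisionEncoderDecoderConfig, VisionEncoderDecoderModel)

# Tiny illustrative dimensions, not the released checkpoints.
encoder_cfg = ViTConfig(hidden_size=32, num_hidden_layers=2,
                        num_attention_heads=2, intermediate_size=64,
                        image_size=32, patch_size=8)
decoder_cfg = TrOCRConfig(d_model=32, decoder_layers=2,
                          decoder_attention_heads=2, decoder_ffn_dim=64)

# Wires the decoder's cross-attention to the encoder's hidden states.
config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(encoder_cfg, decoder_cfg)
model = VisionEncoderDecoderModel(config=config)

pixel_values = torch.randn(1, 3, 32, 32)        # one fake 32x32 RGB image
decoder_input_ids = torch.tensor([[0, 5, 6]])   # arbitrary token ids
outputs = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids)
```

In practice one loads a pretrained checkpoint (e.g. via `VisionEncoderDecoderModel.from_pretrained`) together with `TrOCRProcessor` instead of building random weights.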
  6. 12 Oct, 2021 1 commit
    • Add TFEncoderDecoderModel + Add cross-attention to some TF models (#13222) · 8b240a06
      Yih-Dar authored
      
      
      * Add cross attentions to TFGPT2Model
      
      * Add TFEncoderDecoderModel
      
      * Add TFBaseModelOutputWithPoolingAndCrossAttentions
      
      * Add cross attentions to TFBertModel
      
      * Fix past or past_key_values argument issue
      
      * Fix generation
      
      * Fix save and load
      
      * Add some checks and comments
      
      * Clean the code that deals with past keys/values
      
      * Add kwargs to processing_inputs
      
      * Add serving_output to TFEncoderDecoderModel
      
      * Some cleaning + fix use_cache value issue
      
      * Fix tests + add bert2bert/bert2gpt2 tests
      
      * Fix more tests
      
      * Ignore crossattention.bias when loading GPT2 weights into TFGPT2
      
      * Fix return_dict_in_generate in tf generation
      
      * Fix is_token_logit_eos_token bug in tf generation
      
      * Finalize the tests after fixing some bugs
      
      * Fix another is_token_logit_eos_token bug in tf generation
      
      * Add/Update docs
      
      * Add TFBertEncoderDecoderModelTest
      
      * Clean test script
      
      * Add TFEncoderDecoderModel to the library
      
      * Add cross attentions to TFRobertaModel
      
      * Add TFRobertaEncoderDecoderModelTest
      
      * make style
      
      * Change the way of position_ids computation
      
      * bug fix
      
      * Fix copies in tf_albert
      
      * Remove some copied from and apply some fix-copies
      
      * Remove some copied
      
      * Add cross attentions to some other TF models
      
      * Remove encoder_hidden_states from TFLayoutLMModel.call for now
      
      * Make style
      
      * Fix TFRemBertForCausalLM
      
      * Revert the change to longformer + Remove copies
      
      * Revert the change to albert and convbert + Remove copies
      
      * make quality
      
      * make style
      
      * Add TFRembertEncoderDecoderModelTest
      
      * make quality and fix-copies
      
      * test TFRobertaForCausalLM
      
      * Fixes for failed tests
      
      * Fixes for failed tests
      
      * fix more tests
      
      * Fixes for failed tests
      
      * Fix Auto mapping order
      
      * Fix TFRemBertEncoder return value
      
      * fix tf_rembert
      
      * Check copies are OK
      
      * Fix missing TFBaseModelOutputWithPastAndCrossAttentions is not defined
      
      * Add TFEncoderDecoderModelSaveLoadTests
      
      * fix tf weight loading
      
      * check the change of use_cache
      
      * Revert the change
      
      * Add missing test_for_causal_lm for TFRobertaModelTest
      
      * Try cleaning past
      
      * fix _reorder_cache
      
      * Revert some files to original versions
      
      * Keep as many copies as possible
      
      * Apply suggested changes - Use raise ValueError instead of assert
      
      * Move import to top
      
      * Fix wrong require_torch
      
      * Replace more assert by raise ValueError
      
      * Add test_pt_tf_model_equivalence (the test won't pass for now)
      
      * add test for loading/saving
      
      * finish
      
      * finish
      
      * Remove test_pt_tf_model_equivalence
      
      * Update tf modeling template
      
      * Remove pooling, added in the prev. commit, from MainLayer
      
      * Update tf modeling test template
      
      * Move inputs["use_cache"] = False to modeling_tf_utils.py
      
      * Fix torch.Tensor in the comment
      
      * fix use_cache
      
      * Fix missing use_cache in ElectraConfig
      
      * Add a note to from_pretrained
      
      * Fix style
      
      * Change test_encoder_decoder_save_load_from_encoder_decoder_from_pt
      
      * Fix TFMLP (in TFGPT2) activation issue
      
      * Fix None past_key_values value in serving_output
      
      * Don't call get_encoderdecoder_model in TFEncoderDecoderModelTest.test_configuration_tie until we have a TF checkpoint on Hub
      
      * Apply review suggestions - style for cross_attns in serving_output
      
      * Apply review suggestions - change assert + docstrings
      
      * break the error message to respect the char limit
      
      * deprecate the argument past
      
      * fix docstring style
      
      * Update the encoder-decoder rst file
      
      * fix Unknown interpreted text role "method"
      
      * fix typo
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      8b240a06
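`TFEncoderDecoderModel` mirrors the existing PyTorch `EncoderDecoderModel` API: any encoder can be paired with a decoder that has cross-attention layers. A sketch of that shared pattern using the PyTorch class with tiny random configs (bert2gpt2, as in the tests added above; sizes are illustrative):

```python
import torch
from transformers import (BertConfig, GPT2Config,
                          EncoderDecoderConfig, EncoderDecoderModel)

# Tiny illustrative sizes.
enc = BertConfig(hidden_size=32, num_hidden_layers=2,
                 num_attention_heads=2, intermediate_size=64)
dec = GPT2Config(n_embd=32, n_layer=2, n_head=2)

# Marks the decoder config with is_decoder=True / add_cross_attention=True,
# so GPT-2 builds the cross-attention layers this PR series added.
config = EncoderDecoderConfig.from_encoder_decoder_configs(enc, dec)
model = EncoderDecoderModel(config=config)

input_ids = torch.tensor([[101, 7, 8, 102]])
decoder_input_ids = torch.tensor([[50256, 9]])
out = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids,
            output_attentions=True)
```

With `output_attentions=True` the output carries `cross_attentions`, i.e. the decoder's attention over the encoder's hidden states.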
  7. 08 Oct, 2021 2 commits
  8. 04 Oct, 2021 2 commits
    • Add Mistral GPT-2 Stability Tweaks (#13573) · 3a8de58c
      Sidd Karamcheti authored
      
      
      * Add layer-wise scaling
      
      * Add reorder & upcasting argument
      
      * Add OpenAI GPT-2 weight initialization scheme
      
      * start `layer_idx` count at zero for consistency
      
      * disentangle attn and the reordered-and-upcast attn function
      
      * rename `scale_attn_by_layer` to `scale_attn_by_layer_id`
      
      * make autocast from amp compatible with pytorch<1.6
      
      * fix docstring
      
      * style fixes
      
      * Add fixes from PR feedback, style tweaks
      
      * Fix doc whitespace
      
      * Reformat
      
      * First pass scale_attn_by_layer_idx and reorder_and_upcast_attn tests
      
      * Rename scale_attn_by_layer_idx, add tip
      
      * Remove extra newline
      
      * add test for weight initialization
      
      * update code format
      
      * add assert check weights are fp32
      
      * remove assert
      
      * Fix incorrect merge
      
      * Fix shape mismatch in baddbmm
      
      * Add generation test for Mistral flags
      Co-authored-by: leandro <leandro.vonwerra@spoud.io>
      Co-authored-by: Keshav Santhanam <keshav2@stanford.edu>
      Co-authored-by: J38 <jebolton@stanford.edu>
      3a8de58c
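The two Mistral stability flags this PR adds ended up on `GPT2Config` (under their final names after the renames listed above). A minimal sketch with a tiny illustrative config:

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Tiny illustrative sizes; both flags default to False.
config = GPT2Config(
    n_embd=32, n_layer=4, n_head=2,
    scale_attn_by_inverse_layer_idx=True,  # layer-wise 1/(layer_idx + 1) attention scaling
    reorder_and_upcast_attn=True,          # compute QK^T reordered and upcast to fp32
)
model = GPT2LMHeadModel(config).eval()

with torch.no_grad():
    out = model(torch.tensor([[1, 2, 3]]))
```

Both tweaks target numerical stability of large-model training; they leave the output shapes and the rest of the GPT-2 API unchanged.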
    • [docs/gpt-j] fix typo (#13851) · 955fd4fe
      Yaser Abdelaziz authored
      955fd4fe
  9. 30 Sep, 2021 1 commit
  10. 29 Sep, 2021 1 commit
  11. 22 Sep, 2021 3 commits
  12. 21 Sep, 2021 2 commits
    • beit-flax (#13515) · a2dec768
      Kamal Raj authored
      * beit-flax
      
      * updated FLAX_BEIT_MLM_DOCSTRING
      
      * removed bool_masked_pos from classification
      
      * updated Copyright
      
      * code refactoring: x -> embeddings
      
      * updated test: rm from_pt
      
      * Update docs/source/model_doc/beit.rst
      
      * model code dtype updates and
      other changes according to review
      
      * relative_position_bias
      revert back to pytorch design
      a2dec768
    • Add Speech AutoModels (#13655) · 48fa42e5
      Patrick von Platen authored
      * upload
      
      * correct
      
      * correct
      
      * correct
      
      * finish
      
      * up
      
      * up
      
      * up again
      48fa42e5
  13. 20 Sep, 2021 3 commits
    • Fix typo distilbert doc (#13643) · ea921365
      flozi00 authored
      ea921365
    • Fix mT5 documentation (#13639) · 04976a32
      Ayaka Mikazuki authored
      * Fix MT5 documentation
      
      The abstract is incomplete
      
      * MT5 -> mT5
      04976a32
    • Add FNet (#13045) · d8049331
      Gunjan Chhablani authored
      
      
      * Init FNet
      
      * Update config
      
      * Fix config
      
      * Update model classes
      
      * Update tokenizers to use sentencepiece
      
      * Fix errors in model
      
      * Fix defaults in config
      
      * Remove position embedding type completely
      
      * Fix typo and take only real numbers
      
      * Fix type vocab size in configuration
      
      * Add projection layer to embeddings
      
      * Fix position ids bug in embeddings
      
      * Add minor changes
      
      * Add conversion script and remove CausalLM vestiges
      
      * Fix conversion script
      
      * Fix conversion script
      
      * Remove CausalLM Test
      
      * Update checkpoint names to dummy checkpoints
      
      * Add tokenizer mapping
      
      * Fix modeling file and corresponding tests
      
      * Add tokenization test file
      
      * Add PreTraining model test
      
      * Make style and quality
      
      * Make tokenization base tests work
      
      * Update docs
      
      * Add FastTokenizer tests
      
      * Fix fast tokenizer special tokens
      
      * Fix style and quality
      
      * Remove load_tf_weights vestiges
      
      * Add FNet to main README
      
      * Fix configuration example indentation
      
      * Comment tokenization slow test
      
      * Fix style
      
      * Add changes from review
      
      * Fix style
      
      * Remove bos and eos tokens from tokenizers
      
      * Add tokenizer slow test, TPU transforms, NSP
      
      * Add scipy check
      
      * Add scipy availability check to test
      
      * Fix tokenizer and use correct inputs
      
      * Remove remaining TODOs
      
      * Fix tests
      
      * Fix tests
      
      * Comment Fourier Test
      
      * Uncomment Fourier Test
      
      * Change to google checkpoint
      
      * Add changes from review
      
      * Fix activation function
      
      * Fix model integration test
      
      * Add more integration tests
      
      * Add comparison steps to MLM integration test
      
      * Fix style
      
      * Add masked tokenization fix
      
      * Improve mask tokenization fix
      
      * Fix index docs
      
      * Add changes from review
      
      * Fix issue
      
      * Fix failing import in test
      
      * some more fixes
      
      * correct fast tokenizer
      
      * finalize
      
      * make style
      
      * Remove additional tokenization logic
      
      * Set do_lower_case to False
      
      * Allow keeping accents
      
      * Fix tokenization test
      
      * Fix FNet Tokenizer Fast
      
      * fix tests
      
      * make style
      
      * Add tips to FNet docs
      Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
      d8049331
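FNet replaces self-attention with an unparameterized Fourier transform over the token and hidden dimensions, which is why the commit list above has no attention-head plumbing. A minimal sketch with a tiny random config (the released checkpoint referenced above is `google/fnet-base`; sizes here are illustrative):

```python
import torch
from transformers import FNetConfig, FNetModel

# Tiny illustrative sizes; note there is no num_attention_heads --
# the mixing layer is a parameter-free FFT.
config = FNetConfig(vocab_size=100, hidden_size=32,
                    num_hidden_layers=2, intermediate_size=64)
model = FNetModel(config).eval()

with torch.no_grad():
    out = model(input_ids=torch.tensor([[5, 6, 7, 8]]))
```

By default the model uses `torch.fft`; the scipy-based path mentioned in the commits is only needed for the TPU-optimized DFT-matrix variant.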
  14. 14 Sep, 2021 1 commit
    • [Flax] Addition of FlaxPegasus (#13420) · c1e47bf4
      Bhadresh Savani authored
      
      
      * added initial files
      
      * fixes pipeline
      
      * fixes style and quality
      
      * fixes doc issue and positional encoding
      
      * fixes layer norm and test
      
      * fixes quality issue
      
      * fixes code quality
      
      * removed extra layer norm
      
      * added layer norm back in encoder and decoder
      
      * added more code copy quality checks
      
      * update tests
      
      * Apply suggestions from code review
      
      * fix import
      
      * fix test
      Co-authored-by: patil-suraj <surajp815@gmail.com>
      c1e47bf4
  15. 08 Sep, 2021 1 commit
  16. 07 Sep, 2021 1 commit
  17. 02 Sep, 2021 2 commits
  18. 01 Sep, 2021 3 commits
  19. 31 Aug, 2021 3 commits
  20. 30 Aug, 2021 4 commits
    • albert flax (#13294) · 98e409ab
      Kamal Raj authored
      * albert flax
      
      * year -> 2021
      
      * docstring updated for flax
      
      * removed head_mask
      
      * removed from_pt
      
      * removed passing attention_mask to embedding layer
      98e409ab
    • distilbert-flax (#13324) · 774760e6
      Kamal Raj authored
      * distilbert-flax
      
      * added missing self
      
      * docs fix
      
      * removed tied kernel extra init
      
      * updated docs
      
      * x -> hidden states
      
      * removed head_mask
      
      * removed from_pt, +FLAX
      
      * updated year
      774760e6
    • fix: typo spelling grammar (#13212) · 01977466
      arfy slowy authored
      * fix: typo spelling grammar
      
      * fix: make fixup
      01977466
    • Add LayoutLMv2 + LayoutXLM (#12604) · b6ddb08a
      NielsRogge authored
      
      
      * First commit
      
      * Make style
      
      * Fix dummy objects
      
      * Add Detectron2 config
      
      * Add LayoutLMv2 pooler
      
      * More improvements, add documentation
      
      * More improvements
      
      * Add model tests
      
      * Add clarification regarding image input
      
      * Improve integration test
      
      * Fix bug
      
      * Fix another bug
      
      * Fix another bug
      
      * Fix another bug
      
      * More improvements
      
      * Make more tests pass
      
      * Make more tests pass
      
      * Improve integration test
      
      * Remove gradient checkpointing and add head masking
      
      * Add integration test
      
      * Add LayoutLMv2ForSequenceClassification to the tests
      
      * Add LayoutLMv2ForQuestionAnswering
      
      * More improvements
      
      * More improvements
      
      * Small improvements
      
      * Fix _LazyModule
      
      * Fix fast tokenizer
      
      * Move sync_batch_norm to a separate method
      
      * Replace dummies by requires_backends
      
      * Move calculation of visual bounding boxes to separate method + update README
      
      * Add models to main init
      
      * First draft
      
      * More improvements
      
      * More improvements
      
      * More improvements
      
      * More improvements
      
      * More improvements
      
      * Remove is_split_into_words
      
      * More improvements
      
      * Simplify tesseract - no use of pandas anymore
      
      * Add LayoutLMv2Processor
      
      * Update is_pytesseract_available
      
      * Fix bugs
      
      * Improve feature extractor
      
      * Fix bug
      
      * Add print statement
      
      * Add truncation of bounding boxes
      
      * Add tests for LayoutLMv2FeatureExtractor and LayoutLMv2Tokenizer
      
      * Improve tokenizer tests
      
      * Make more tokenizer tests pass
      
      * Make more tests pass, add integration tests
      
      * Finish integration tests
      
      * More improvements
      
      * More improvements - update API of the tokenizer
      
      * More improvements
      
      * Remove support for VQA training
      
      * Remove some files
      
      * Improve feature extractor
      
      * Improve documentation and one more tokenizer test
      
      * Make quality and small docs improvements
      
      * Add batched tests for LayoutLMv2Processor, remove fast tokenizer
      
      * Add truncation of labels
      
      * Apply suggestions from code review
      
      * Improve processor tests
      
      * Fix failing tests and add suggestion from code review
      
      * Fix tokenizer test
      
      * Add detectron2 CI job
      
      * Simplify CI job
      
      * Comment out non-detectron2 jobs and specify number of processes
      
      * Add pip install torchvision
      
      * Add durations to see which tests are slow
      
      * Fix tokenizer test and make model tests smaller
      
      * First draft
      
      * Use setattr
      
      * Possible fix
      
      * Proposal with configuration
      
      * First draft of fast tokenizer
      
      * More improvements
      
      * Enable fast tokenizer tests
      
      * Make more tests pass
      
      * Make more tests pass
      
      * More improvements
      
      * Add padding to fast tokenizer
      
      * Make more tests pass
      
      * Make more tests pass
      
      * Make all tests pass for fast tokenizer
      
      * Make fast tokenizer support overflowing boxes and labels
      
      * Add support for overflowing_labels to slow tokenizer
      
      * Add support for fast tokenizer to the processor
      
      * Update processor tests for both slow and fast tokenizers
      
      * Add head models to model mappings
      
      * Make style & quality
      
      * Remove Detectron2 config file
      
      * Add configurable option to label all subwords
      
      * Fix test
      
      * Skip visual segment embeddings in test
      
      * Use ResNet-18 backbone in tests instead of ResNet-101
      
      * Proposal
      
      * Re-enable all jobs on CI
      
      * Fix installation of tesseract
      
      * Fix failing test
      
      * Fix index table
      
      * Add LayoutXLM doc page, first draft of code examples
      
      * Improve documentation a lot
      
      * Update expected boxes for Tesseract 4.0.0 beta
      
      * Use offsets to create labels instead of checking if they start with ##
      
      * Update expected boxes for Tesseract 4.1.1
      
      * Fix conflict
      
      * Make variable names cleaner, add docstring, add link to notebooks
      
      * Revert "Fix conflict"
      
      This reverts commit a9b46ce9afe47ebfcfe7b45e6a121d49e74ef2c5.
      
      * Revert to make integration test pass
      
      * Apply suggestions from @LysandreJik's review
      
      * Address @patrickvonplaten's comments
      
      * Remove fixtures DocVQA in favor of dataset on the hub
      Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
      b6ddb08a
  21. 27 Aug, 2021 1 commit
    • Add Wav2Vec2 &amp; Hubert ForSequenceClassification (#13153) · b6f332ec
      Anton Lozhkov authored
      * Add hubert classifier + tests
      
      * Add hubert classifier + tests
      
      * Dummies for all classification tests
      
      * Wav2Vec2 classifier + ER test
      
      * Fix hubert integration tests
      
      * Add hubert IC
      
      * Pass tests for all classification tasks on Hubert
      
      * Pass all tests + copies
      
      * Move models to the SUPERB org
      b6f332ec
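The classifier heads added here pool the encoder's hidden states over time, project them to `classifier_proj_size`, and classify. A minimal sketch with a tiny random `Wav2Vec2Config` (sizes are illustrative; the real checkpoints live in the SUPERB org on the Hub, as the last commit notes):

```python
import torch
from transformers import Wav2Vec2Config, Wav2Vec2ForSequenceClassification

# Tiny illustrative config: 2 conv feature-extractor layers, 2 transformer layers.
config = Wav2Vec2Config(hidden_size=32, num_hidden_layers=2,
                        num_attention_heads=2, intermediate_size=64,
                        conv_dim=(32, 32), conv_stride=(2, 2), conv_kernel=(3, 3),
                        num_conv_pos_embeddings=16, num_conv_pos_embedding_groups=4,
                        classifier_proj_size=16,
                        num_labels=4)  # e.g. 4 keyword classes
model = Wav2Vec2ForSequenceClassification(config).eval()

with torch.no_grad():
    # 400 raw audio samples -> one logit vector per utterance
    logits = model(input_values=torch.randn(1, 400)).logits
```

`HubertForSequenceClassification` follows the same pattern with a Hubert backbone.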
  22. 26 Aug, 2021 1 commit
    • Add DINO conversion script (#13265) · 0759f251
      NielsRogge authored
      * First commit
      
      * Add interpolation of patch embeddings
      
      * Comment out code
      
      * Fix bug
      
      * Fix another bug
      
      * Fix bug
      
      * Fix another bug
      
      * Remove print statements
      
      * Update conversion script
      
      * Use the official vit implementation
      
      * Add support for converting dino_vits8
      
      * Add DINO to docs of ViT
      
      * Remove assertion
      
      * Add interpolation of position encodings
      
      * Fix bug
      
      * Add align_corners
      
      * Add interpolate_pos_encoding option to forward pass of ViTModel
      
      * Improve interpolate_pos_encoding method
      
      * Add docstring
      0759f251
  23. 23 Aug, 2021 1 commit
    • Make Flax GPT2 working with cross attention (#13008) · 2e20c0f3
      Yih-Dar authored
      
      
      * make flax gpt2 working with cross attention
      
      * Remove encoder->decoder projection layer
      
      * A draft (incomplete) for FlaxEncoderDecoderModel
      
      * Add the method from_encoder_decoder_pretrained + the docstrings
      
      * Fix the mistakes of using EncoderDecoderModel
      
      * Fix style
      
      * Add FlaxEncoderDecoderModel to the library
      
      * Fix cyclic imports
      
      * Add FlaxEncoderDecoderModel to modeling_flax_auto.py
      
      * Remove question comments
      
      * add tests for FlaxEncoderDecoderModel
      
      * add flax_encoder_decoder to the lists of ignored entries in check_repo.py
      
      * fix missing required positional arguments
      
      * Remove **kwargs when creating FlaxEncoderDecoderModel in from_encoder_decoder_pretrained()
      
      Also fix generation eos/pad tokens issue
      
      * Fix: Use sequences from the generated_output
      
      * Change a check from assert to raise ValueError
      
      * Fix examples and token ids issues
      
      * Fix missing all_cross_attentions when outputting tuple in modeling_gpt2
      
      * Remove the changes in configuration docstrings.
      
      * allow for bert 2 gpt2
      
      * make fix-copies
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Change remaining examples to bert2gpt2
      
      * Change the test to Bert2GPT2
      
      * Fix examples
      
      * Fix import
      
      * Fix unpack bug
      
      * Rename to FlaxEncoderDecoderModelTest and change the test to bert2gpt2
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Fix: NotImplentedError -> NotImplementedError
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * up
      
      * finalize
      Co-authored-by: ydshieh <ydshieh@user.noreply>
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      2e20c0f3