1. 21 Dec, 2020 3 commits
  2. 19 Dec, 2020 1 commit
      Added TF TransfoXL Sequence Classification (#9169) · e0e255be
      sandip authored
      * TF Transfoxl seq classification
      
      * Update test_modeling_tf_transfo_xl.py
      
      Added num_labels to config level
      
      * code refactor
      
      * code refactor
      
      * code refactor
  3. 18 Dec, 2020 2 commits
  4. 17 Dec, 2020 1 commit
  5. 16 Dec, 2020 3 commits
      TableQuestionAnsweringPipeline (#9145) · 1c1a2ffb
      Lysandre Debut authored
      
      
      * AutoModelForTableQuestionAnswering
      
      * TableQuestionAnsweringPipeline
      
      * Apply suggestions from Patrick's code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Sylvain and Patrick comments
      
      * Better PyTorch/TF error message
      
      * Add integration tests
      
      * Argument Handler naming
      Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
      
      * Fix docs to appease the documentation gods
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      AutoModelForTableQuestionAnswering (#9154) · 07384baf
      Lysandre Debut authored
      * AutoModelForTableQuestionAnswering
      
      * Update src/transformers/models/auto/modeling_auto.py
      
      * Style
      [Flax] Align FlaxBertForMaskedLM with BertForMaskedLM, implement from_pretrained, init (#9054) · 640e6fe1
      Patrick von Platen authored
      
      
      * save intermediate
      
      * save intermediate
      
      * save intermediate
      
      * correct flax bert model file
      
      * new module / model naming
      
      * make style
      
      * almost finish BERT
      
      * finish roberta
      
      * make fix-copies
      
      * delete keys file
      
      * last refactor
      
      * fixes in run_mlm_flax.py
      
      * remove pooled from run_mlm_flax.py
      
      * fix gelu | gelu_new
      
      * remove Module from inits
      
      * splits
      
      * dirty print
      
      * preventing warmup_steps == 0
      
      * smaller splits
      
      * make fix-copies
      
      * dirty print
      
      * dirty print
      
      * initial_evaluation argument
      
      * declaration order fix
      
      * proper model initialization/loading
      
      * proper initialization
      
      * run_mlm_flax improvements: improper model inputs bugfix + automatic dataset splitting + tokenizers parallelism warning + avoiding warmup_steps=0 bug
      
      * removed tokenizers warning hack, fixed model re-initialization
      
      * reverted training_args.py changes
      
      * fix flax from pretrained
      
      * improve test in flax
      
      * apply Sylvain's tips
      
      * update init
      
      * make 0.3.0 compatible
      
      * revert tevens changes
      
      * revert tevens changes 2
      
      * finalize revert
      
      * fix bug
      
      * add docs
      
      * add pretrained to init
      
      * Update src/transformers/modeling_flax_utils.py
      
      * fix copies
      
      * final improvements
      Co-authored-by: TevenLeScao <teven.lescao@gmail.com>
  6. 15 Dec, 2020 7 commits
      [WIP] Tapas v4 (tres) (#9117) · 1551e2dc
      NielsRogge authored
      
      
      * First commit: adding all files from tapas_v3
      
      * Fix multiple bugs including soft dependency and new structure of the library
      
      * Improve testing by adding torch_device to inputs and adding dependency on scatter
      
      * Use Python 3 inheritance rather than Python 2
      
      * First draft model cards of base sized models
      
      * Remove model cards as they are already on the hub
      
      * Fix multiple bugs with integration tests
      
      * All model integration tests pass
      
      * Remove print statement
      
      * Add test for convert_logits_to_predictions method of TapasTokenizer
      
      * Incorporate suggestions by Google authors
      
      * Fix remaining tests
      
      * Change position embeddings sizes to 512 instead of 1024
      
      * Comment out positional embedding sizes
      
      * Update PRETRAINED_VOCAB_FILES_MAP and PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
      
      * Added more model names
      
      * Fix truncation when no max length is specified
      
      * Disable torchscript test
      
      * Make style & make quality
      
      * Quality
      
      * Address CI needs
      
      * Test the Masked LM model
      
      * Fix the masked LM model
      
      * Truncate when overflowing
      
      * More much needed docs improvements
      
      * Fix some URLs
      
      * Some more docs improvements
      
      * Test PyTorch scatter
      
      * Set to slow + minify
      
      * Calm flake8 down
      
      * Add add_pooling_layer argument to TapasModel
      
      Fix comments by @sgugger and @patrickvonplaten
      
      * Fix issue in docs + fix style and quality
      
      * Clean up conversion script and add task parameter to TapasConfig
      
      * Revert the task parameter of TapasConfig
      
      Some minor fixes
      
      * Improve conversion script and add test for absolute position embeddings
      
      * Improve conversion script and add test for absolute position embeddings
      
      * Fix bug with reset_position_index_per_cell arg of the conversion cli
      
      * Add notebooks to the examples directory and fix style and quality
      
      * Apply suggestions from code review
      
      * Move from `nielsr/` to `google/` namespace
      
      * Apply Sylvain's comments
      Co-authored-by: sgugger <sylvain.gugger@gmail.com>
      Co-authored-by: Rogge Niels <niels.rogge@howest.be>
      Co-authored-by: LysandreJik <lysandre.debut@reseau.eseo.fr>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      Add possibility to switch between APEX and AMP in Trainer (#9137) · ad895af9
      Sylvain Gugger authored
      
      
      * Add possibility to switch between APEX and AMP in Trainer
      
      * Update src/transformers/training_args.py
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
      
      * Address review comments
      
      * Update src/transformers/training_args.py
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
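
      The switch this PR adds can be sketched as a small backend-resolution helper. The `fp16_backend` name and the "auto"/"amp"/"apex" choices follow the PR's intent, but the function below is an illustrative assumption, not the Trainer's actual code:

      ```python
      # Hedged sketch: resolve which mixed-precision backend to use, in the
      # spirit of the new Trainer switch. Not the transformers implementation.
      def resolve_fp16_backend(choice="auto", amp_available=True, apex_available=False):
          if choice == "auto":
              # Prefer native AMP when present, fall back to APEX.
              if amp_available:
                  return "amp"
              if apex_available:
                  return "apex"
              raise ValueError("No mixed-precision backend is available")
          if choice == "amp" and amp_available:
              return "amp"
          if choice == "apex" and apex_available:
              return "apex"
          raise ValueError(f"Requested fp16 backend {choice!r} is not available")
      ```

      An explicit choice fails loudly when the backend is missing, while "auto" degrades gracefully.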
      Add large model config (#9140) · 0b2f46fa
      Lysandre Debut authored
      [TF Bart] Refactor TFBart (#9029) · abc573f5
      Patrick von Platen authored
      * reorder file
      
      * delete unnecessary function
      
      * make style
      
      * save intermediate
      
      * fix attention masks
      
      * correct tf bart past key values
      
      * solve merge conflict bug
      
      * correct tensor dims
      
      * save intermediate tf
      
      * change attn layer
      
      * fix typo re-order past
      
      * inputs_embeds
      
      * make fix copies
      
      * finish tests
      
      * fix graph mode
      
      * apply Lysandre's suggestions
      Added TF OpenAi GPT1 Sequence Classification (#9105) · 389aba34
      sandip authored
      
      
      * TF OpenAI GPT Sequence Classification
      
      * Update src/transformers/models/openai/modeling_tf_openai.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      Fix tf2.4 (#9120) · ef2d4cd4
      Julien Plu authored
      
      
      * Fix tests for TF 2.4
      
      * Remove <2.4 limitation
      
      * Add version condition
      
      * Update tests/test_optimization_tf.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update tests/test_optimization_tf.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update tests/test_optimization_tf.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Fix T5 model parallel test (#9107) · 6ccea048
      Lysandre Debut authored
  7. 14 Dec, 2020 4 commits
      Fix T5 and BART for TF (#9063) · df3f4d2a
      Julien Plu authored
      * Fix T5 for graph compilation+execution
      
      * Fix BART
      
      * Fix import
      
      * Fix naming
      
      * fix attribute name
      
      * Oops
      
      * fix import
      
      * fix tests
      
      * fix tests
      
      * Update test
      
      * Add missing import
      
      * Address Patrick's comments
      
      * Style
      
      * Address Patrick's comment
      Add parallelization support for T5EncoderModel (#9082) · a9c8bff7
      Ahmed Elnaggar authored
      
      
      * add model parallelism to T5EncoderModel
      
      add model parallelism to T5EncoderModel
      
      * remove decoder from T5EncoderModel parallelize
      
      * update T5EncoderModel docs
      
      * Extend T5ModelTest for T5EncoderModel
      
      * fix T5Stack using range for get_device_map
      
      * fix style
      Co-authored-by: Ahmed Elnaggar <elnaggar@rostlab.informatik.tu-muenchen.de>
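
      The device-map idea behind the fix can be sketched as follows: block indices produced with `range()` are split across devices, as T5Stack's `parallelize` consumes them. This is an illustrative sketch, not the transformers implementation; `make_device_map` is a hypothetical name:

      ```python
      # Hedged sketch: split n_layers block indices (0..n_layers-1) evenly
      # across devices, rounding up so every layer lands on some device.
      def make_device_map(n_layers, devices):
          blocks_per_device = -(-n_layers // len(devices))  # ceil division
          layers = list(range(n_layers))
          return {
              device: layers[i * blocks_per_device:(i + 1) * blocks_per_device]
              for i, device in enumerate(devices)
          }
      ```

      For example, 6 encoder blocks over 3 devices yields two consecutive blocks per device.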
      [RAG, Bart] Align RAG, Bart cache with T5 and other models of transformers (#9098) · fa1ddced
      Patrick von Platen authored
      * fix rag
      
      * fix slow test
      
      * fix past in bart
      Fix embeddings resizing in TF models (#8657) · 51d9c569
      Julien Plu authored
      * Resize the biases at the same time as the embeddings
      
      * Trigger CI
      
      * Biases are not reset anymore
      
      * Remove get_output_embeddings + better LM model detection in generation utils
      
      * Apply style
      
      * First test on BERT
      
      * Update docstring + new name
      
      * Apply the new resizing logic to all the models
      
      * fix tests
      
      * Apply style
      
      * Update the template
      
      * Fix naming
      
      * Fix naming
      
      * Apply style
      
      * Apply style
      
      * Remove unused import
      
      * Revert get_output_embeddings
      
      * Trigger CI
      
      * Update num parameters
      
      * Restore get_output_embeddings in TFPretrainedModel and add comments
      
      * Style
      
      * Add decoder resizing
      
      * Style
      
      * Fix tests
      
      * Separate bias and decoder resize
      
      * Fix tests
      
      * Fix tests
      
      * Apply style
      
      * Add bias resizing in MPNet
      
      * Trigger CI
      
      * Apply style
  8. 11 Dec, 2020 1 commit
  9. 10 Dec, 2020 1 commit
  10. 09 Dec, 2020 4 commits
  11. 08 Dec, 2020 3 commits
      New squad example (#8992) · 447808c8
      Sylvain Gugger authored
      
      
      * Add new SQUAD example
      
      * Same with a task-specific Trainer
      
      * Address review comment.
      
      * Small fixes
      
      * Initial work for XLNet
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Final clean up and working XLNet script
      
      * Test and debug
      
      * Final working version
      
      * Add tick
      
      * Update README
      
      * Address review comments
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Removed unused `encoder_hidden_states` and `encoder_attention_mask` (#8972) · 7809eb82
      guillaume-be authored
      * Removed unused `encoder_hidden_states` and `encoder_attention_mask` from MobileBert
      
      * Removed decoder tests for MobileBert
      
      * Removed now unnecessary import
      Optional layers (#8961) · bf7f79cd
      Julien Plu authored
      * Apply on BERT and ALBERT
      
      * Update TF Bart
      
      * Add input processing to TF BART
      
      * Add input processing for TF CTRL
      
      * Add input processing to TF Distilbert
      
      * Add input processing to TF DPR
      
      * Add input processing to TF Electra
      
      * Add deprecated arguments
      
      * Add input processing to TF XLM
      
      * remove unused imports
      
      * Add input processing to TF Funnel
      
      * Add input processing to TF GPT2
      
      * Add input processing to TF Longformer
      
      * Add input processing to TF Lxmert
      
      * Apply style
      
      * Add input processing to TF Mobilebert
      
      * Add input processing to TF GPT
      
      * Add input processing to TF Roberta
      
      * Add input processing to TF T5
      
      * Add input processing to TF TransfoXL
      
      * Apply style
      
      * Rebase on master
      
      * Fix wrong model name
      
      * Fix BART
      
      * Apply style
      
      * Put the deprecated warnings in the input processing function
      
      * Remove the unused imports
      
      * Raise an error when len(kwargs)>0
      
      * test ModelOutput instead of TFBaseModelOutput
      
      * Address Patrick's comments
      
      * Address Patrick's comments
      
      * Add boolean processing for the inputs
      
      * Take into account the optional layers
      
      * Add missing/unexpected weights in the other models
      
      * Apply style
      
      * rename parameters
      
      * Apply style
      
      * Remove useless
      
      * Remove useless
      
      * Remove useless
      
      * Update num parameters
      
      * Fix tests
      
      * Address Patrick's comment
      
      * Remove useless attribute
  12. 07 Dec, 2020 3 commits
  13. 03 Dec, 2020 1 commit
  14. 02 Dec, 2020 3 commits
      [PyTorch] Refactor Resize Token Embeddings (#8880) · 443f67e8
      Patrick von Platen authored
      * fix resize tokens
      
      * correct mobile_bert
      
      * move embedding fix into modeling_utils.py
      
      * refactor
      
      * fix lm head resize
      
      * refactor
      
      * break lines to make sylvain happy
      
      * add new tests
      
      * fix typo
      
      * improve test
      
      * skip bart-like for now
      
      * check if base_model = get(...) is necessary
      
      * clean files
      
      * improve test
      
      * fix tests
      
      * revert style templates
      
      * Update templates/adding_a_new_model/cookiecutter-template-{{cookiecutter.modelname}}/modeling_{{cookiecutter.lowercase_modelname}}.py
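
      The core of the refactor can be sketched without torch: grow (or shrink) the embedding matrix to the new vocabulary size, copying existing rows over. A hedged, torch-free sketch; the real code also re-initializes the new rows like fresh embeddings rather than zeroing them, and ties the LM head accordingly:

      ```python
      # Hedged sketch of the resize-token-embeddings idea: copy the old rows
      # into a matrix with new_num_tokens rows; extra rows start at zero here.
      def resize_embedding_matrix(weights, new_num_tokens):
          dim = len(weights[0])
          resized = [[0.0] * dim for _ in range(new_num_tokens)]
          for i in range(min(len(weights), new_num_tokens)):
              resized[i] = list(weights[i])
          return resized
      ```

      Shrinking simply truncates the matrix, which is why resizing below the number of used tokens loses embeddings.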
      Warning about too long input for fast tokenizers too (#8799) · a8c3f9aa
      Nicolas Patry authored
      * Warning about too long input for fast tokenizers too
      
      If truncation is not set in tokenizers, but the tokenization is too long
      for the model (`model_max_length`), we used to trigger a warning that
      the input would probably fail (which it most likely will).
      
      This PR re-enables the warning for fast tokenizers too and uses common
      code for the trigger to make sure it's consistent across tokenizers.
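
      The common trigger described above can be sketched as a length check; `check_sequence_length` is a hypothetical name and the warning text is an assumption, not the transformers wording:

      ```python
      import warnings

      # Hedged sketch: warn when the encoded input exceeds the model's maximum
      # length and truncation is disabled; return whether the input fits.
      def check_sequence_length(ids, model_max_length, truncation=False):
          if not truncation and len(ids) > model_max_length:
              warnings.warn(
                  f"Token indices sequence length ({len(ids)}) is longer than "
                  f"the maximum sequence length for this model "
                  f"({model_max_length}). Running this sequence through the "
                  "model will result in indexing errors."
              )
              return False
          return True
      ```

      Centralizing the check in one helper is what keeps slow and fast tokenizers consistent.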
      
      * Checking for pair of inputs too.
      
      * Making the function private and adding its doc.
      
      * Remove formatting ?? in odd place.
      
      * Missed uppercase.
      Transfoxl seq classification (#8868) · f6b44e61
      sandip authored
      * Transfoxl sequence classification
      
      * Transfoxl sequence classification
  15. 01 Dec, 2020 2 commits
  16. 30 Nov, 2020 1 commit
      NerPipeline (TokenClassification) now outputs offsets of words (#8781) · d8fc26e9
      Nicolas Patry authored
      * NerPipeline (TokenClassification) now outputs offsets of words
      
      - It happens that the offsets are missing, forcing the user to pattern-match
      the "word" against their input, which is not always feasible.
      For instance, if a sentence contains the same word twice, there
      is no way to know which is which.
      - This PR proposes to fix that by outputting 2 new keys for this
      pipelines outputs, "start" and "end", which correspond to the string
      offsets of the word. That means that we should always have the
      invariant:
      
      ```python
      input[entity["start"]: entity["end"]] == entity["entity_group"]
                                          # or entity["entity"] if not grouped
      ```
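      
      A minimal sketch of the invariant in plain Python (the entity dict below is illustrative, not captured pipeline output, and its keys beyond "start"/"end" are assumptions):
      
      ```python
      # Hedged example of the new "start"/"end" offsets: slicing the original
      # string recovers the exact occurrence, even if the word appears twice.
      text = "My name is Wolfgang and I live in Berlin."
      entity = {"entity": "I-LOC", "word": "Berlin", "start": 34, "end": 40}
      span = text[entity["start"]:entity["end"]]
      ```
      
      With the offsets in hand, no pattern matching against the input is needed.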
      
      * Fixing doc style