1. 08 Feb, 2022 1 commit
    • Add TFSpeech2Text (#15113) · 8406fa6d
      Joao Gante authored
      * Add wrapper classes
      
      * convert inner layers to tf
      
      * Add TF Encoder and Decoder layers
      
      * TFSpeech2Text models
      
      * Loadable model
      
      * TF model with same outputs as PT model
      
      * test skeleton
      
      * correct tests and run the fixup
      
      * correct attention expansion
      
      * TFSpeech2Text past_key_values with TF format
  2. 01 Feb, 2022 1 commit
  3. 19 Jan, 2022 1 commit
    • Rename compute_loss in TF models (#15207) · 2708bfa1
      Matt authored
      * Rename compute_loss to hf_compute_loss to avoid conflicts with the new Keras method
      
      * make style
      
      * Adding deprecation warning to `compute_loss`
      
      * Fix sneaky reference to compute_loss
      
      * Replace logger.warning with warnings.warn
      
      * Clarifying warning and deprecation timeline
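The rename described in this commit (with a deprecation window) can be sketched as a warning alias: the real logic moves to `hf_compute_loss`, and the old name only warns and delegates. The class and loss below are illustrative, not the actual transformers implementation.

```python
import warnings


class TFDemoModel:
    """Sketch: `compute_loss` clashed with Keras' own method of the same
    name, so the real logic lives in `hf_compute_loss` and the old name
    emits a DeprecationWarning before delegating."""

    def hf_compute_loss(self, labels, logits):
        # Illustrative loss: mean absolute error over paired values.
        return sum(abs(a - b) for a, b in zip(labels, logits)) / len(labels)

    def compute_loss(self, *args, **kwargs):
        warnings.warn(
            "compute_loss is deprecated; use hf_compute_loss instead.",
            DeprecationWarning,
        )
        return self.hf_compute_loss(*args, **kwargs)
```

Callers keep working during the deprecation window while being nudged toward the new name.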
  4. 18 Jan, 2022 2 commits
  5. 14 Jan, 2022 2 commits
  6. 23 Dec, 2021 1 commit
    • Add TFCLIPModel (#13967) · 8f2cc1c3
      Yih-Dar authored
      
      
      * Start the work for TFCLIPModel
      
      * Convert to TF code (TODO: loss + doc)
      
      * Clean up
      
      * Fix pooled_output for TFCLIPTextTransformer - using tf.gather_nd
      
      * assert -> raise error
      
      * Expose TFCLIPModel
      
      * Deal with dummy_inputs
      
      * Add tests
      
      * Fix all tests. TODO: manual check weight loading + add more comments
      
      * Fix pt tf equivalence test
      
      * fixes
      
      * update TFCLIPVisionEmbeddings's Conv2D
      
      * Fix loss + overwrite test_pt_tf_model_equivalence from common
      
      * Add a comment about the change about MainLayer in test_keras_save_load
      
      * Set return_loss=True in TFCLIPModelTester + make tests pass
      
      * overwrite test_pt_tf_model_equivalence from tf common
      
      * fix base_model_prefix
      
      * Fix examples
      
      * remove unused
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * apply review suggestions
      
      * change self.pre_layrnorm to self.pre_layernorm
      
      * apply more review suggestions
      
      * return attention probs before dropout (to align with PT)
      
      * fix weight init
      
      * fix
      
      * build doc
      
      * fix missing doc
      
      * fix for test
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
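The `tf.gather_nd` fix in this commit selects, for each sequence in the batch, the hidden state at the end-of-sequence token position as the pooled output. The indexing itself can be sketched framework-free with plain Python lists (function name and shapes are illustrative):

```python
def pool_eos_hidden_states(last_hidden_state, eos_positions):
    """For each batch element, pick the hidden vector at the EOS token
    index. This mirrors what tf.gather_nd does when given
    [batch_index, token_index] pairs.

    last_hidden_state: nested lists of shape [batch][seq_len][hidden]
    eos_positions: list of shape [batch], one token index per sequence
    """
    return [seq[pos] for seq, pos in zip(last_hidden_state, eos_positions)]
```

For example, with EOS at index 2 in the first sequence and index 1 in the second, the pooled output is the hidden vector at those positions for each row.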
  7. 20 Dec, 2021 1 commit
  8. 15 Dec, 2021 1 commit
    • TF model cards (#14720) · 48d48276
      Matt authored
      * Initial commit for Keras model cards
      
      * Revert accidental change
      
      * make style
      
      * make style
      
      * make style
      
      * Fix PR comments
      
      * Move repo creation to __init__
      
      * Fixes to README.md creation
      
      * Partial progress for proper card creation on `push_to_hub`
      
      * Proper card creation from `push_to_hub` plus fixes for malformed model cards
      
      * Fixes for model card creation outside the callback
      
      * Adding a model card creation test
      
      * Putting the model card creation test in the right file.
      Good job, Matt.
      
      * make style
      
      * Fix model card test temp dir usage
      
      * Fix model card creation when no optimizer present
      
      * Fixes for when training history not present
      
      * Fix accidental edit to test_modeling_common
  9. 17 Nov, 2021 1 commit
    • [WIP] Ensure TF model configs can be converted to proper JSON (#14415) · 1991da07
      N authored
      
      
      * test: make sure model configs are jsonifiable
      
      * fix: return python dict instead of config object
      
      * fix: accept pretrained config and use correct class
      
      * Re-enabling slow tests and applying them to core models only
      
      * Re-enabling slow tests and applying them to core models only
      
      * Add new test file to fetcher
      
      * Remove tooslow tests from test_modeling_tf_common.py
      
      * make style
      
      * Style fixes
      
      * Style fixes
      
      * Style fixes
      
      * Style fixes
      
      * Adding core tests to GPT2 and BART
      
      * Removing unused imports
      Co-authored-by: niklas.fruehauf <niklas.fruehauf@sovanta.com>
      Co-authored-by: matt <rocketknight1@gmail.com>
  10. 11 Nov, 2021 1 commit
  11. 09 Nov, 2021 1 commit
    • Add TFViTModel (#13778) · be4a6c64
      Yih-Dar authored
      
      
      * Start the work for TFViTModel
      
      * Convert to TF code - need to check in the follow up commits
      
      * Clean up model code
      
      * Expose TFViTModel
      
      * make style
      
      * make quality
      
      * Add test
      
      * make style & quality
      
      * Fix some imports
      
      * fix wrong usage: *kwargs => **kwargs
      
      * Fix Conv2D weight loading (PT->TF) issue
      
      * Add tests for images with different sizes + fix model
      
      * Fix some common tests for TFViTModel
      
      * Use inputs instead of input_ids in test_compile_tf_model
      
      * Add a comment about transpose and Conv2D in convert_tf_weight_name_to_pt_weight_name
      
      * Avoid transpose in TFViT call
      
      * Fix Conv2D issue in load_tf2_weights_in_pytorch_model
      
      * Use tf.keras.layers.Conv2D instead of tf.nn.conv2d
      
      * Using simpler heuristic to detect Conv2D layer
      
      * Change convert_tf_weight_name_to_pt_weight_name to return TransposeType
      
      * Check tf_weight_shape is not None before using it
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * fix missing comma
      
      * fix input dtype
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
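The `TransposeType` change in this commit addresses PT↔TF Conv2D weight layout: PyTorch stores kernels as (out_channels, in_channels, H, W) while Keras stores (H, W, in_channels, out_channels), so a plain matrix transpose is not enough. A hedged sketch of such a name/shape heuristic (the enum values and function are illustrative, not the exact transformers code):

```python
from enum import Enum


class TransposeType(Enum):
    NO = "no"          # weight can be copied as-is (e.g. biases, LayerNorm)
    SIMPLE = "simple"  # 2-D kernels: plain matrix transpose
    CONV2D = "conv2d"  # 4-D kernels: PT (out, in, H, W) <-> TF (H, W, in, out)


def detect_transpose(tf_weight_name, tf_weight_shape):
    """Simple heuristic: only kernels need transposing, and a 4-D
    kernel is assumed to belong to a Conv2D layer. The shape is
    checked for None first, per the commit above."""
    if not tf_weight_name.endswith("/kernel:0"):
        return TransposeType.NO
    if tf_weight_shape is not None and len(tf_weight_shape) == 4:
        return TransposeType.CONV2D
    return TransposeType.SIMPLE
```

Returning an enum instead of a boolean lets the weight-loading code pick the correct axis permutation per weight.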
  12. 02 Nov, 2021 1 commit
  13. 25 Oct, 2021 1 commit
  14. 21 Oct, 2021 1 commit
  15. 12 Oct, 2021 1 commit
    • Add TFEncoderDecoderModel + Add cross-attention to some TF models (#13222) · 8b240a06
      Yih-Dar authored
      
      
      * Add cross attentions to TFGPT2Model
      
      * Add TFEncoderDecoderModel
      
      * Add TFBaseModelOutputWithPoolingAndCrossAttentions
      
      * Add cross attentions to TFBertModel
      
      * Fix past or past_key_values argument issue
      
      * Fix generation
      
      * Fix save and load
      
      * Add some checks and comments
      
      * Clean the code that deals with past keys/values
      
      * Add kwargs to processing_inputs
      
      * Add serving_output to TFEncoderDecoderModel
      
      * Some cleaning + fix use_cache value issue
      
      * Fix tests + add bert2bert/bert2gpt2 tests
      
      * Fix more tests
      
      * Ignore crossattention.bias when loading GPT2 weights into TFGPT2
      
      * Fix return_dict_in_generate in tf generation
      
      * Fix is_token_logit_eos_token bug in tf generation
      
      * Finalize the tests after fixing some bugs
      
      * Fix another is_token_logit_eos_token bug in tf generation
      
      * Add/Update docs
      
      * Add TFBertEncoderDecoderModelTest
      
      * Clean test script
      
      * Add TFEncoderDecoderModel to the library
      
      * Add cross attentions to TFRobertaModel
      
      * Add TFRobertaEncoderDecoderModelTest
      
      * make style
      
      * Change the way of position_ids computation
      
      * bug fix
      
      * Fix copies in tf_albert
      
      * Remove some copied from and apply some fix-copies
      
      * Remove some copied
      
      * Add cross attentions to some other TF models
      
      * Remove encoder_hidden_states from TFLayoutLMModel.call for now
      
      * Make style
      
      * Fix TFRemBertForCausalLM
      
      * Revert the change to longformer + Remove copies
      
      * Revert the change to albert and convbert + Remove copies
      
      * make quality
      
      * make style
      
      * Add TFRembertEncoderDecoderModelTest
      
      * make quality and fix-copies
      
      * test TFRobertaForCausalLM
      
      * Fixes for failed tests
      
      * Fixes for failed tests
      
      * fix more tests
      
      * Fixes for failed tests
      
      * Fix Auto mapping order
      
      * Fix TFRemBertEncoder return value
      
      * fix tf_rembert
      
      * Check copies are OK
      
      * Fix `TFBaseModelOutputWithPastAndCrossAttentions is not defined` error
      
      * Add TFEncoderDecoderModelSaveLoadTests
      
      * fix tf weight loading
      
      * check the change of use_cache
      
      * Revert the change
      
      * Add missing test_for_causal_lm for TFRobertaModelTest
      
      * Try cleaning past
      
      * fix _reorder_cache
      
      * Revert some files to original versions
      
      * Keep as many copies as possible
      
      * Apply suggested changes - Use raise ValueError instead of assert
      
      * Move import to top
      
      * Fix wrong require_torch
      
      * Replace more assert by raise ValueError
      
      * Add test_pt_tf_model_equivalence (the test won't pass for now)
      
      * add test for loading/saving
      
      * finish
      
      * finish
      
      * Remove test_pt_tf_model_equivalence
      
      * Update tf modeling template
      
      * Remove pooling, added in the prev. commit, from MainLayer
      
      * Update tf modeling test template
      
      * Move inputs["use_cache"] = False to modeling_tf_utils.py
      
      * Fix torch.Tensor in the comment
      
      * fix use_cache
      
      * Fix missing use_cache in ElectraConfig
      
      * Add a note to from_pretrained
      
      * Fix style
      
      * Change test_encoder_decoder_save_load_from_encoder_decoder_from_pt
      
      * Fix TFMLP (in TFGPT2) activation issue
      
      * Fix None past_key_values value in serving_output
      
      * Don't call get_encoderdecoder_model in TFEncoderDecoderModelTest.test_configuration_tie until we have a TF checkpoint on Hub
      
      * Apply review suggestions - style for cross_attns in serving_output
      
      * Apply review suggestions - change assert + docstrings
      
      * break the error message to respect the char limit
      
      * deprecate the argument past
      
      * fix docstring style
      
      * Update the encoder-decoder rst file
      
      * fix Unknown interpreted text role "method"
      
      * fix typo
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
  16. 13 Jul, 2021 1 commit
  17. 08 Jul, 2021 1 commit
    • [RFC] Laying down building stone for more flexible ONNX export capabilities (#11786) · 2aa3cd93
      Funtowicz Morgan authored
      
      
      * Laying down building stone for more flexible ONNX export capabilities
      
      * Ability to provide a map of config key to override before exporting.
      
      * Makes it possible to export BART with/without past keys.
      
      * Supports simple mathematical syntax for OnnxVariable.repeated
      
      * Effectively apply value override from onnx config for model
      
      * Supports export with additional features such as with-past for seq2seq
      
      * Store the output path directly in the args for uniform usage across.
      
      * Make BART_ONNX_CONFIG_* constants and fix imports.
      
      * Support BERT model.
      
      * Use tokenizer for more flexibility in defining the inputs of a model.
      
      * Add TODO as reminder to provide the batch/sequence_length as CLI args
      
      * Enable optimizations to be done on the model.
      
      * Enable GPT2 + past
      
      * Improve model validation with outputs containing nested structures
      
      * Enable Roberta
      
      * Enable Albert
      
      * Albert requires opset >= 12
      
      * BERT-like models requires opset >= 12
      
      * Remove double printing.
      
      * Enable XLM-Roberta
      
      * Enable DistilBERT
      
      * Disable optimization by default
      
      * Fix missing setattr when applying optimizer_features
      
      * Add value field to OnnxVariable to define constant input (not from tokenizers)
      
      * Add T5 support.
      
      * Simplify model type retrieval
      
      * Example exporting token_classification pipeline for DistilBERT.
      
      * Refactoring to package `transformers.onnx`
      
      * Solve circular dependency & __main__
      
      * Remove unnecessary imports in `__init__`
      
      * Licences
      
      * Use @Narsil's suggestion to forward the model's configuration to the ONNXConfig to avoid interpolation.
      
      * Onnx export v2 fixes (#12388)
      
      * Tiny fixes
      Remove `convert_pytorch` from onnxruntime-less runtimes
      Correct reference to model
      
      * Style
      
      * Fix Copied from
      
      * LongFormer ONNX config.
      
      * Removed optimizations
      
      * Remove bad merge replicas.
      
      * Remove unused constants.
      
      * Remove some deleted constants from imports.
      
      * Fix unittest to remove usage of PyTorch model for onnx.utils.
      
      * Fix distilbert export
      
      * Enable ONNX export test for supported model.
      
      * Style.
      
      * Fix lint.
      
      * Enable all supported default models.
      
      * GPT2 only has one output
      
      * Fix bad property name when overriding config.
      
      * Added unittests and docstrings.
      
      * Disable with_past tests for now.
      
      * Enable outputs validation for default export.
      
      * Remove graph opt lvls.
      
      * Last commit with on-going past commented.
      
      * Style.
      
      * Disabled `with_past` for now
      
      * Remove unused imports.
      
      * Remove framework argument
      
      * Remove TFPreTrainedModel reference
      
      * Add documentation
      
      * Add onnxruntime tests to CircleCI
      
      * Add test
      
      * Rename `convert_pytorch` to `export`
      
      * Use OrderedDict for dummy inputs
      
      * WIP Wav2Vec2
      
      * Revert "WIP Wav2Vec2"
      
      This reverts commit f665efb04c92525c3530e589029f0ae7afdf603e.
      
      * Style
      
      * Use OrderedDict for I/O
      
      * Style.
      
      * Specify OrderedDict documentation.
      
      * Style :)
      Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
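The `transformers.onnx` package introduced here centers on per-model ONNX configs that declare input/output names together with their dynamic axes, using an OrderedDict as the later commits mention. A minimal sketch of that idea (class name and fields are illustrative, not the exact API):

```python
from collections import OrderedDict


class DemoOnnxConfig:
    """Sketch of an ONNX export config: each entry maps an input/output
    name to its dynamic axes (axis index -> symbolic dimension name),
    so the exporter can mark batch and sequence length as variable.
    OrderedDict keeps the I/O ordering deterministic across runs."""

    @property
    def inputs(self):
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
            ]
        )

    @property
    def outputs(self):
        return OrderedDict([("last_hidden_state", {0: "batch", 1: "sequence"})])
```

An exporter can then iterate these mappings to build the `dynamic_axes` argument of an ONNX export call.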
  18. 23 Jun, 2021 2 commits
    • Clean push to hub API (#12187) · 53c60bab
      Sylvain Gugger authored
      
      
      * Clean push to hub API
      
      * Create working dir if it does not exist
      
      * Different tweak
      
      * New API + all models + test Flax
      
      * Adds the Trainer clean up
      
      * Update src/transformers/file_utils.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Address review comments
      
      * (nit) output types
      
      * No need to set clone_from when folder exists
      
      * Update src/transformers/trainer.py
      Co-authored-by: Julien Chaumond <julien@huggingface.co>
      
      * Add generated_from_trainer tag
      
      * Update to new version
      
      * Fixes
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      Co-authored-by: Julien Chaumond <julien@huggingface.co>
      Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
    • Add output in a dictionary for TF `generate` method (#12139) · 26a2e365
      Daniel Stancl authored
      * Add output args to greedy search
      
      * Fix critical typo + make style quality
      
      * Handle generate_beam_search
      
      * Add dict_specific tests and fix the placement of encoder outputs
      
      * Add specific outputs
      
      * Update doc
      
      * Fix typo
      
      * Adjust handling encoder_outputs + Fix generating for T5
      
      * Fix generate for RAG
      
      * Fix handling output_attentions when target_mapping is not None
      
      Take care of situations when target_mapping is provided,
      as there is a 2-tuple of attentions per layer.
      
      Change from:
      
      if inputs["output_attentions"]:
          attentions = tuple(tf.transpose(t, perm=(2, 3, 0, 1)) for t in attentions)
      
      to:
      
      if inputs["output_attentions"]:
          if inputs["target_mapping"] is not None:
              # when target_mapping is provided, there is a 2-tuple of attentions per layer
              attentions = tuple(
                  tuple(tf.transpose(attn_stream, perm=(2, 3, 0, 1)) for attn_stream in t)
                  for t in attentions
              )
          else:
              attentions = tuple(tf.transpose(t, perm=(2, 3, 0, 1)) for t in attentions)
      
      * Rename kwargs to model_kwargs
      
      * make style quality
      
      * Move imports in test_modeling_tf_common.py
      
      Move ModelOutput-related imports in test_modeling_tf_common.py
      into the `is_tf_available():` statement.
      
      * Rewrite nested if-statements
      
      * Fix added tests
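Returning a structured output instead of a bare tensor, as this PR does for TF `generate`, can be sketched with a small dataclass: the sequences are always present, while optional fields are filled only when requested. The class and toy model below are illustrative, not the transformers implementation.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class GreedySearchDemoOutput:
    """Sketch of a dict-like generate output."""
    sequences: List[int]
    scores: Optional[List[float]] = None
    attentions: Optional[list] = None


def demo_greedy_generate(start_token, steps,
                         return_dict_in_generate=False, output_scores=False):
    # Toy "model": next token is previous + 1, with a fake per-step score.
    sequences, scores = [start_token], []
    for _ in range(steps):
        sequences.append(sequences[-1] + 1)
        scores.append(0.5)
    if not return_dict_in_generate:
        return sequences  # old behaviour: bare sequence of token ids
    return GreedySearchDemoOutput(
        sequences=sequences,
        scores=scores if output_scores else None,
    )
```

Keeping the bare-sequence path behind the flag preserves backward compatibility while the dict output exposes scores and attentions on demand.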
  19. 14 Jun, 2021 1 commit
    • Adding TFWav2Vec2Model (#11617) · d438eee0
      Will Rice authored
      
      
      * [WIP] Add TFWav2Vec2Model
      
      Work in progress for adding a tensorflow version of Wav2Vec2
      
      * feedback changes
      
      * small fix
      
      * Test Feedback Round 1
      
      * Add SpecAugment and CTC Loss
      
      * correct spec augment mask creation
      
      * docstring and correct copyright
      
      * correct bugs
      
      * remove bogus file
      
      * finish tests correction
      
      * del unnecessary layers
      
      * Update src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * make style
      
      * correct final bug
      
      * Feedback Changes
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
  20. 26 May, 2021 1 commit
    • Fix usage of head masks by TF encoder-decoder models' `generate()` function (#11775) · 0b933584
      Daniel Stancl authored
      * Fix Bart
      
      * Fix Blenderbot{,_small}
      
      * Fix LED
      
      * Fix Marian
      
      * Fix MBart
      
      * Fix Pegasus
      
      * Fix T5
      
      * Add test for generation with head_mask
      
      * Add a common TF test
      
      * Override a test for the LED model as head masking is not yet properly implemented
      
      * Remove all head_masks from input preparation for LED
      
      * Drop masking for T5 as it needs a bit of refactor
  21. 26 Apr, 2021 2 commits
  22. 23 Apr, 2021 1 commit
  23. 08 Apr, 2021 1 commit
  24. 15 Mar, 2021 1 commit
  25. 09 Mar, 2021 1 commit
    • Speedup tf tests (#10601) · 546cbe7e
      Lysandre Debut authored
      * Pipeline tests should be slow
      
      * Temporarily mark some tests as slow
      
      * Temporarily mark Barthez tests as slow
  26. 18 Feb, 2021 1 commit
  27. 15 Feb, 2021 1 commit
    • Check TF ops for ONNX compliance (#10025) · c8d3fa0d
      Julien Plu authored
      
      
      * Add check-ops script
      
      * Finish to implement check_tf_ops and start the test
      
      * Make the test mandatory only for BERT
      
      * Update tf_ops folder
      
      * Remove useless classes
      
      * Add the ONNX test for GPT2 and BART
      
      * Add a onnxruntime slow test + better opset flexibility
      
      * Fix test + apply style
      
      * fix tests
      
      * Switch min opset from 12 to 10
      
      * Update src/transformers/file_utils.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Fix GPT2
      
      * Remove extra shape_list usage
      
      * Fix GPT2
      
      * Address Morgan's comments
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
  28. 08 Feb, 2021 1 commit
    • Restore TF embeddings and attention layers to their previous version (#9890) · 31563e05
      Julien Plu authored
      * Refacto BERT
      
      * Restore all the concerned models
      
      * Remove print
      
      * Update template
      
      * Apply Sylvain's and Morgan's comments
      
      * Fix cast
      
      * Put the cast inside call
      
      * Remove cond in ebds
      
      * Fix funnel
      
      * Restore previous dot product (attention_scores) computation
      
      * Add ConvBERT and BART
      
      * Make all the S2S models ONNX compliant
      
      * Fix test
      
      * Fix check copies
  29. 03 Feb, 2021 1 commit
  30. 29 Jan, 2021 1 commit
  31. 28 Jan, 2021 1 commit
    • Remove redundant `test_head_masking = True` flags in test files (#9858) · 4c3ae89a
      Daniel Stancl authored
      * Remove redundant test_head_masking = True flags
      
      * Remove all redundant test_head_masking flags in PyTorch test_modeling_* files
      
      * Make test_head_masking = True as a default choice in test_modeling_tf_commong.py
      
      * Remove all redundant test_head_masking flags in TensorFlow
      test_modeling_tf_* files
      
      * Put back test_head_masking=False for TFT5 models
  32. 27 Jan, 2021 1 commit
  33. 26 Jan, 2021 1 commit
    • Add head_mask/decoder_head_mask for TF BART models (#9639) · 1867d9a8
      Daniel Stancl authored
      * Add head_mask/decoder_head_mask for TF BART models
      
      * Add head_mask and decoder_head_mask input arguments for TF BART-based
      models as a TF counterpart to the PR #9569
      
      * Add test_headmasking functionality to tests/test_modeling_tf_common.py
      
      * TODO: Add a test to verify that we can get a gradient back for
      importance score computation
      
      * Remove redundant #TODO note
      
      Remove redundant #TODO note from tests/test_modeling_tf_common.py
      
      * Fix assertions
      
      * Make style
      
      * Fix ...Model input args and adjust one new test
      
      * Add back head_mask and decoder_head_mask to BART-based ...Model
      after the last commit
      
      * Remove head_mask and decoder_head_mask from input_dict
      in TF test_train_pipeline_custom_model as these two have different
      shape than other input args (Necessary for passing this test)
      
      * Revert adding global_rng in test_modeling_tf_common.py
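A head mask, as added here for the TF BART family, is a per-layer vector of 1s and 0s multiplied into the attention probabilities so that masked heads contribute nothing to the layer output. A framework-free sketch of the per-layer multiplication (function name and nested-list shapes are illustrative):

```python
def apply_head_mask(attention_probs, head_mask):
    """attention_probs: nested lists of shape [num_heads][seq][seq];
    head_mask: list of shape [num_heads], 1.0 = keep head, 0.0 = prune.
    Multiplying zeroes out the attention weights of masked heads,
    which is how head_mask/decoder_head_mask act inside each layer."""
    return [
        [[p * m for p in row] for row in head]
        for head, m in zip(attention_probs, head_mask)
    ]
```

Because the mask enters the graph as an ordinary multiplication, gradients can flow back through it, which is what the importance-score test mentioned above relies on.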
  34. 22 Jan, 2021 2 commits
  35. 21 Jan, 2021 1 commit
    • Fix TF s2s models (#9478) · a7dabfb3
      Julien Plu authored
      * Fix Seq2Seq models for serving
      
      * Apply style
      
      * Fix Longformer
      
      * Fix mBart/Pegasus/Blenderbot
      
      * Apply style
      
      * Add a main intermediate layer
      
      * Apply style
      
      * Remove import
      
      * Apply tf.function to Longformer
      
      * Fix utils check_copy
      
      * Update S2S template
      
      * Fix BART + Blenderbot
      
      * Fix BlenderbotSmall
      
      * Fix BlenderbotSmall
      
      * Fix BlenderbotSmall
      
      * Fix MBart
      
      * Fix Marian
      
      * Fix Pegasus + template
      
      * Apply style
      
      * Fix common attributes test
      
      * Forgot to fix the LED test
      
      * Apply Patrick's comment on LED Decoder