1. 01 Apr, 2022 1 commit
  2. 25 Feb, 2022 1 commit
    • Fix tf.concatenate + test past_key_values for TF models (#15774) · 8635407b
      Yih-Dar authored

      * fix wrong method name tf.concatenate
      
      * add tests related to causal LM / decoder
      
      * make style and quality
      
      * clean-up
      
      * Fix TFBertModel's extended_attention_mask when past_key_values is provided
      
      * Fix tests
      
      * fix copies
      
      * More tf.int8 -> tf.int32 in TF test template
      
      * clean-up
      
      * Update TF test template
      
      * revert the previous commit + update the TF test template
      
      * Fix TF template extended_attention_mask when past_key_values is provided
      
      * Fix some styles manually
      
      * clean-up
      
      * Fix ValueError: too many values to unpack in the test
      
      * Fix more: too many values to unpack in the test
      
      * Add a comment for extended_attention_mask when there is past_key_values
      
      * Fix TFElectra extended_attention_mask when past_key_values is provided
      
      * Add tests to other TF models
      
      * Fix for TF Electra test: add prepare_config_and_inputs_for_decoder
      
      * Fix not passing training arg to lm_head in TFRobertaForCausalLM
      
      * Fix tests (with past) for TF Roberta
      
      * add testing for past_key_values for TFElectra model
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
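      The two fixes here generalize: TensorFlow's concatenation op is `tf.concat` (there is no `tf.concatenate`), and when `past_key_values` is supplied the attention mask has to cover the cached tokens as well as the new ones. A minimal sketch of the mask extension, independent of the actual modeling code:

          import tensorflow as tf

          # Toy shapes: 2 sequences, 3 cached (past) tokens, 1 new token per step.
          batch_size, past_length, seq_length = 2, 3, 1

          # The mask passed in covers only the new tokens...
          attention_mask = tf.ones((batch_size, seq_length), dtype=tf.int32)

          # ...so it is extended to past_length + seq_length before being applied
          # to the cached keys/values -- using tf.concat, not tf.concatenate.
          past_mask = tf.ones((batch_size, past_length), dtype=tf.int32)
          extended_attention_mask = tf.concat([past_mask, attention_mask], axis=-1)

          print(extended_attention_mask.shape)  # (2, 4) == (batch, past + new)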
  3. 23 Feb, 2022 1 commit
  4. 12 Oct, 2021 1 commit
    • Add TFEncoderDecoderModel + Add cross-attention to some TF models (#13222) · 8b240a06
      Yih-Dar authored

      * Add cross attentions to TFGPT2Model
      
      * Add TFEncoderDecoderModel
      
      * Add TFBaseModelOutputWithPoolingAndCrossAttentions
      
      * Add cross attentions to TFBertModel
      
      * Fix past or past_key_values argument issue
      
      * Fix generation
      
      * Fix save and load
      
      * Add some checks and comments
      
      * Clean the code that deals with past keys/values
      
      * Add kwargs to processing_inputs
      
      * Add serving_output to TFEncoderDecoderModel
      
      * Some cleaning + fix use_cache value issue
      
      * Fix tests + add bert2bert/bert2gpt2 tests
      
      * Fix more tests
      
      * Ignore crossattention.bias when loading GPT2 weights into TFGPT2
      
      * Fix return_dict_in_generate in tf generation
      
      * Fix is_token_logit_eos_token bug in tf generation
      
      * Finalize the tests after fixing some bugs
      
      * Fix another is_token_logit_eos_token bug in tf generation
      
      * Add/Update docs
      
      * Add TFBertEncoderDecoderModelTest
      
      * Clean test script
      
      * Add TFEncoderDecoderModel to the library
      
      * Add cross attentions to TFRobertaModel
      
      * Add TFRobertaEncoderDecoderModelTest
      
      * make style
      
      * Change the way of position_ids computation
      
      * bug fix
      
      * Fix copies in tf_albert
      
      * Remove some copied from and apply some fix-copies
      
      * Remove some copied
      
      * Add cross attentions to some other TF models
      
      * Remove encoder_hidden_states from TFLayoutLMModel.call for now
      
      * Make style
      
      * Fix TFRemBertForCausalLM
      
      * Revert the change to longformer + Remove copies
      
      * Revert the change to albert and convbert + Remove copies
      
      * make quality
      
      * make style
      
      * Add TFRembertEncoderDecoderModelTest
      
      * make quality and fix-copies
      
      * test TFRobertaForCausalLM
      
      * Fixes for failed tests
      
      * Fixes for failed tests
      
      * fix more tests
      
      * Fixes for failed tests
      
      * Fix Auto mapping order
      
      * Fix TFRemBertEncoder return value
      
      * fix tf_rembert
      
      * Check copies are OK
      
      * Fix "TFBaseModelOutputWithPastAndCrossAttentions is not defined" error
      
      * Add TFEncoderDecoderModelSaveLoadTests
      
      * fix tf weight loading
      
      * check the change of use_cache
      
      * Revert the change
      
      * Add missing test_for_causal_lm for TFRobertaModelTest
      
      * Try cleaning past
      
      * fix _reorder_cache
      
      * Revert some files to original versions
      
      * Keep as many copies as possible
      
      * Apply suggested changes - Use raise ValueError instead of assert
      
      * Move import to top
      
      * Fix wrong require_torch
      
      * Replace more assert by raise ValueError
      
      * Add test_pt_tf_model_equivalence (the test won't pass for now)
      
      * add test for loading/saving
      
      * finish
      
      * finish
      
      * Remove test_pt_tf_model_equivalence
      
      * Update tf modeling template
      
      * Remove pooling, added in the prev. commit, from MainLayer
      
      * Update tf modeling test template
      
      * Move inputs["use_cache"] = False to modeling_tf_utils.py
      
      * Fix torch.Tensor in the comment
      
      * fix use_cache
      
      * Fix missing use_cache in ElectraConfig
      
      * Add a note to from_pretrained
      
      * Fix style
      
      * Change test_encoder_decoder_save_load_from_encoder_decoder_from_pt
      
      * Fix TFMLP (in TFGPT2) activation issue
      
      * Fix None past_key_values value in serving_output
      
      * Don't call get_encoderdecoder_model in TFEncoderDecoderModelTest.test_configuration_tie until we have a TF checkpoint on Hub
      
      * Apply review suggestions - style for cross_attns in serving_output
      
      * Apply review suggestions - change assert + docstrings
      
      * break the error message to respect the char limit
      
      * deprecate the argument past
      
      * fix docstring style
      
      * Update the encoder-decoder rst file
      
      * fix Unknown interpreted text role "method"
      
      * fix typo
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
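      `TFEncoderDecoderModel` mirrors its PyTorch counterpart: any pretrained TF encoder can be combined with any TF decoder that supports cross-attention. A usage sketch (checkpoint names are illustrative; any compatible pair works):

          from transformers import BertTokenizer, TFEncoderDecoderModel

          # Build a bert2bert model; the decoder side is loaded as a causal LM with
          # is_decoder=True, which enables the cross-attention layers from this PR.
          model = TFEncoderDecoderModel.from_encoder_decoder_pretrained(
              "bert-base-uncased", "bert-base-uncased"
          )
          tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

          inputs = tokenizer("A short input text.", return_tensors="tf")
          outputs = model(input_ids=inputs.input_ids, decoder_input_ids=inputs.input_ids)
          print(outputs.logits.shape)  # (batch, decoder_seq_len, vocab_size)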
  5. 15 Feb, 2021 1 commit
    • Check TF ops for ONNX compliance (#10025) · c8d3fa0d
      Julien Plu authored

      * Add check-ops script
      
      * Finish to implement check_tf_ops and start the test
      
      * Make the test mandatory only for BERT
      
      * Update tf_ops folder
      
      * Remove useless classes
      
      * Add the ONNX test for GPT2 and BART
      
      * Add a onnxruntime slow test + better opset flexibility
      
      * Fix test + apply style
      
      * fix tests
      
      * Switch min opset from 12 to 10
      
      * Update src/transformers/file_utils.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Fix GPT2
      
      * Remove extra shape_list usage
      
      * Fix GPT2
      
      * Address Morgan's comments
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
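      The check itself reduces to walking a traced TF graph and flagging any op type missing from a per-opset whitelist of ONNX-exportable ops (the PR settles on a minimum opset of 10). A simplified sketch of the idea; `ONNX_SUPPORTED_OPS` is a hypothetical stand-in for the lists kept in the tf_ops folder:

          import tensorflow as tf

          # Hypothetical whitelist; the real per-opset lists live in the tf_ops folder.
          ONNX_SUPPORTED_OPS = {"Placeholder", "Const", "MatMul", "AddV2", "Relu", "Identity"}

          @tf.function
          def tiny_model(x):
              w = tf.constant(1.0, shape=(8, 4))
              return tf.nn.relu(tf.matmul(x, w) + 1.0)

          # Trace to a concrete function and collect every op type in its graph.
          func = tiny_model.get_concrete_function(tf.TensorSpec((None, 8), tf.float32))
          op_types = {op.type for op in func.graph.get_operations()}

          incompatible = op_types - ONNX_SUPPORTED_OPS
          print("ONNX-incompatible ops:", incompatible or "none")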
  6. 26 Jan, 2021 1 commit
      Add head_mask/decoder_head_mask for TF BART models (#9639) · 1867d9a8
      Daniel Stancl authored
      * Add head_mask/decoder_head_mask for TF BART models
      
      * Add head_mask and decoder_head_mask input arguments for TF BART-based
      models as a TF counterpart to the PR #9569
      
      * Add test_headmasking functionality to tests/test_modeling_tf_common.py
      
      * TODO: Add a test to verify that we can get a gradient back for
      importance score computation
      
      * Remove redundant #TODO note
      
      Remove redundant #TODO note from tests/test_modeling_tf_common.py
      
      * Fix assertions
      
      * Make style
      
      * Fix ...Model input args and adjust one new test
      
      * Add back head_mask and decoder_head_mask to BART-based ...Model
      after the last commit
      
      * Remove head_mask and decoder_head_mask from input_dict
      in TF test_train_pipeline_custom_model as these two have different
      shape than other input args (Necessary for passing this test)
      
      * Revert adding global_rng in test_modeling_tf_common.py
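      Both masks have shape `(num_layers, num_heads)`, with 1.0 keeping a head and 0.0 silencing it for that forward pass; keeping the mask in the graph is what enables the head-importance gradients mentioned above. A sketch of the new arguments (checkpoint name illustrative):

          import numpy as np
          import tensorflow as tf
          from transformers import BartTokenizer, TFBartModel

          model = TFBartModel.from_pretrained("facebook/bart-base")
          tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
          inputs = tokenizer("Masking a single attention head.", return_tensors="tf")

          cfg = model.config
          # 1.0 keeps a head, 0.0 silences it: zero out head 0 of encoder layer 0.
          mask = np.ones((cfg.encoder_layers, cfg.encoder_attention_heads), dtype=np.float32)
          mask[0, 0] = 0.0

          outputs = model(
              inputs.input_ids,
              head_mask=tf.constant(mask),
              decoder_head_mask=tf.ones((cfg.decoder_layers, cfg.decoder_attention_heads)),
          )
          print(outputs.last_hidden_state.shape)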
  7. 07 Dec, 2020 1 commit
  8. 17 Nov, 2020 1 commit
      Reorganize repo (#8580) · c89bdfbe
      Sylvain Gugger authored
      * Put models in subfolders
      
      * Styling
      
      * Fix imports in tests
      
      * More fixes in test imports
      
      * Sneaky hidden imports
      
      * Fix imports in doc files
      
      * More sneaky imports
      
      * Finish fixing tests
      
      * Fix examples
      
      * Fix path for copies
      
      * More fixes for examples
      
      * Fix dummy files
      
      * More fixes for example
      
      * More model import fixes
      
      * Is this why you're unhappy GitHub?
      
      * Fix imports in convert command
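      For code that reached into internal module paths, the subfolder move changes the import location; the public top-level imports are untouched. Roughly:

          # Before the reorganization (flat layout):
          #   from transformers.modeling_bert import BertModel

          # After it (one subfolder per model):
          from transformers.models.bert.modeling_bert import BertModel

          # The stable public import is unaffected either way:
          from transformers import BertModel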
  9. 16 Nov, 2020 1 commit
      Switch `return_dict` to `True` by default. (#8530) · 1073a2bd
      Sylvain Gugger authored
      * Use the CI to identify failing tests
      
      * Remove from all examples and tests
      
      * More default switch
      
      * Fixes
      
      * More test fixes
      
      * More fixes
      
      * Last fixes hopefully
      
      * Run on the real suite
      
      * Fix slow tests
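      Concretely, model calls now return `ModelOutput` objects by default, so results are read by name rather than by position; the old tuple form remains one keyword away. A sketch with a TF checkpoint (any model behaves the same):

          from transformers import BertTokenizer, TFBertModel

          model = TFBertModel.from_pretrained("bert-base-uncased")
          tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
          inputs = tokenizer("return_dict is now on by default", return_tensors="tf")

          # New default: a ModelOutput, indexed by attribute name.
          outputs = model(inputs)
          print(outputs.last_hidden_state.shape)

          # Previous behavior, still available per call:
          last_hidden_state, pooler_output = model(inputs, return_dict=False)[:2]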
  10. 18 Oct, 2020 1 commit
      [Dependencies|tokenizers] Make both SentencePiece and Tokenizers optional dependencies (#7659) · ba8c4d0a
      Thomas Wolf authored
      * splitting fast and slow tokenizers [WIP]
      
      * [WIP] splitting sentencepiece and tokenizers dependencies
      
      * update dummy objects
      
      * add name_or_path to models and tokenizers
      
      * prefix added to file names
      
      * prefix
      
      * styling + quality
      
      * splitting all the tokenizer files - sorting sentencepiece based ones
      
      * update tokenizers version to 0.9.0
      
      * remove hard dependency on sentencepiece 🎉

      * and remove hard dependency on tokenizers 🎉

      * update conversion script
      
      * update missing models
      
      * fixing tests
      
      * move test_tokenization_fast to main tokenization tests - fix bugs
      
      * bump up tokenizers
      
      * fix bert_generation
      
      * update and fix several tokenizers
      
      * keep sentencepiece in deps for now
      
      * fix funnel and deberta tests
      
      * fix fsmt
      
      * fix marian tests
      
      * fix layoutlm
      
      * fix squeezebert and gpt2
      
      * fix T5 tokenization
      
      * fix xlnet tests
      
      * style
      
      * fix mbart
      
      * bump up tokenizers to 0.9.2
      
      * fix model tests
      
      * fix tf models
      
      * fix seq2seq examples
      
      * fix tests without sentencepiece
      
      * fix slow => fast conversion without sentencepiece
      
      * update auto and bert generation tests
      
      * fix mbart tests
      
      * fix auto and common test without tokenizers
      
      * fix tests without tokenizers
      
      * clean up and lighten tests when tokenizers + sentencepiece are both off
      
      * style quality and tests fixing
      
      * add sentencepiece to doc/examples reqs
      
      * leave sentencepiece on for now
      
      * style, quality, split herbert and fix pegasus
      
      * WIP Herbert fast
      
      * add sample_text_no_unicode and fix herbert tokenization
      
      * skip FSMT example test for now
      
      * fix style
      
      * fix fsmt in example tests
      
      * update following Lysandre and Sylvain's comments
      
      * Update src/transformers/testing_utils.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/testing_utils.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/tokenization_utils_base.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/tokenization_utils_base.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
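      With both libraries optional, callers have to probe for them at runtime instead of assuming the import works. A sketch using the availability helpers this split introduced (import paths as of this era of the codebase):

          from transformers import AutoTokenizer
          from transformers.file_utils import is_sentencepiece_available, is_tokenizers_available

          print("sentencepiece available:", is_sentencepiece_available())
          print("tokenizers available:", is_tokenizers_available())

          # Fast (Rust) tokenizers need the `tokenizers` package; fall back to the
          # slow Python implementation when it is missing.
          tokenizer = AutoTokenizer.from_pretrained(
              "bert-base-uncased", use_fast=is_tokenizers_available()
          )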
  11. 26 Aug, 2020 1 commit
  12. 24 Aug, 2020 1 commit
  13. 13 Aug, 2020 1 commit
      cleanup tf unittests: part 2 (#6260) · e983da0e
      Stas Bekman authored
      * cleanup torch unittests: part 2
      
      * remove trailing comma added by isort, and which breaks flake
      
      * one more comma
      
      * revert odd balls
      
      * part 3: odd cases
      
      * more ["key"] -> .key refactoring
      
      * .numpy() is not needed
      
      * more unncessary .numpy() removed
      
      * more simplification
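      Two of these refactors generalize to any TF test: model outputs allow attribute access, so `result["key"]` becomes `result.key`, and `tf.TensorShape` compares directly against plain tuples, so the `.numpy()` round-trip before a shape assertion is dead weight. For instance:

          import tensorflow as tf

          logits = tf.zeros((2, 5, 10))

          # Before: converting to numpy just to inspect the shape.
          assert logits.numpy().shape == (2, 5, 10)

          # After: a tf.TensorShape compares against a tuple directly.
          assert logits.shape == (2, 5, 10)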
  14. 05 Aug, 2020 1 commit
      Tf model outputs (#6247) · c67d1a02
      Sylvain Gugger authored
      * TF outputs and test on BERT
      
      * Albert to DistilBert
      
      * All remaining TF models except T5
      
      * Documentation
      
      * One file forgotten
      
      * Add new models and fix issues
      
      * Quality improvements
      
      * Add T5
      
      * A bit of cleanup
      
      * Fix for slow tests
      
      * Style
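      The new TF output classes follow the same pattern as the PyTorch ones: lightweight dataclasses with named, optional fields in place of bare tuples. A stripped-down illustration of the pattern (not the actual class definitions):

          from dataclasses import dataclass
          from typing import Optional

          import tensorflow as tf

          @dataclass
          class ToyTFOutput:
              # Named, optional fields, as in the real TFBaseModelOutput classes.
              last_hidden_state: Optional[tf.Tensor] = None
              pooler_output: Optional[tf.Tensor] = None

          out = ToyTFOutput(last_hidden_state=tf.zeros((1, 4, 8)))
          print(out.last_hidden_state.shape)  # accessed by name instead of by index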
  15. 01 Jul, 2020 1 commit
  16. 24 Jun, 2020 1 commit
  17. 16 Jun, 2020 1 commit
  18. 02 Jun, 2020 2 commits
  19. 01 May, 2020 1 commit
      [ci] Load pretrained models into the default (long-lived) cache · f54dc3f4
      Julien Chaumond authored
      There's an inconsistency right now where:
      - we load some models into CACHE_DIR
      - and some models in the default cache
      - and often, in both for the same models
      
      When running the RUN_SLOW tests, this takes a lot of disk space, time, and bandwidth.
      
      I'd rather always use the default cache
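      The default cache can be redirected with an environment variable, which is what makes one long-lived cache practical on CI; the directory below is illustrative:

          import os

          # Point every from_pretrained() download at one persistent directory.
          # Must be set before transformers is imported.
          os.environ["TRANSFORMERS_CACHE"] = "/ci/persistent-cache"

          from transformers import BertModel

          model = BertModel.from_pretrained("bert-base-uncased")  # reused across runs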
  20. 17 Apr, 2020 1 commit
  21. 03 Mar, 2020 1 commit
  22. 04 Feb, 2020 1 commit
  23. 06 Jan, 2020 2 commits
  24. 22 Dec, 2019 6 commits
  25. 21 Dec, 2019 2 commits
      Reformat source code with black. · fa84ae26
      Aymeric Augustin authored
      This is the result of:
      
          $ black --line-length 119 examples templates transformers utils hubconf.py setup.py
      
      There are a lot of fairly long lines in the project. As a consequence, I'm
      picking the longest widely accepted line length, 119 characters.
      
      This is also Thomas' preference, because it allows for explicit variable
      names, to make the code easier to understand.
      Take advantage of the cache when running tests. · b670c266
      Aymeric Augustin authored
      Caching models across test cases and across runs of the test suite makes
      slow tests somewhat more bearable.
      
      Use gettempdir() instead of /tmp in tests. This makes it easier to
      change the location of the cache with semi-standard TMPDIR/TEMP/TMP
      environment variables.
      
      Fix #2222.
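      A short sketch of the `gettempdir()` point: deriving the cache path from the standard temp directory lets TMPDIR/TEMP/TMP relocate it without any code change:

          import os
          from tempfile import gettempdir

          # Respects TMPDIR/TEMP/TMP instead of hard-coding /tmp.
          cache_dir = os.path.join(gettempdir(), "transformers_test_cache")
          os.makedirs(cache_dir, exist_ok=True)
          print(cache_dir)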
  26. 13 Dec, 2019 1 commit
  27. 06 Dec, 2019 1 commit
      Remove dependency on pytest for running tests (#2055) · 35401fe5
      Aymeric Augustin authored
      * Switch to plain unittest for skipping slow tests.
      
      Add a RUN_SLOW environment variable for running them.
      
      * Switch to plain unittest for PyTorch dependency.
      
      * Switch to plain unittest for TensorFlow dependency.
      
      * Avoid leaking open files in the test suite.
      
      This prevents spurious warnings when running tests.
      
      * Fix unicode warning on Python 2 when running tests.
      
      The warning was:
      
          UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
      
      * Support running PyTorch tests on a GPU.
      
      Reverts 27e015bd.
      
      * Tests no longer require pytest.
      
      * Make tests pass on cuda
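      The plain-unittest replacement for a pytest marker is a decorator built on `unittest.skipUnless`, gated by the new RUN_SLOW variable. A minimal sketch of the pattern (the real helpers live in `transformers.testing_utils`):

          import os
          import unittest

          def slow(test_case):
              """Skip a test unless RUN_SLOW=1 is set in the environment."""
              return unittest.skipUnless(
                  os.environ.get("RUN_SLOW", "0") == "1", "test is slow; set RUN_SLOW=1"
              )(test_case)

          class ExampleTest(unittest.TestCase):
              @slow
              def test_expensive_model_download(self):
                  self.assertTrue(True)  # stands in for a slow check

          if __name__ == "__main__":
              unittest.main()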
  28. 24 Oct, 2019 1 commit
  29. 26 Sep, 2019 1 commit
  30. 20 Sep, 2019 1 commit
  31. 09 Sep, 2019 1 commit
  32. 08 Sep, 2019 1 commit