1. 27 Jan, 2022 1 commit
    • improve saving strategy of sentencepiece tokenizer (#15328) · ade7371a
      SaulLu authored
      
      
      * add new test
      
      * add a feature to save the sentencepiece tokenizer model when the init file was deleted
      
      * update marian
      
      * update m2m_100
      
      * fix marian
      
      * update speech to text
      
      * override test for layoutxlm
      
      * fix saving bartpho
      
      * remove hardcoded values in bartpho
      
      * special token string version
      
      * finish bartpho
      
      * override layoutxlm test
      
      * add mbart
      
      * move special tokens list
      
      * format
      
      * Revert "format"
      
      This reverts commit 37a40df37903a932c2f951cbd33acb684246bae7.
      
      * simplify list of string of special tokens
      
      * Re-write `self.fairseq_tokens_to_ids` initialization logic with special tokens
      Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
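      A minimal sketch of the fallback this change introduces, assuming a slow
      tokenizer that keeps the path passed at init in `self.vocab_file` and the
      loaded processor in `self.sp_model` (names as commonly used by the
      library's sentencepiece tokenizers):

          import os
          from shutil import copyfile

          def save_vocabulary(self, save_directory, filename_prefix=None):
              prefix = filename_prefix + "-" if filename_prefix else ""
              out_file = os.path.join(save_directory, prefix + "sentencepiece.bpe.model")
              if os.path.isfile(self.vocab_file):
                  # The file passed at init still exists: copy it, as before.
                  copyfile(self.vocab_file, out_file)
              else:
                  # New behavior: the init file was deleted, so re-serialize
                  # the in-memory model instead of failing.
                  with open(out_file, "wb") as f:
                      f.write(self.sp_model.serialized_model_proto())
              return (out_file,)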
  2. 06 Jan, 2022 1 commit
  3. 03 Jan, 2022 1 commit
  4. 30 Dec, 2021 1 commit
  5. 03 Dec, 2021 1 commit
    • Improve tokenizer tests (#13594) · 66ea7391
      Li-Huai (Allan) Lin authored
      * Use new method to acquire tokenizers
      
      * Resolve TODOs.
      
      * Style
      
      * Fix
      
      * Enable do_lower_case in test_tokenize_special_tokens
      
      * Apply suggestion from code review
      
      * Fix mask token handling
      
      * Revert "Fix mask token handling"
      
      This reverts commit daaa3f5291b1f71e5bc3604ca281c000000c4648.
      
      * Fix FNet mask token tokenization
      
      * Complete everything
      
      * Apply suggestions from code review
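      A hedged sketch of the property these tests pin down: a token registered
      as a special token must come back intact from `tokenize`, even when
      `do_lower_case=True` (the token string is illustrative):

          from transformers import BertTokenizer

          tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
          tokenizer.add_special_tokens({"additional_special_tokens": ["[SPECIAL]"]})

          tokens = tokenizer.tokenize("hello [SPECIAL] world")
          assert "[SPECIAL]" in tokens  # neither lower-cased nor split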
  6. 10 Nov, 2021 1 commit
  7. 08 Nov, 2021 1 commit
  8. 02 Nov, 2021 1 commit
  9. 11 Oct, 2021 1 commit
  10. 08 Oct, 2021 1 commit
  11. 05 Oct, 2021 1 commit
  12. 17 Sep, 2021 1 commit
  13. 09 Sep, 2021 1 commit
  14. 02 Sep, 2021 1 commit
    • Correct order of overflowing_tokens for slow tokenizer (#13179) · b91e65af
      Apoorv Garg authored
      * correct order of overflowing_tokens for slow tokenizer (fixes issue #13148)
      
      * python 3.9 requires sentencepiece version 0.1.94 or above
      
      * slicing of ids fixed in truncated_sequence()
      
      * Update setup.py
      
      * Correct order of overflowing tokens for pair of sentences
      
      * code reformatted
      
      * Update tokenization_utils_base.py
      
      * reformatting file
      
      * test to check single_input added
      
      * missing function restored
      
      * test to check pair_input overflowing tokens order
      
      * test to check pair_input overflowing tokens order
      
      * test to check pair_input overflowing tokens order
      
      * added an error message for pair of seq and longest_first strategy
      
      * test for pair_input modified
      
      * variable name corrected
      
      * fixed a typo in error message
      
      * requested changes implemented
      
      * required test added
      
      * Corrected the message to match test message
      
      * added error message for Luke Tokenizer
      
      * lost test recovered
      
      * docstring for truncate_sequences and prepare_for_model updated
      
      * docstring for luke tokenizer updated
      
      * updated ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING
      
      * aligned text and fixed punctuation
      
      * improved style and quality of code
      
      * fixed error_msg in truncate_sequences
      
      * replaced encode_plus method with regular call method
      
      * clean up
      
      * rephrased the docstring
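      A hedged illustration of the fixed behavior (model name and token counts
      are illustrative): with a slow tokenizer, the cut-off ids should now come
      back in the original reading order rather than reversed:

          from transformers import BertTokenizer

          tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
          enc = tokenizer(
              "one two three four five six seven eight",
              max_length=6,  # room for 4 content tokens plus [CLS] and [SEP]
              truncation="longest_first",
              return_overflowing_tokens=True,
          )
          # Expect the overflow in source order, e.g. "five six seven eight".
          print(tokenizer.convert_ids_to_tokens(enc["overflowing_tokens"]))

      For pairs of sequences, the PR instead raises an error when the
      `longest_first` strategy is combined with `return_overflowing_tokens`,
      since the overflow of the two sequences would be interleaved.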
  15. 01 Sep, 2021 1 commit
  16. 23 Aug, 2021 1 commit
    • Change how "additional_special_tokens" argument in the ".from_pretrained" method of the tokenizer is taken into account (#13056) · 7223844d
      SaulLu authored
      
      * add test
      
      * add change in PretrainedTokenizerBase
      
      * change Luke
      
      * deactivate
      
      * add the possibility to add additional special tokens for M2M100
      
      * format
      
      * add special test for canine
      
      * proposed changes for mbart
      
      * proposed changes for mbart50
      
      * proposed changes for byt5
      
      * proposed changes for canine
      
      * proposed changes for t5
      
      * test fast and slow
      
      * remove comment
      
      * remove comment
      
      * add fast version for all tests
      
      * replace break by continue
      
      * add more comments
      
      * add check to avoid duplicates
      
      * remove comment
      
      * format
      
      * proposed change for wav2vec2
      
      * reverse changes mbart
      
      * uncomment
      
      * format
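      A hedged example of the behavior under change (model name and token
      strings are illustrative): special tokens passed to `.from_pretrained`
      should end up registered on the returned tokenizer:

          from transformers import AutoTokenizer

          tokenizer = AutoTokenizer.from_pretrained(
              "t5-small",
              additional_special_tokens=["<special_1>", "<special_2>"],
          )
          assert "<special_1>" in tokenizer.additional_special_tokens
          # Like other special tokens, the added ones are never split.
          assert tokenizer.tokenize("<special_1>") == ["<special_1>"]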
  17. 17 Jul, 2021 1 commit
  18. 16 Jul, 2021 1 commit
  19. 01 Jul, 2021 1 commit
  20. 29 Jun, 2021 1 commit
  21. 23 Jun, 2021 1 commit
  22. 14 Jun, 2021 1 commit
  23. 07 Jun, 2021 1 commit
  24. 01 Jun, 2021 1 commit
    • Add regression tests for slow sentencepiece tokenizers. (#11737) · fcad8018
      Philip May authored
      * add test_vocab_size for sentencepiece tok.
      
      * add test_get_vocab for sentencepiece tok.
      
      * add test_convert_token_and_id for sentencepiece tok.
      
      * add test_tokenize_and_convert_tokens_to_string for all tok.
      
      * improve test_tokenize_and_convert_tokens_to_string for sp. tok.
      
      * add common tokenizer integration tests
      - for albert
      - for barthez
      
      * add tokenizer integration tests to bert gen.
      
      * add most tokenizer integration tests
      
      * fix camembert tokenizer integration test
      
      * add tokenizer integration test to marian
      
      * add tokenizer integration test to reformer
      
      * add typing and doc to tokenizer_integration_test_util
      
      * fix tokenizer integration test of reformer
      
      * improve test_sentencepiece_tokenize_and_convert_tokens_to_string
      
      * empty commit to trigger CI
      
      * fix tokenizer integration test of reformer
      
      * remove code not needed anymore
      
      * empty commit to trigger CI
      
      * empty commit to trigger CI
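      A hedged sketch of the kind of regression check added here, using ALBERT
      as one of the covered sentencepiece tokenizers (the round-trip output
      assumes ALBERT's default lower-casing and no extra added tokens):

          from transformers import AlbertTokenizer

          tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")

          # The two vocabulary accessors agree with each other.
          assert tokenizer.vocab_size == len(tokenizer.get_vocab())

          # tokenize / convert_tokens_to_string round-trip.
          tokens = tokenizer.tokenize("This is a test")
          assert tokenizer.convert_tokens_to_string(tokens) == "this is a test"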
  25. 13 May, 2021 1 commit
    • Enable option for subword regularization in more tokenizers. (#11417) · 37ed3ab7
      Philip May authored
      * improve slow class tok usage at xlm rob
      
      * add subword regularization for barthez
      
      * improve barthez tok. test
      
      * fix tokenizer tests
      
      * add subword regularization for camembert
      
      * add subword regularization for deberta v2 tokenizer
      
      * add more doc to deberta v2 tokenizer
      
      * add subword regularization for speech to text tok.
      
      * fix sp_model_kwargs type in speech 2 text tok.
      
      * add subword regularization for M2M100 tok.
      
      * add more concrete type hints
      
      * fix tests for m2m100 and s2t tok.
      
      * add missing Any import
      
      * fix syntax error in m2m100 tok.
      
      * fix unpickle of m2m100 and s2t tok.
      
      * fix test of m2m100 and s2t tok.
      
      * improve unpickle of deberta v2 tok.
      
      * add test for pickle of barthez & camembert
      
      * fix pickle of barthez & camembert
      
      * add test for deberta v2 tok. pickle
      
      * fix m2m100 tok. pickle
      
      * fix s2t tok. pickle
      
      * add subword regularization to albert tok.
      
      * refactor subword reg. test into TokenizerTesterMixin
      
      improve albert tok. test
      
      remove sample argument form albert tok.
      
      check subword reg. using TokenizerTesterMixin
      
      improve tok. tests
      
      improve xlm roberta tok. tests
      
      improve xlm roberta tok. tests
      
      * add subword regularization for big bird tok.
      
      * improve xlm roberta tok. test
      
      * add subword regularization for mbart50 tok.
      
      * add subword regularization for pegasus tok.
      
      * add subword regularization for reformer tok.
      
      * add subword regularization for T5 tok.
      
      * fix t5 tok. test formatting
      
      * add subword regularization for xlm_proph. tok.
      
      * add subword regularization for xlnet tok.
      
      * add subword regularization for bert_gen tok.
      
      * add typing to tokenizers
      
      * add typing to xlm rob. tok
      
      * add subword regularization for marian tok.
      
      * add reverse tok. test
      
      * fix marian tok test
      
      * fix marian tok test
      
      * fix casing in tok. tests
      
      * fix style of tok. common test
      
      * fix deberta v2 tok test
      
      * add type annotations to tok. tests
      
      * add type annotations to tok. __init__
      
      * add typing to tokenizer
      
      * add type annotations to tok. __init__
      
      * don't specify the default when it's None
      
      * fix barthez tok. doc
      
      * move sentencepiece tok. tests to TokenizerTesterMixin
      
      * fix unused imports
      
      * fix albert tok. test
      
      * add comment to sentencepiece test options
      
      * fix Any import at big bird tok.
      
      * fix Any import at xlm prophetnet tok.
      
      * empty commit to trigger CI
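      A hedged example of the option being threaded through these tokenizers:
      `sp_model_kwargs` forwards SentencePiece's sampling parameters, so the
      segmentation can vary between calls (the values shown are illustrative):

          from transformers import XLMRobertaTokenizer

          tokenizer = XLMRobertaTokenizer.from_pretrained(
              "xlm-roberta-base",
              sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1},
          )
          # With sampling enabled, repeated calls may split the text differently.
          print(tokenizer.tokenize("This is a test"))
          print(tokenizer.tokenize("This is a test"))

      The pickle-related commits above matter because a live
      SentencePieceProcessor is not picklable by default; the fixes re-create
      it on unpickle from the stored model file and `sp_model_kwargs`.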
  26. 04 May, 2021 1 commit
  27. 26 Apr, 2021 2 commits
  28. 23 Apr, 2021 1 commit
  29. 15 Apr, 2021 1 commit
  30. 05 Apr, 2021 1 commit
  31. 31 Mar, 2021 1 commit
  32. 16 Mar, 2021 1 commit
  33. 25 Feb, 2021 1 commit
  34. 02 Feb, 2021 1 commit
  35. 14 Jan, 2021 1 commit
  36. 12 Jan, 2021 1 commit
    • Refactor `prepare_seq2seq_batch` (#9524) · 063d8d27
      Sylvain Gugger authored
      * Add target contextmanager and rework prepare_seq2seq_batch
      
      * Fix tests, treat BART and Barthez
      
      * Add last tokenizers
      
      * Fix test
      
      * Set src token before calling the superclass
      
      * Remove special behavior for T5
      
      * Remove needless imports
      
      * Remove needless asserts
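      A hedged sketch of the context-manager pattern this refactor introduces
      in place of per-tokenizer `prepare_seq2seq_batch` logic (the checkpoint
      name is illustrative):

          from transformers import MarianTokenizer

          tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")

          inputs = tokenizer(["Hello world"], return_tensors="pt", padding=True)
          with tokenizer.as_target_tokenizer():
              # Inside the context the tokenizer applies its target-language
              # settings (e.g. the target sentencepiece model for Marian).
              labels = tokenizer(["Hallo Welt"], return_tensors="pt", padding=True)
          inputs["labels"] = labels["input_ids"]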
  37. 15 Dec, 2020 1 commit
    • [WIP] Tapas v4 (tres) (#9117) · 1551e2dc
      NielsRogge authored
      
      
      * First commit: adding all files from tapas_v3
      
      * Fix multiple bugs including soft dependency and new structure of the library
      
      * Improve testing by adding torch_device to inputs and adding dependency on scatter
      
      * Use Python 3 inheritance rather than Python 2
      
      * First draft model cards of base sized models
      
      * Remove model cards as they are already on the hub
      
      * Fix multiple bugs with integration tests
      
      * All model integration tests pass
      
      * Remove print statement
      
      * Add test for convert_logits_to_predictions method of TapasTokenizer
      
      * Incorporate suggestions by Google authors
      
      * Fix remaining tests
      
      * Change position embeddings sizes to 512 instead of 1024
      
      * Comment out positional embedding sizes
      
      * Update PRETRAINED_VOCAB_FILES_MAP and PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
      
      * Added more model names
      
      * Fix truncation when no max length is specified
      
      * Disable torchscript test
      
      * Make style & make quality
      
      * Quality
      
      * Address CI needs
      
      * Test the Masked LM model
      
      * Fix the masked LM model
      
      * Truncate when overflowing
      
      * More much needed docs improvements
      
      * Fix some URLs
      
      * Some more docs improvements
      
      * Test PyTorch scatter
      
      * Set to slow + minify
      
      * Calm flake8 down
      
      * Add add_pooling_layer argument to TapasModel
      
      Fix comments by @sgugger and @patrickvonplaten
      
      * Fix issue in docs + fix style and quality
      
      * Clean up conversion script and add task parameter to TapasConfig
      
      * Revert the task parameter of TapasConfig
      
      Some minor fixes
      
      * Improve conversion script and add test for absolute position embeddings
      
      * Improve conversion script and add test for absolute position embeddings
      
      * Fix bug with reset_position_index_per_cell arg of the conversion cli
      
      * Add notebooks to the examples directory and fix style and quality
      
      * Apply suggestions from code review
      
      * Move from `nielsr/` to `google/` namespace
      
      * Apply Sylvain's comments
      Co-authored-by: sgugger <sylvain.gugger@gmail.com>
      Co-authored-by: Rogge Niels <niels.rogge@howest.be>
      Co-authored-by: LysandreJik <lysandre.debut@reseau.eseo.fr>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
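      A hedged sketch of the flow TapasTokenizer enables, including the
      `convert_logits_to_predictions` step tested above (checkpoint name is
      illustrative; running the model also needs the scatter soft dependency
      mentioned in the commits):

          import pandas as pd
          from transformers import TapasTokenizer

          tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-finetuned-wtq")
          table = pd.DataFrame(
              {"Actor": ["Brad Pitt", "Leonardo DiCaprio"], "Age": ["56", "45"]}
          )  # cell values must be strings
          inputs = tokenizer(
              table=table,
              queries=["How old is Brad Pitt?"],
              padding="max_length",
              return_tensors="pt",
          )
          # After a model forward pass, tokenizer.convert_logits_to_predictions
          # turns the logits into answer-cell coordinates (and, for WTQ-style
          # checkpoints, an aggregation index).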
  38. 02 Dec, 2020 1 commit
    • Warning about too long input for fast tokenizers too (#8799) · a8c3f9aa
      Nicolas Patry authored
      * Warning about too long input for fast tokenizers too
      
      If truncation is not set in tokenizers, but the tokenization is too long
      for the model (`model_max_length`), we used to trigger a warning that
      the input would probably fail (which it most likely will).

      This PR re-enables the warning for fast tokenizers too and uses common
      code for the trigger to make sure it's consistent across both.
      
      * Checking for pair of inputs too.
      
      * Making the function private and adding its doc.
      
      * Remove formatting ?? in odd place.
      
      * Missed uppercase.
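      A hedged illustration of the now-shared warning path (token counts are
      approximate):

          from transformers import AutoTokenizer

          tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # fast by default
          long_text = "word " * 1000  # far past the model_max_length of 512

          ids = tokenizer(long_text)["input_ids"]  # no truncation requested
          # Now also logged for the fast tokenizer, something like:
          #   Token indices sequence length is longer than the specified
          #   maximum sequence length for this model (1002 > 512). Running
          #   this sequence through the model will result in indexing errors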
  39. 17 Nov, 2020 1 commit