1. 06 Jul, 2023 1 commit
  2. 30 May, 2023 1 commit
  3. 06 Apr, 2023 1 commit
    • Adding Llama FastTokenizer support. (#22264) · 1670be4b
      Nicolas Patry authored
      * Adding Llama FastTokenizer support.
      
      - Requires the tokenizers version from https://github.com/huggingface/tokenizers/pull/1183
      - Only supports byte_fallback for llama; raises otherwise (safety net; see the sketch after this list).
      - Lots of open questions remain around special tokens
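
      As a rough illustration of that safety net (the class and attribute names here are assumptions, not the exact ones in `convert_slow_tokenizer.py`), the idea is that a converter refuses sentencepiece models trained with byte_fallback unless it explicitly opts in:

      ```python
      class SpmConverter:
          # Only the Llama converter is expected to flip this to True.
          handle_byte_fallback = False

          def __init__(self, proto):
              # proto is the parsed sentencepiece model; trainer_spec.byte_fallback
              # records whether the model was trained with byte fallback.
              if proto.trainer_spec.byte_fallback and not self.handle_byte_fallback:
                  raise RuntimeError(
                      "This sentencepiece model uses byte_fallback, which this "
                      "converter does not support."
                  )
              self.proto = proto
      ```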
      
      How to test:
      
      ```python
      from transformers import AutoTokenizer
      from transformers.convert_slow_tokenizer import convert_slow_tokenizer
      from tokenizers import Tokenizer

      # The slow (sentencepiece-based) tokenizer is the reference implementation.
      tokenizer = AutoTokenizer.from_pretrained("huggingface/llama-7b")

      # Set to True to reuse a previously converted tokenizer instead of reconverting.
      load_from_file = False

      if load_from_file:
          new_tokenizer = Tokenizer.from_file("tok.json")
      else:
          new_tokenizer = convert_slow_tokenizer(tokenizer)
          new_tokenizer.save("tok.json")
      
      strings = [
          "This is a test",
          "生活的真谛是",
          "生活的真谛是[MASK]。",
          # XXX: This one is problematic because of special tokens
          # "<s> Something something",
      ]
      
      for string in strings:
          encoded = tokenizer(string)["input_ids"]
          encoded2 = new_tokenizer.encode(string).ids
      
          assert encoded == encoded2, f"{encoded} != {encoded2}"
      
          decoded = tokenizer.decode(encoded)
          decoded2 = new_tokenizer.decode(encoded2)
      
          assert decoded.strip() == decoded2, f"{repr(decoded)} != {repr(decoded2)}"
      ```
      
      The converter + some test script.
      
      The test script.
      
      Tmp save.
      
      Adding Fast tokenizer + tests.
      
      Adding the tokenization tests.
      
      Correct combination.
      
      Small fix.
      
      Fixing tests.
      
      Fixing with latest update.
      
      Rebased.
      
      fix copies + normalized added tokens + copies.
      
      Adding doc.
      
      TMP.
      
      Doc + split files.
      
      Doc.
      
      Versions + try import.
      
      Fix Camembert + warnings -> Error.
      
      Fix by ArthurZucker.
      
      Not a decorator.
      
      * Fixing comments.
      
      * Adding more to docstring.
      
      * Doc rewriting.
      1670be4b
  4. 03 Apr, 2023 1 commit
    • Fix llama tokenizer (#22402) · c0f99b4d
      Arthur authored
      * draft
      
      * update tokenization llama and conversion script
      
      * more updates
      
      * initial commit
      
      * style
      
      * default pad to None
      
      * draft tokenization tests
      
      * update test
      
      * update tokenization tests
      
      * nits
      
      * update
      
      * versioning test
      
      * major fix
      
      * fix more tests
      
      * finish fixing special masks
      
      * last nit
      
      * more nits
      
      * add encode decode tests (a sketch of such a round trip follows this entry)
      
      * add more
      
      * fix token type ids
      
      * style
      c0f99b4d
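
      The kind of encode/decode round trip covered by the new tests can be sketched like this (the checkpoint name is reused from the PR above; the exact expectations here are assumptions, not taken from the PR):

      ```python
      from transformers import AutoTokenizer

      tok = AutoTokenizer.from_pretrained("huggingface/llama-7b", use_fast=False)

      text = "This is a test"
      ids = tok(text)["input_ids"]

      # With the default add_bos_token=True, encoding prepends the <s> token.
      assert ids[0] == tok.bos_token_id

      # Decoding with special tokens skipped should give the original text back.
      assert tok.decode(ids, skip_special_tokens=True).strip() == text
      ```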
  5. 06 Feb, 2023 1 commit
    • Update quality tooling for formatting (#21480) · 6f79d264
      Sylvain Gugger authored
      * Result of black 23.1
      
      * Update target to Python 3.7
      
      * Switch flake8 to ruff
      
      * Configure isort
      
      * Configure isort
      
      * Apply isort with line limit
      
      * Put the right black version
      
      * adapt black in check copies
      
      * Fix copies
      6f79d264
  6. 29 Jul, 2022 1 commit
  7. 18 Jul, 2022 1 commit
  8. 12 May, 2022 1 commit
  9. 03 May, 2022 1 commit
    • Move test model folders (#17034) · 19420fd9
      Yih-Dar authored
      
      
      * move test model folders (TODO: fix imports and others)
      
      * fix (potentially partially) imports (in model test modules)
      
      * fix (potentially partially) imports (in tokenization test modules)
      
      * fix (potentially partially) imports (in feature extraction test modules)
      
      * fix import utils.test_modeling_tf_core
      
      * fix path ../fixtures/
      
      * fix imports about generation.test_generation_flax_utils
      
      * fix more imports
      
      * fix fixture path
      
      * fix get_test_dir
      
      * update module_to_test_file
      
      * fix get_tests_dir from wrong transformers.utils (a usage sketch follows this entry)
      
      * update config.yml (CircleCI)
      
      * fix style
      
      * remove missing imports
      
      * update new model script
      
      * update check_repo
      
      * update SPECIAL_MODULE_TO_TEST_MAP
      
      * fix style
      
      * add __init__
      
      * update self-scheduled
      
      * fix add_new_model scripts
      
      * check one way to get location back
      
      * python setup.py build install
      
      * fix import in test auto
      
      * update self-scheduled.yml
      
      * update slack notification script
      
      * Add comments about artifact names
      
      * fix for yolos
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      19420fd9
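
      After the move, fixture files are typically resolved through the real `get_tests_dir` helper rather than relative `../fixtures/` paths; a small sketch (the fixture filename is just an example):

      ```python
      from transformers.testing_utils import get_tests_dir

      # Resolves to <repo>/tests/fixtures/test_sentencepiece.model regardless of
      # which test subfolder the calling module now lives in.
      SAMPLE_VOCAB = get_tests_dir("fixtures/test_sentencepiece.model")
      ```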
  10. 23 Feb, 2022 1 commit
  11. 27 Jan, 2022 1 commit
    • improve saving strategy of sentencepiece tokenizer (#15328) · ade7371a
      SaulLu authored
      
      
      * add new test
      
      * add a feature to save the sentencepiece tokenizer model when the init file was deleted (a sketch of the idea follows this entry)
      
      * update marian
      
      * update m2m_100
      
      * fix marian
      
      * update speech to text
      
      * override test for layoutxlm
      
      * fix saving bartpho
      
      * remove hardcoded values bartpho
      
      * special token string version
      
      * finish bartpho
      
      * override layoutxlm test
      
      * add mbart
      
      * move special tokens list
      
      * format
      
      * Revert "format"
      
      This reverts commit 37a40df37903a932c2f951cbd33acb684246bae7.
      
      * simplify list of string of special tokens
      
      * Re-write `self.fairseq_tokens_to_ids` initialization logic with special tokens
      Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
      ade7371a
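
      The saving strategy introduced here can be sketched roughly as follows (the function and variable names are assumptions): when the original sentencepiece file is gone, the vocabulary is rebuilt from the in-memory processor instead of being copied.

      ```python
      import os
      from shutil import copyfile

      import sentencepiece as spm

      def save_sentencepiece_vocab(sp_model: spm.SentencePieceProcessor, vocab_file: str, out_vocab_file: str) -> str:
          """Copy the original .model file if it still exists, otherwise re-serialize it."""
          if os.path.isfile(vocab_file) and os.path.abspath(vocab_file) != os.path.abspath(out_vocab_file):
              copyfile(vocab_file, out_vocab_file)
          else:
              # The processor keeps the full model in memory, so it can be dumped
              # even if the file it was loaded from has been deleted.
              with open(out_vocab_file, "wb") as f:
                  f.write(sp_model.serialized_model_proto())
          return out_vocab_file
      ```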
  12. 28 Dec, 2021 1 commit
  13. 27 Aug, 2021 1 commit
  14. 16 Apr, 2021 1 commit
  15. 25 Feb, 2021 1 commit
  16. 22 Feb, 2021 1 commit
  17. 06 Jan, 2021 1 commit
    • Fast transformers import part 1 (#9441) · 0c96262f
      Sylvain Gugger authored
      * Don't import libs to check they are available (see the sketch after this entry)
      
      * Don't import integrations at init
      
      * Add importlib_metadata to deps
      
      * Remove old vars references
      
      * Avoid syntax error
      
      * Adapt testing utils
      
      * Try to appease torchhub
      
      * Add dependency
      
      * Remove more private variables
      
      * Fix typo
      
      * Another typo
      
      * Refine the tf availability test
      0c96262f
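
      A minimal sketch of the "check availability without importing" idea behind the first bullet, using importlib metadata lookups (the helper name mirrors the real one in transformers, but the body here is an assumption):

      ```python
      import importlib.util
      import importlib.metadata  # backported as importlib_metadata on older Pythons

      def is_torch_available() -> bool:
          # find_spec only consults the import machinery's metadata; it does not
          # actually import (and initialize) torch.
          if importlib.util.find_spec("torch") is None:
              return False
          try:
              importlib.metadata.version("torch")
              return True
          except importlib.metadata.PackageNotFoundError:
              return False
      ```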
  18. 05 Jan, 2021 1 commit
    • [PyTorch Bart] Split Bart into different models (#9343) · eef66035
      Patrick von Platen authored
      * first try
      
      * remove old template
      
      * finish bart
      
      * finish mbart
      
      * delete unnecessary line
      
      * init pegasus
      
      * save intermediate
      
      * correct pegasus
      
      * finish pegasus
      
      * remove cookie cutter leftover
      
      * add marian
      
      * finish blenderbot
      
      * replace in file
      
      * correctly split blenderbot
      
      * delete "old" folder
      
      * correct "add statement"
      
      * adapt config for tf comp
      
      * correct configs for tf
      
      * remove ipdb
      
      * fix more stuff
      
      * fix mbart
      
      * push pegasus fix
      
      * fix mbart
      
      * more fixes
      
      * fix research projects code
      
      * finish docs for bart, mbart, and marian
      
      * delete unnecessary file
      
      * correct attn typo
      
      * correct configs
      
      * remove pegasus for seq class
      
      * correct peg docs
      
      * correct peg docs
      
      * finish configs
      
      * further improve docs
      
      * add copied from statements to mbart
      
      * fix copied from in mbart
      
      * add copy statements to marian
      
      * add copied from to marian
      
      * add pegasus copied from
      
      * finish pegasus
      
      * finish copied from
      
      * Apply suggestions from code review
      
      * make style
      
      * backward comp blenderbot
      
      * apply Lysandre's and Sylvain's suggestions
      
      * apply suggestions
      
      * push last fixes
      
      * fix docs
      
      * fix tok tests
      
      * fix imports code style
      
      * fix doc
      eef66035
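
      After the split, each architecture gets its own module, config, and classes instead of reusing the Bart code paths; for example, all of the following are importable transformers classes (a small illustrative sketch):

      ```python
      from transformers import (
          BartForConditionalGeneration,
          BlenderbotForConditionalGeneration,
          MarianMTModel,
          MBartForConditionalGeneration,
          PegasusForConditionalGeneration,
      )
      ```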
  19. 07 Dec, 2020 1 commit
  20. 19 Nov, 2020 1 commit
  21. 17 Nov, 2020 2 commits
  22. 18 Oct, 2020 1 commit
    • [Dependencies|tokenizers] Make both SentencePiece and Tokenizers optional dependencies (#7659) · ba8c4d0a
      Thomas Wolf authored
      * splitting fast and slow tokenizers [WIP]
      
      * [WIP] splitting sentencepiece and tokenizers dependencies
      
      * update dummy objects (see the sketch after this entry)
      
      * add name_or_path to models and tokenizers
      
      * prefix added to file names
      
      * prefix
      
      * styling + quality
      
      * splitting all the tokenizer files - sorting sentencepiece based ones
      
      * update tokenizer version up to 0.9.0
      
      * remove hard dependency on sentencepiece 🎉
      
      * and removed hard dependency on tokenizers 🎉
      
      
      
      * update conversion script
      
      * update missing models
      
      * fixing tests
      
      * move test_tokenization_fast to main tokenization tests - fix bugs
      
      * bump up tokenizers
      
      * fix bert_generation
      
      * update and fix several tokenizers
      
      * keep sentencepiece in deps for now
      
      * fix funnel and deberta tests
      
      * fix fsmt
      
      * fix marian tests
      
      * fix layoutlm
      
      * fix squeezebert and gpt2
      
      * fix T5 tokenization
      
      * fix xlnet tests
      
      * style
      
      * fix mbart
      
      * bump up tokenizers to 0.9.2
      
      * fix model tests
      
      * fix tf models
      
      * fix seq2seq examples
      
      * fix tests without sentencepiece
      
      * fix slow => fast conversion without sentencepiece
      
      * update auto and bert generation tests
      
      * fix mbart tests
      
      * fix auto and common test without tokenizers
      
      * fix tests without tokenizers
      
      * clean up tests and lighten them up when tokenizers + sentencepiece are both off
      
      * style quality and tests fixing
      
      * add sentencepiece to doc/examples reqs
      
      * leave sentencepiece on for now
      
      * style, quality, split herbert and fix pegasus
      
      * WIP Herbert fast
      
      * add sample_text_no_unicode and fix herbert tokenization
      
      * skip FSMT example test for now
      
      * fix style
      
      * fix fsmt in example tests
      
      * update following Lysandre and Sylvain's comments
      
      * Update src/transformers/testing_utils.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/testing_utils.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/tokenization_utils_base.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/tokenization_utils_base.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      ba8c4d0a
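
      A condensed sketch of the optional-dependency pattern this PR moves to (the availability helper mirrors the real one; the dummy class is an illustrative stand-in): probe for the package without importing it, and expose dummy objects that fail with a clear message when the backend is missing.

      ```python
      import importlib.util

      _sentencepiece_available = importlib.util.find_spec("sentencepiece") is not None

      def is_sentencepiece_available() -> bool:
          return _sentencepiece_available

      class T5Tokenizer:
          # Dummy object registered in place of the real class when sentencepiece
          # is not installed, so the error only surfaces at instantiation time.
          def __init__(self, *args, **kwargs):
              raise ImportError(
                  "T5Tokenizer requires the SentencePiece library but it was not "
                  "found in your environment. Install it with `pip install sentencepiece`."
              )
      ```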
  23. 08 Oct, 2020 1 commit
    • Adding Fast tokenizers for SentencePiece based tokenizers - Breaking: remove Transfo-XL fast tokenizer (#7141) · 9aeacb58
      Thomas Wolf authored
      
      * [WIP] SP tokenizers
      
      * fixing tests for T5
      
      * WIP tokenizers
      
      * serialization
      
      * update T5
      
      * WIP T5 tokenization
      
      * slow to fast conversion script
      
      * Refactoring to move tokenizer implementations inside transformers
      
      * Adding gpt - refactoring - quality
      
      * WIP adding several tokenizers to the fast world
      
      * WIP Roberta - moving implementations
      
      * update to dev4; switch file loading to in-memory loading
      
      * Updating and fixing
      
      * advancing on the tokenizers - updating do_lower_case
      
      * style and quality
      
      * moving forward with tokenizers conversion and tests
      
      * MBart, T5
      
      * dumping the fast version of transformer XL
      
      * Adding to autotokenizers + style/quality
      
      * update init and space_between_special_tokens
      
      * style and quality
      
      * bump up tokenizers version
      
      * add protobuf
      
      * fix pickle Bert JP with Mecab
      
      * fix newly added tokenizers
      
      * style and quality
      
      * fix bert japanese
      
      * fix funnel
      
      * limit tokenizer warning to one occurrence
      
      * clean up file
      
      * fix new tokenizers
      
      * fast tokenizers deep tests
      
      * WIP adding all the special fast tests on the new fast tokenizers
      
      * quick fix
      
      * adding more fast tokenizers in the fast tests
      
      * all tokenizers in fast version tested
      
      * Adding BertGenerationFast
      
      * bump up setup.py for CI
      
      * remove BertGenerationFast (too early)
      
      * bump up tokenizers version
      
      * Clean old docstrings
      
      * Typo
      
      * Update following Lysandre comments
      Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
      9aeacb58
  24. 28 Aug, 2020 1 commit
    • prepare_seq2seq_batch makes labels; decoder_input_ids are made later. (#6654) · 9336086a
      Sam Shleifer authored
      * broken test
      
      * batch parity
      
      * tests pass
      
      * boom boom
      
      * boom boom
      
      * split out bart tokenizer tests
      
      * fix tests
      
      * boom boom
      
      * Fixed dataset bug
      
      * Fix marian
      
      * Undo extra
      
      * Get marian working
      
      * Fix t5 tok tests
      
      * Test passing
      
      * Cleanup
      
      * better assert msg
      
      * require torch
      
      * Fix mbart tests
      
      * undo extra decoder_attn_mask change
      
      * Fix import
      
      * pegasus tokenizer can ignore src_lang kwargs
      
      * unused kwarg test cov
      
      * boom boom
      
      * add todo for pegasus issue
      
      * cover one word translation edge case
      
      * Cleanup
      
      * doc
      9336086a
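
      A rough usage sketch of `prepare_seq2seq_batch` after this change: target texts now come back under `labels`, and the model builds `decoder_input_ids` later by shifting them (the checkpoint name is illustrative, and this API has since been deprecated in favor of calling the tokenizer directly):

      ```python
      from transformers import MarianTokenizer

      tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")

      batch = tok.prepare_seq2seq_batch(
          src_texts=["I like pizza."],
          tgt_texts=["Ich mag Pizza."],
          return_tensors="pt",
      )
      # input_ids / attention_mask for the source, labels for the target;
      # decoder_input_ids are created later inside the model from labels.
      print(sorted(batch.keys()))
      ```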
  25. 26 Aug, 2020 1 commit
  26. 11 Aug, 2020 1 commit
  27. 28 Jul, 2020 2 commits
  28. 23 Jul, 2020 1 commit
  29. 07 Jul, 2020 1 commit