1. 10 Nov, 2020 2 commits
  2. 09 Nov, 2020 2 commits
  3. 07 Nov, 2020 1 commit
  4. 30 Oct, 2020 1 commit
    • TFMarian, TFMbart, TFPegasus, TFBlenderbot (#7987) · 566b083e
      Sam Shleifer authored
      
      
      * Start plumbing
      
      * Marian close
      
      * Small stubs for all children
      
      * Fixed bart
      
      * marian working
      
      * pegasus test is good, but failing
      
      * Checkin tests
      
      * More model files
      
      * Subtle marian, pegasus integration test failures
      
      * Works well
      
      * rm print
      
      * boom boom
      
      * Still failing model2doc
      
      * merge master
      
      * Equivalence test failing, all others fixed
      
      * cleanup
      
      * Fix embed_scale
      
      * Cleanup marian pipeline test
      
      * Undo extra changes
      
      * Smaller delta
      
      * Cleanup model testers
      
      * undo delta
      
      * fix tests import structure
      
      * cross test decorator
      
      * Cleaner set_weights
      
      * Respect authorized_unexpected_keys
      
      * No warnings
      
      * No warnings
      
      * style
      
      * Nest tf import
      
      * black
      
      * Apply suggestions from code review
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * functional dropout
      
      * fixup
      
      * Fixup
      
      * style_doc
      
      * embs
      
      * shape list
      
      * delete slow force_token_id_to_be_generated func
      
      * fixup
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
  5. 29 Oct, 2020 1 commit
  6. 27 Oct, 2020 1 commit
  7. 26 Oct, 2020 1 commit
    • Doc styling (#8067) · 08f534d2
      Sylvain Gugger authored
      * Important files
      
      * Styling them all
      
      * Revert "Styling them all"
      
      This reverts commit 7d029395fdae8513b8281cbc2a6c239f8093503e.
      
      * Styling them for realsies
      
      * Fix syntax error
      
      * Fix benchmark_utils
      
      * More fixes
      
      * Fix modeling auto and script
      
      * Remove new line
      
      * Fixes
      
      * More fixes
      
      * Fix more files
      
      * Style
      
      * Add FSMT
      
      * More fixes
      
      * More fixes
      
      * More fixes
      
      * More fixes
      
      * Fixes
      
      * More fixes
      
      * More fixes
      
      * Last fixes
      
      * Make sphinx happy
  8. 20 Oct, 2020 2 commits
  9. 19 Oct, 2020 1 commit
    • ProphetNet (#7157) · 2422cda0
      Weizhen authored
      
      
      * add new model prophetnet
      
      prophetnet modified
      
      modify codes as suggested v1
      
      add prophetnet test files
      
      * still bugs, because of changed output formats of encoder and decoder
      
      * move prophetnet into the latest version
      
      * clean integration tests
      
      * clean tokenizers
      
      * add xlm config to init
      
      * correct typo in init
      
      * further refactoring
      
      * continue refactor
      
      * save parallel
      
      * add decoder_attention_mask
      
      * fix use_cache vs. past_key_values
      
      * fix common tests
      
      * change decoder output logits
      
      * fix xlm tests
      
      * make common tests pass
      
      * change model architecture
      
      * add tokenizer tests
      
      * finalize model structure
      
      * no weight mapping
      
      * correct n-gram stream attention mask as discussed with qweizhen
      
      * remove unused import
      
      * fix index.rst
      
      * fix tests
      
      * delete unnecessary code
      
      * add fast integration test
      
      * rename weights
      
      * final weight remapping
      
      * save intermediate
      
      * Descriptions for Prophetnet Config File
      
      * finish all models
      
      * finish new model outputs
      
      * delete unnecessary files
      
      * refactor encoder layer
      
      * add dummy docs
      
      * code quality
      
      * fix tests
      
      * add model pages to doctree
      
      * further refactor
      
      * more refactor, more tests
      
      * finish code refactor and tests
      
      * remove unnecessary files
      
      * further clean up
      
      * add docstring template
      
      * finish tokenizer doc
      
      * finish prophetnet
      
      * fix copies
      
      * fix typos
      
      * fix tf tests
      
      * fix fp16
      
      * fix tf test 2nd try
      
      * fix code quality
      
      * add test for each model
      
      * merge new tests to branch
      
      * Update model_cards/microsoft/prophetnet-large-uncased-cnndm/README.md
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * Update model_cards/microsoft/prophetnet-large-uncased-cnndm/README.md
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * Update src/transformers/modeling_prophetnet.py
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * Update utils/check_repo.py
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * apply Sam's and Sylvain's comments
      
      * make style
      
      * remove unnecessary code
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/configuration_prophetnet.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * implement Lysandre's comments
      
      * correct docs
      
      * fix isort
      
      * fix tokenizers
      
      * fix copies
      Co-authored-by: default avatarweizhen <weizhen@mail.ustc.edu.cn>
      Co-authored-by: default avatarPatrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: default avatarSam Shleifer <sshleifer@gmail.com>
      Co-authored-by: default avatarSylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: default avatarLysandre Debut <lysandre@huggingface.co>
  10. 18 Oct, 2020 1 commit
    • [Dependencies|tokenizers] Make both SentencePiece and Tokenizers optional dependencies (#7659) · ba8c4d0a
      Thomas Wolf authored
      * splitting fast and slow tokenizers [WIP]
      
      * [WIP] splitting sentencepiece and tokenizers dependencies
      
      * update dummy objects
      
      * add name_or_path to models and tokenizers
      
      * prefix added to file names
      
      * prefix
      
      * styling + quality
      
      * splitting all the tokenizer files - sorting sentencepiece based ones
      
      * update tokenizer version up to 0.9.0
      
      * remove hard dependency on sentencepiece 🎉
      
      * and removed hard dependency on tokenizers 🎉
      
      
      
      * update conversion script
      
      * update missing models
      
      * fixing tests
      
      * move test_tokenization_fast to main tokenization tests - fix bugs
      
      * bump up tokenizers
      
      * fix bert_generation
      
      * update and fix several tokenizers
      
      * keep sentencepiece in deps for now
      
      * fix funnel and deberta tests
      
      * fix fsmt
      
      * fix marian tests
      
      * fix layoutlm
      
      * fix squeezebert and gpt2
      
      * fix T5 tokenization
      
      * fix xlnet tests
      
      * style
      
      * fix mbart
      
      * bump up tokenizers to 0.9.2
      
      * fix model tests
      
      * fix tf models
      
      * fix seq2seq examples
      
      * fix tests without sentencepiece
      
      * fix slow => fast  conversion without sentencepiece
      
      * update auto and bert generation tests
      
      * fix mbart tests
      
      * fix auto and common test without tokenizers
      
      * fix tests without tokenizers
      
      * clean up tests; lighten up when tokenizers + sentencepiece are both off
      
      * style quality and tests fixing
      
      * add sentencepiece to doc/examples reqs
      
      * leave sentencepiece on for now
      
      * style quality split hebert and fix pegasus
      
      * WIP Herbert fast
      
      * add sample_text_no_unicode and fix hebert tokenization
      
      * skip FSMT example test for now
      
      * fix style
      
      * fix fsmt in example tests
      
      * update following Lysandre and Sylvain's comments
      
      * Update src/transformers/testing_utils.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/testing_utils.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/tokenization_utils_base.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/tokenization_utils_base.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
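      Making SentencePiece and Tokenizers optional, as in this PR, generally comes down to probing for the package once, exposing an availability flag, and registering dummy objects that only fail when used. A minimal sketch of that pattern with illustrative names (`is_sentencepiece_available`, `load_tokenizer` are placeholders, not the exact helpers added in #7659):
      ```python
      # Minimal sketch of the optional-dependency pattern (illustrative names, not the
      # exact helpers from #7659): probe for the extra once, then either use it or
      # register a dummy that raises a helpful ImportError when called.
      import importlib.util

      _sentencepiece_available = importlib.util.find_spec("sentencepiece") is not None


      def is_sentencepiece_available() -> bool:
          return _sentencepiece_available


      if is_sentencepiece_available():
          import sentencepiece as spm

          def load_tokenizer(model_file: str):
              processor = spm.SentencePieceProcessor()
              processor.Load(model_file)
              return processor
      else:
          def load_tokenizer(model_file: str):
              raise ImportError(
                  "This tokenizer requires the optional `sentencepiece` dependency: "
                  "pip install sentencepiece"
              )
      ```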
  11. 15 Oct, 2020 1 commit
    • fix DeprecationWarning (#7834) · a5a8eeb7
      Stas Bekman authored
      in `tests/test_utils_check_copies.py` I was getting intermittently:
      ```
      utils/check_copies.py:52
        /mnt/nvme1/code/transformers-comet/utils/check_copies.py:52: DeprecationWarning: invalid escape sequence \s
          while line_index < len(lines) and re.search(f"^{indent}(class|def)\s+{name}", lines[line_index]) is None:
      ```
      So this should fix it.
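      The usual fix is to make the pattern a raw string so that `\s` stays a regex escape instead of an invalid Python string escape. A small self-contained sketch of the corrected call (illustrative setup; the real code lives in utils/check_copies.py and the actual commit may differ slightly):
      ```python
      # Sketch of the fix: a raw f-string keeps `\s` as a regex escape rather than
      # an invalid Python string escape (illustrative indent/name/lines values).
      import re

      indent, name = "    ", "forward"
      lines = ["    def forward(self, x):"]
      line_index = 0
      while line_index < len(lines) and re.search(rf"^{indent}(class|def)\s+{name}", lines[line_index]) is None:
          line_index += 1
      ```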
  12. 09 Oct, 2020 3 commits
  13. 05 Oct, 2020 2 commits
  14. 30 Sep, 2020 1 commit
  15. 28 Sep, 2020 1 commit
  16. 25 Sep, 2020 1 commit
  17. 24 Sep, 2020 2 commits
  18. 22 Sep, 2020 2 commits
  19. 10 Sep, 2020 1 commit
    • Add "Leveraging Pretrained Checkpoints for Generation" Seq2Seq models. (#6594) · 7fd1febf
      Patrick von Platen authored
      * add conversion script
      
      * improve conversion script
      
      * make style
      
      * add tryout files
      
      * fix
      
      * update
      
      * add causal bert
      
      * better names
      
      * add tokenizer file as well
      
      * finish causal_bert
      
      * fix small bugs
      
      * improve generate
      
      * change naming
      
      * renaming
      
      * renaming
      
      * renaming
      
      * remove leftover files
      
      * clean files
      
      * add fix tokenizer
      
      * finalize
      
      * correct slow test
      
      * update docs
      
      * small fixes
      
      * fix link
      
      * adapt check repo
      
      * apply Sam's and Sylvain's recommendations
      
      * fix import
      
      * implement Lysandre's recommendations
      
      * fix logger warn
  20. 08 Sep, 2020 1 commit
  21. 26 Aug, 2020 1 commit
  22. 14 Aug, 2020 1 commit
    • MBartForConditionalGeneration (#6441) · 680f1337
      Suraj Patil authored
      * add MBartForConditionalGeneration
      
      * style
      
      * rebase and fixes
      
      * add mbart test in TEST_FILES_WITH_NO_COMMON_TESTS
      
      * fix docs
      
      * don't ignore mbart
      
      * doc
      
      * fix mbart fairseq link
      
      * put mbart before bart
      
      * apply doc suggestions
  23. 12 Aug, 2020 1 commit
  24. 10 Aug, 2020 1 commit
    • Patch models (#6326) · b99098ab
      Lysandre Debut authored
      * TFAlbertFor{TokenClassification, MultipleChoice}
      
      * Patch models
      
      * BERT and TF BERT info
      
      
      
      * Update check_repo
  25. 07 Aug, 2020 1 commit
  26. 02 Mar, 2020 1 commit
    • TF GPU CI (#3085) · f169957d
      Julien Chaumond authored
      * debug env
      
      * Restrict TF GPU memory
      
      * Fixup
      
      * One more test
      
      * rm debug logs
      
      * Fixup
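      The "Restrict TF GPU memory" step above is typically done by enabling on-demand memory growth so that parallel test jobs do not each reserve the whole card. A hedged sketch assuming the TF 2.x `tf.config.experimental` API, not necessarily the exact change made in #3085:
      ```python
      # Hedged sketch: let TensorFlow allocate GPU memory on demand for CI test jobs
      # (general technique, not necessarily the exact change made in #3085).
      import tensorflow as tf

      # Must run before any GPU has been initialized by the process.
      for gpu in tf.config.experimental.list_physical_devices("GPU"):
          tf.config.experimental.set_memory_growth(gpu, True)

      # Alternatively, a hard per-GPU cap (e.g. 1 GB) can be configured with
      # tf.config.experimental.set_virtual_device_configuration(...).
      ```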
  27. 06 Jan, 2020 2 commits
  28. 22 Dec, 2019 3 commits
  29. 21 Dec, 2019 1 commit
    • Reformat source code with black. · fa84ae26
      Aymeric Augustin authored
      This is the result of:
      
          $ black --line-length 119 examples templates transformers utils hubconf.py setup.py
      
      There are a lot of fairly long lines in the project. As a consequence, I'm
      picking the longest widely accepted line length, 119 characters.
      
      This is also Thomas' preference, because it allows for explicit variable
      names, to make the code easier to understand.
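      For reference, the same 119-character setting can also be applied through black's Python API; a small sketch assuming the `black` package is installed (the commit itself ran only the CLI command quoted above):
      ```python
      # Hedged sketch: format a snippet programmatically with the same 119-character
      # line length used by the CLI invocation above (assumes `black` is installed).
      import black

      source = "def f(a,b):\n  return {'a':a,'b':b}\n"
      formatted = black.format_str(source, mode=black.FileMode(line_length=119))
      print(formatted)
      ```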