1. 26 Oct, 2020 6 commits
    • Doc fixes in preparation for the docstyle PR (#8061) · 04a17f85
      Sylvain Gugger authored
      * Fixes in preparation for doc styling
      
      * More fixes
      
      * Better syntax
      
      * Fixes
      
      * Style
      
      * More fixes
      
      * More fixes
      04a17f85
    • Yusuke Mori authored
    • Minor typo fixes to the preprocessing tutorial in the docs (#8046) · fc2d6eac
      Samuel authored
      * Fix minor typos
      
      Fix minor typos in the docs.
      
      * Update docs/source/preprocessing.rst
      
      Clearer data structure description.
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      fc2d6eac
    • Mlflow integration callback (#8016) · c48b16b8
      noise-field authored
      * Add MLflow integration class
      
      Add integration code for MLflow in integrations.py along with the code
      that checks that MLflow is installed.
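
      A minimal sketch of an availability check like the one described; the helper name here is illustrative, not necessarily what integrations.py uses:

      ```python
      import importlib.util

      # Probe for the mlflow package without importing it eagerly.
      _has_mlflow = importlib.util.find_spec("mlflow") is not None

      def is_mlflow_available() -> bool:
          return _has_mlflow
      ```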
      
      * Add MLflowCallback import
      
      Add import of MLflowCallback in trainer.py
      
      * Handle model argument
      
      Allow the callback to handle model argument and store model config items as hyperparameters.
      
      * Log parameters to MLflow in batches
      
      MLflow cannot log more than a hundred parameters at once.
      Code added to split the parameters into batches of 100 items and log the batches one by one.
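
      A minimal sketch of this batching, assuming the hyperparameters arrive as a plain dict (the callback's real internals may differ):

      ```python
      import mlflow

      MAX_PARAMS_PER_BATCH = 100  # MLflow rejects calls logging more than ~100 params at once

      def log_params_in_batches(params: dict) -> None:
          items = list(params.items())
          # Log the parameters in slices of at most 100 items each.
          for i in range(0, len(items), MAX_PARAMS_PER_BATCH):
              mlflow.log_params(dict(items[i : i + MAX_PARAMS_PER_BATCH]))
      ```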
      
      * Fix style
      
      * Add docs on MLflow callback
      
      * Fix issue with unfinished runs
      
      The "fluent" API used in the MLflow integration allows only one run to be active at any given moment. If a Trainer is disposed of before training finishes and a new one is created, MLflow refuses to log results for the run created by the next Trainer.
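
      A hedged sketch of the kind of guard this fix implies, using MLflow's public fluent API; whether the callback does exactly this is an assumption:

      ```python
      import mlflow

      # The fluent API allows a single active run per process, so close any
      # leftover run from a previous Trainer before starting a new one.
      if mlflow.active_run() is not None:
          mlflow.end_run()
      mlflow.start_run()
      ```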
      
      c48b16b8
    • [docs] [testing] distributed training (#7993) · 101186bc
      Stas Bekman authored
      * distributed training
      
      * fix
      
      * fix formatting
      
      * wording
      101186bc
    • Minor typo fixes to the tokenizer summary (#8045) · 9aa28266
      Samuel authored
      Minor typo fixes to the tokenizer summary
      9aa28266
  2. 22 Oct, 2020 1 commit
  3. 21 Oct, 2020 1 commit
    • Add TFBartForConditionalGeneration (#5411) · 82984215
      Sam Shleifer authored
      * half done
      
      * doc improvement
      
      * Cp test file
      
      * broken
      
      * broken test
      
      * undo some mess
      
      * ckpt
      
      * borked
      
      * Halfway
      
      * 6 passing
      
      * boom boom
      
      * Much progress but still 6
      
      * boom boom
      
      * merged master
      
      * 10 passing
      
      * boom boom
      
      * Style
      
      * no t5 changes
      
      * 13 passing
      
      * Integration test failing, but not gibberish
      
      * Frustrated
      
      * Merged master
      
      * 4 fail
      
      * 4 fail
      
      * fix return_dict
      
      * boom boom
      
      * Still only 4
      
      * prepare method
      
      * prepare method
      
      * before delete classif
      
      * Skip tests to avoid adding boilerplate
      
      * boom boom
      
      * fast tests passing
      
      * style
      
      * boom boom
      
      * Switch to supporting many input types
      
      * remove FIXMENORM
      
      * working
      
      * Fixed past_key_values/decoder_cached_states confusion
      
      * new broken test
      
      * Fix attention mask kwarg name
      
      * undo accidental
      
      * Style and reviewers
      
      * style
      
      * Docs and common tests
      
      * Cleaner assert messages
      
      * copy docs
      
      * style issues
      
      * Sphinx fix
      
      * Simplify caching logic
      
      * test does not require torch
      
      * copy _NoLayerEmbedTokens
      
      * Update src/transformers/modeling_tf_bart.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

      * Update tests/test_modeling_tf_bart.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

      * Update src/transformers/modeling_tf_bart.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

      * Update src/transformers/modeling_tf_bart.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

      * Update src/transformers/modeling_tf_bart.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Line length and dont document None
      
      * Add pipeline test coverage
      
      * assert msg
      
      * At parity
      
      * Assert messages
      
      * mark slow
      
      * Update compile test
      
      * back in init
      
      * Merge master
      
      * Fix tests
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      82984215
  4. 20 Oct, 2020 7 commits
  5. 19 Oct, 2020 4 commits
    • fix t5 training docstring (#7911) · e3d2bee8
      Patrick von Platen authored
      e3d2bee8
    • Allow Custom Dataset in RAG Retriever (#7763) · 033f29c6
      Quentin Lhoest authored
      * add CustomHFIndex
      
      * typo in config
      
      * update tests
      
      * add custom dataset example
      
      * clean script
      
      * update test data
      
      * minor in test
      
      * docs
      
      * docs
      
      * style
      
      * fix imports
      
      * allow to pass the indexed dataset directly
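
      A usage sketch of what the custom index enables; the argument names (index_name, passages_path, index_path) follow the example script's pattern and are assumptions rather than a verified API reference:

      ```python
      from transformers import RagRetriever

      # Point the retriever at a locally indexed dataset instead of the
      # default wiki_dpr index; the paths are placeholders.
      retriever = RagRetriever.from_pretrained(
          "facebook/rag-sequence-nq",
          index_name="custom",
          passages_path="path/to/my_knowledge_dataset",
          index_path="path/to/my_knowledge_dataset_index.faiss",
      )
      ```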
      
      * update tests
      
      * use multiset DPR
      
      * address thom and patrick's comments
      
      * style
      
      * update dpr tokenizer
      
      * add output_dir flag in use_own_knowledge_dataset.py
      
      * allow custom datasets in examples/rag/finetune.py
      
      * add test for custom dataset in distributed rag retriever
      033f29c6
    • ProphetNet (#7157) · 2422cda0
      Weizhen authored
      * add new model prophetnet
      
      prophetnet modified
      
      modify code as suggested (v1)
      
      add prophetnet test files
      
      * still bugs, because of changed output formats of encoder and decoder
      
      * move prophetnet into the latest version
      
      * clean integration tests
      
      * clean tokenizers
      
      * add xlm config to init
      
      * correct typo in init
      
      * further refactoring
      
      * continue refactor
      
      * save parallel
      
      * add decoder_attention_mask
      
      * fix use_cache vs. past_key_values
      
      * fix common tests
      
      * change decoder output logits
      
      * fix xlm tests
      
      * make common tests pass
      
      * change model architecture
      
      * add tokenizer tests
      
      * finalize model structure
      
      * no weight mapping
      
      * correct n-gram stream attention mask as discussed with qweizhen
      
      * remove unused import
      
      * fix index.rst
      
      * fix tests
      
      * delete unnecessary code
      
      * add fast integration test
      
      * rename weights
      
      * final weight remapping
      
      * save intermediate
      
      * Descriptions for Prophetnet Config File
      
      * finish all models
      
      * finish new model outputs
      
      * delete unnecessary files
      
      * refactor encoder layer
      
      * add dummy docs
      
      * code quality
      
      * fix tests
      
      * add model pages to doctree
      
      * further refactor
      
      * more refactor, more tests
      
      * finish code refactor and tests
      
      * remove unnecessary files
      
      * further clean up
      
      * add docstring template
      
      * finish tokenizer doc
      
      * finish prophetnet
      
      * fix copies
      
      * fix typos
      
      * fix tf tests
      
      * fix fp16
      
      * fix tf test 2nd try
      
      * fix code quality
      
      * add test for each model
      
      * merge new tests to branch
      
      * Update model_cards/microsoft/prophetnet-large-uncased-cnndm/README.md
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>

      * Update model_cards/microsoft/prophetnet-large-uncased-cnndm/README.md
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>

      * Update src/transformers/modeling_prophetnet.py
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>

      * Update utils/check_repo.py
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * apply Sam's and Sylvain's comments
      
      * make style
      
      * remove unnecessary code
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

      * Update src/transformers/configuration_prophetnet.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * implement Lysandre's comments
      
      * correct docs
      
      * fix isort
      
      * fix tokenizers
      
      * fix copies
      Co-authored-by: weizhen <weizhen@mail.ustc.edu.cn>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      2422cda0
    • remove USE_CUDA (#7861) · 4eb61f8e
      Stas Bekman authored
      4eb61f8e
  6. 18 Oct, 2020 1 commit
    • [Dependencies|tokenizers] Make both SentencePiece and Tokenizers optional dependencies (#7659) · ba8c4d0a
      Thomas Wolf authored
      * splitting fast and slow tokenizers [WIP]
      
      * [WIP] splitting sentencepiece and tokenizers dependencies
      
      * update dummy objects
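
      A hedged illustration of the dummy-object pattern mentioned here: imports still succeed when the optional dependency is missing, but instantiating the class fails with an actionable message (the class and wording are illustrative):

      ```python
      class AlbertTokenizer:
          """Placeholder used when SentencePiece is not installed."""

          def __init__(self, *args, **kwargs):
              raise ImportError(
                  "AlbertTokenizer requires the SentencePiece library, but it was "
                  "not found in your environment. Install it with "
                  "`pip install sentencepiece`."
              )
      ```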
      
      * add name_or_path to models and tokenizers
      
      * prefix added to file names
      
      * prefix
      
      * styling + quality
      
      * splitting all the tokenizer files - sorting sentencepiece based ones
      
      * update tokenizer version up to 0.9.0
      
      * remove hard dependency on sentencepiece 🎉
      
      * and removed hard dependency on tokenizers 🎉
      
      * update conversion script
      
      * update missing models
      
      * fixing tests
      
      * move test_tokenization_fast to main tokenization tests - fix bugs
      
      * bump up tokenizers
      
      * fix bert_generation
      
      * update and fix several tokenizers
      
      * keep sentencepiece in deps for now
      
      * fix funnel and deberta tests
      
      * fix fsmt
      
      * fix marian tests
      
      * fix layoutlm
      
      * fix squeezebert and gpt2
      
      * fix T5 tokenization
      
      * fix xlnet tests
      
      * style
      
      * fix mbart
      
      * bump up tokenizers to 0.9.2
      
      * fix model tests
      
      * fix tf models
      
      * fix seq2seq examples
      
      * fix tests without sentencepiece
      
      * fix slow => fast conversion without sentencepiece
      
      * update auto and bert generation tests
      
      * fix mbart tests
      
      * fix auto and common test without tokenizers
      
      * fix tests without tokenizers
      
      * clean up and lighten tests when tokenizers + sentencepiece are both off
      
      * style quality and tests fixing
      
      * add sentencepiece to doc/examples reqs
      
      * leave sentencepiece on for now
      
      * style and quality, split herbert, and fix pegasus
      
      * WIP Herbert fast
      
      * add sample_text_no_unicode and fix herbert tokenization
      
      * skip FSMT example test for now
      
      * fix style
      
      * fix fsmt in example tests
      
      * update following Lysandre and Sylvain's comments
      
      * Update src/transformers/testing_utils.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

      * Update src/transformers/testing_utils.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

      * Update src/transformers/tokenization_utils_base.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

      * Update src/transformers/tokenization_utils_base.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      ba8c4d0a
  7. 15 Oct, 2020 1 commit
  8. 14 Oct, 2020 1 commit
  9. 13 Oct, 2020 2 commits
  10. 09 Oct, 2020 5 commits
  11. 08 Oct, 2020 1 commit
    • Adding Fast tokenizers for SentencePiece based tokenizers - Breaking: remove Transfo-XL fast tokenizer (#7141) · 9aeacb58
      Thomas Wolf authored
      
      * [WIP] SP tokenizers
      
      * fixing tests for T5
      
      * WIP tokenizers
      
      * serialization
      
      * update T5
      
      * WIP T5 tokenization
      
      * slow to fast conversion script
      
      * Refactoring to move tokenizer implementations inside transformers
      
      * Adding gpt - refactoring - quality
      
      * WIP adding several tokenizers to the fast world
      
      * WIP Roberta - moving implementations
      
      * update to dev4; switch file loading to in-memory loading
      
      * Updating and fixing
      
      * advancing on the tokenizers - updating do_lower_case
      
      * style and quality
      
      * moving forward with tokenizers conversion and tests
      
      * MBart, T5
      
      * dumping the fast version of transformer XL
      
      * Adding to autotokenizers + style/quality
      
      * update init and space_between_special_tokens
      
      * style and quality
      
      * bump up tokenizers version
      
      * add protobuf
      
      * fix pickle Bert JP with Mecab
      
      * fix newly added tokenizers
      
      * style and quality
      
      * fix bert japanese
      
      * fix funnel
      
      * limit tokenizer warning to one occurrence
      
      * clean up file
      
      * fix new tokenizers
      
      * fast tokenizers deep tests
      
      * WIP adding all the special fast tests on the new fast tokenizers
      
      * quick fix
      
      * adding more fast tokenizers in the fast tests
      
      * all tokenizers in fast version tested
      
      * Adding BertGenerationFast
      
      * bump up setup.py for CI
      
      * remove BertGenerationFast (too early)
      
      * bump up tokenizers version
      
      * Clean old docstrings
      
      * Typo
      
      * Update following Lysandre comments
      Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
      9aeacb58
  12. 07 Oct, 2020 2 commits
  13. 06 Oct, 2020 2 commits
  14. 05 Oct, 2020 5 commits
    • The toggle actually sticks (#7586) · 818c294f
      Lysandre Debut authored
      818c294f
    • Check and update model list in index.rst automatically (#7527) · b2b7fc78
      Sylvain Gugger authored
      * Check and update model list in index.rst automatically
      
      * Check and update model list in index.rst automatically
      
      * Adapt template
      b2b7fc78
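
      A hedged sketch of the consistency-check pattern such a commit describes; the markers and function names below are assumptions, not the actual utils script:

      ```python
      import re

      START, END = ".. START MODEL LIST", ".. END MODEL LIST"

      def check_model_list(index_path: str, expected_list: str, overwrite: bool = False) -> None:
          with open(index_path, encoding="utf-8") as f:
              content = f.read()
          # Swap everything between the two markers for the freshly generated list.
          new_content = re.sub(
              re.escape(START) + r".*?" + re.escape(END),
              START + "\n" + expected_list + "\n" + END,
              content,
              flags=re.DOTALL,
          )
          if new_content != content:
              if overwrite:
                  with open(index_path, "w", encoding="utf-8") as f:
                      f.write(new_content)
              else:
                  raise ValueError("Model list in index.rst is stale; regenerate it.")
      ```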
    • docs(pretrained_models): fix num parameters (#7575) · 0d79de73
      Amine Abdaoui authored
      * docs(pretrained_models): fix num parameters
      
      * fix(pretrained_models): correct typo
      Co-authored-by: Amin <amin.geotrend@gmail.com>
      0d79de73
    • SqueezeBERT architecture (#7083) · 02ef825b
      Forrest Iandola authored
      * configuration_squeezebert.py
      
      thin wrapper around bert tokenizer
      
      fix typos
      
      wip sb model code
      
      wip modeling_squeezebert.py. Next step is to get the multi-layer-output interface working
      
      set up squeezebert to use BertModelOutput when returning results.
      
      squeezebert documentation
      
      formatting
      
      allow head mask that is an array of [None, ..., None]
      
      docs
      
      docs cont'd
      
      path to vocab
      
      docs and pointers to cloud files (WIP)
      
      line length and indentation
      
      squeezebert model cards
      
      formatting of model cards
      
      untrack modeling_squeezebert_scratchpad.py
      
      update aws paths to vocab and config files
      
      get rid of stub of NSP code, and advise users to pretrain with mlm only
      
      fix rebase issues
      
      redo rebase of modeling_auto.py
      
      fix issues with code formatting
      
      more code format auto-fixes
      
      move squeezebert before bert in tokenization_auto.py and modeling_auto.py because squeezebert inherits from bert
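
      A simplified, assumed sketch of why this ordering matters: the auto mappings are scanned in order and the first key contained in the model identifier wins, and "bert" is a substring of "squeezebert":

      ```python
      from collections import OrderedDict

      # Illustration only: "squeezebert" must come before "bert", otherwise a
      # checkpoint named "squeezebert/squeezebert-mnli" would match "bert" first.
      TOKENIZER_MAPPING = OrderedDict(
          [
              ("squeezebert", "SqueezeBertTokenizer"),
              ("bert", "BertTokenizer"),
          ]
      )

      def resolve(model_identifier: str) -> str:
          for key, tokenizer_class in TOKENIZER_MAPPING.items():
              if key in model_identifier:
                  return tokenizer_class
          raise ValueError(f"Unrecognized model identifier: {model_identifier}")
      ```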
      
      tests for squeezebert modeling and tokenization
      
      fix typo
      
      move squeezebert before bert in modeling_auto.py to fix inheritance problem
      
      disable test_head_masking, since squeezebert doesn't yet implement head masking
      
      fix issues exposed by the test_modeling_squeezebert.py
      
      fix an issue exposed by test_tokenization_squeezebert.py
      
      fix issue exposed by test_modeling_squeezebert.py
      
      auto generated code style improvement
      
      issue that we inherited from modeling_xxx.py: SqueezeBertForMaskedLM.forward() calls self.cls(), but there is no self.cls, and I think the goal was actually to call self.lm_head()
      
      update copyright
      
      resolve failing 'test_hidden_states_output' and remove unused encoder_hidden_states and encoder_attention_mask
      
      docs
      
      add integration test. rename squeezebert-mnli --> squeezebert/squeezebert-mnli
      
      autogenerated formatting tweaks
      
      integrate feedback from patrickvonplaten and sgugger to programming style and documentation strings
      
      * tiny change to order of imports
      02ef825b
    • Cleanup documentation for BART, Marian, MBART and Pegasus (#7523) · e2c935f5
      Sylvain Gugger authored
      * Cleanup documentation for BART, Marian, MBART and Pegasus
      
      * Cleanup documentation for BART, Marian, MBART and Pegasus
      e2c935f5
  15. 01 Oct, 2020 1 commit