1. 17 Nov, 2020 3 commits
  2. 16 Nov, 2020 1 commit
    • Switch `return_dict` to `True` by default. (#8530) · 1073a2bd
      Sylvain Gugger authored
      * Use the CI to identify failing tests
      
      * Remove from all examples and tests
      
      * More default switch
      
      * Fixes
      
      * More test fixes
      
      * More fixes
      
      * Last fixes hopefully
      
      * Run on the real suite
      
      * Fix slow tests
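For context, `return_dict=True` makes models return `ModelOutput` objects, which are dict-like but also support attribute and positional (tuple-style) access, rather than plain tuples. A minimal stdlib-only sketch of that access pattern (illustrative only — `SimpleModelOutput` is a hypothetical name, not the actual `transformers.ModelOutput` implementation):

```python
# Hypothetical sketch of why `return_dict=True` is friendlier than tuple
# outputs: the output object behaves like both a tuple and a namespace,
# so old positional code and new attribute-style code can coexist.
from collections import OrderedDict

class SimpleModelOutput(OrderedDict):
    """Dict-like output that also supports attribute and index access."""

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __getitem__(self, key):
        if isinstance(key, int):  # tuple-style positional access
            return list(self.values())[key]
        return super().__getitem__(key)

out = SimpleModelOutput(logits=[0.1, 0.9], loss=0.25)
assert out.logits == out["logits"] == out[0]  # all three access styles agree
```

The real class adds more (e.g. skipping `None` fields), but the dual access style above is the behavioral change this commit made the default.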
  3. 12 Nov, 2020 1 commit
  4. 11 Nov, 2020 2 commits
  5. 10 Nov, 2020 1 commit
  6. 09 Nov, 2020 2 commits
  7. 05 Nov, 2020 2 commits
  8. 30 Oct, 2020 1 commit
    • TFMarian, TFMbart, TFPegasus, TFBlenderbot (#7987) · 566b083e
      Sam Shleifer authored
      * Start plumbing
      
      * Marian close
      
      * Small stubs for all children
      
      * Fixed bart
      
      * marian working
      
      * pegasus test is good, but failing
      
      * Check in tests
      
      * More model files
      
      * Subtle marian, pegasus integration test failures
      
      * Works well
      
      * rm print
      
      * boom boom
      
      * Still failing model2doc
      
      * merge master
      
      * Equivalence test failing, all others fixed
      
      * cleanup
      
      * Fix embed_scale
      
      * Cleanup marian pipeline test
      
      * Undo extra changes
      
      * Smaller delta
      
      * Cleanup model testers
      
      * undo delta
      
      * fix tests import structure
      
      * cross test decorator
      
      * Cleaner set_weights
      
      * Respect authorized_unexpected_keys
      
      * No warnings
      
      * No warnings
      
      * style
      
      * Nest tf import
      
      * black
      
      * Apply suggestions from code review
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * functional dropout
      
      * fixup
      
      * Fixup
      
      * style_doc
      
      * embs
      
      * shape list
      
      * delete slow force_token_id_to_be_generated func
      
      * fixup
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
  9. 28 Oct, 2020 1 commit
  10. 26 Oct, 2020 2 commits
    • Doc styling (#8067) · 08f534d2
      Sylvain Gugger authored
      * Important files
      
      * Styling them all
      
      * Revert "Styling them all"
      
      This reverts commit 7d029395fdae8513b8281cbc2a6c239f8093503e.
      
      * Styling them for realsies
      
      * Fix syntax error
      
      * Fix benchmark_utils
      
      * More fixes
      
      * Fix modeling auto and script
      
      * Remove new line
      
      * Fixes
      
      * More fixes
      
      * Fix more files
      
      * Style
      
      * Add FSMT
      
      * More fixes
      
      * More fixes
      
      * More fixes
      
      * More fixes
      
      * Fixes
      
      * More fixes
      
      * More fixes
      
      * Last fixes
      
      * Make sphinx happy
    • Doc fixes in preparation for the docstyle PR (#8061) · 04a17f85
      Sylvain Gugger authored
      * Fixes in preparation for doc styling
      
      * More fixes
      
      * Better syntax
      
      * Fixes
      
      * Style
      
      * More fixes
      
      * More fixes
  11. 21 Oct, 2020 1 commit
    • Add TFBartForConditionalGeneration (#5411) · 82984215
      Sam Shleifer authored
      * half done
      
      * doc improvement
      
      * Cp test file
      
      * broken
      
      * broken test
      
      * undo some mess
      
      * ckpt
      
      * borked
      
      * Halfway
      
      * 6 passing
      
      * boom boom
      
      * Much progress but still 6
      
      * boom boom
      
      * merged master
      
      * 10 passing
      
      * boom boom
      
      * Style
      
      * no t5 changes
      
      * 13 passing
      
      * Integration test failing, but not gibberish
      
      * Frustrated
      
      * Merged master
      
      * 4 fail
      
      * 4 fail
      
      * fix return_dict
      
      * boom boom
      
      * Still only 4
      
      * prepare method
      
      * prepare method
      
      * before delete classif
      
      * Skip tests to avoid adding boilerplate
      
      * boom boom
      
      * fast tests passing
      
      * style
      
      * boom boom
      
      * Switch to supporting many input types
      
      * remove FIXMENORM
      
      * working
      
      * Fixed past_key_values/decoder_cached_states confusion
      
      * new broken test
      
      * Fix attention mask kwarg name
      
      * undo accidental
      
      * Style and reviewers
      
      * style
      
      * Docs and common tests
      
      * Cleaner assert messages
      
      * copy docs
      
      * style issues
      
      * Sphinx fix
      
      * Simplify caching logic
      
      * test does not require torch
      
      * copy _NoLayerEmbedTokens
      
      * Update src/transformers/modeling_tf_bart.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Update tests/test_modeling_tf_bart.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Update src/transformers/modeling_tf_bart.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Update src/transformers/modeling_tf_bart.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Update src/transformers/modeling_tf_bart.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Line length and dont document None
      
      * Add pipeline test coverage
      
      * assert msg
      
      * At parity
      
      * Assert messages
      
      * mark slow
      
      * Update compile test
      
      * back in init
      
      * Merge master
      
      * Fix tests
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
  12. 19 Oct, 2020 3 commits
    • fix t5 training docstring (#7911) · e3d2bee8
      Patrick von Platen authored
    • Allow Custom Dataset in RAG Retriever (#7763) · 033f29c6
      Quentin Lhoest authored
      * add CustomHFIndex
      
      * typo in config
      
      * update tests
      
      * add custom dataset example
      
      * clean script
      
      * update test data
      
      * minor in test
      
      * docs
      
      * docs
      
      * style
      
      * fix imports
      
      * allow to pass the indexed dataset directly
      
      * update tests
      
      * use multiset DPR
      
      * address thom and patrick's comments
      
      * style
      
      * update dpr tokenizer
      
      * add output_dir flag in use_own_knowledge_dataset.py
      
      * allow custom datasets in examples/rag/finetune.py
      
      * add test for custom dataset in distributed rag retriever
    • ProphetNet (#7157) · 2422cda0
      Weizhen authored
      * add new model prophetnet
      
      prophetnet modified
      
      modify codes as suggested v1
      
      add prophetnet test files
      
      * still bugs, because of changed output formats of encoder and decoder
      
      * move prophetnet into the latest version
      
      * clean integration tests
      
      * clean tokenizers
      
      * add xlm config to init
      
      * correct typo in init
      
      * further refactoring
      
      * continue refactor
      
      * save parallel
      
      * add decoder_attention_mask
      
      * fix use_cache vs. past_key_values
      
      * fix common tests
      
      * change decoder output logits
      
      * fix xlm tests
      
      * make common tests pass
      
      * change model architecture
      
      * add tokenizer tests
      
      * finalize model structure
      
      * no weight mapping
      
      * correct n-gram stream attention mask as discussed with qweizhen
      
      * remove unused import
      
      * fix index.rst
      
      * fix tests
      
      * delete unnecessary code
      
      * add fast integration test
      
      * rename weights
      
      * final weight remapping
      
      * save intermediate
      
      * Descriptions for Prophetnet Config File
      
      * finish all models
      
      * finish new model outputs
      
      * delete unnecessary files
      
      * refactor encoder layer
      
      * add dummy docs
      
      * code quality
      
      * fix tests
      
      * add model pages to doctree
      
      * further refactor
      
      * more refactor, more tests
      
      * finish code refactor and tests
      
      * remove unnecessary files
      
      * further clean up
      
      * add docstring template
      
      * finish tokenizer doc
      
      * finish prophetnet
      
      * fix copies
      
      * fix typos
      
      * fix tf tests
      
      * fix fp16
      
      * fix tf test 2nd try
      
      * fix code quality
      
      * add test for each model
      
      * merge new tests to branch
      
      * Update model_cards/microsoft/prophetnet-large-uncased-cnndm/README.md
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * Update model_cards/microsoft/prophetnet-large-uncased-cnndm/README.md
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * Update src/transformers/modeling_prophetnet.py
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * Update utils/check_repo.py
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * apply sams and sylvains comments
      
      * make style
      
      * remove unnecessary code
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/configuration_prophetnet.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * implement lysandres comments
      
      * correct docs
      
      * fix isort
      
      * fix tokenizers
      
      * fix copies
      Co-authored-by: weizhen <weizhen@mail.ustc.edu.cn>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
  13. 13 Oct, 2020 1 commit
    • GPT-1 for sequence classification (#7683) · dcba9ee0
      Felipe Curti authored
      * Add Documentation for GPT-1 Classification
      
      * Add GPT-1 with Classification head
      
      * Add tests for GPT-1 Classification
      
      * Add GPT-1 For Classification to auto models
      
      * Remove authorized missing keys, change checkpoint to openai-gpt
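As background on the mechanism (a hedged sketch, not the transformers implementation): causal LMs like GPT-1 have no `[CLS]` token, so a sequence-classification head typically reads the hidden state at the last non-padding position of each sequence. A stdlib-only illustration, where `last_token_index` is a hypothetical helper:

```python
# Illustrative sketch of last-token pooling as used by causal-LM
# classification heads; `last_token_index` is a hypothetical name,
# not a transformers function.
def last_token_index(input_ids, pad_token_id):
    """Return the index of the last non-pad token (0 if all padding)."""
    for i in range(len(input_ids) - 1, -1, -1):
        if input_ids[i] != pad_token_id:
            return i
    return 0

# A right-padded batch: the classifier pools hidden states at these positions.
batch = [[5, 7, 9, 0, 0], [5, 7, 9, 11, 2]]
pooled_positions = [last_token_index(ids, pad_token_id=0) for ids in batch]
```

The real model then applies a linear layer to the hidden state at each pooled position to produce per-sequence logits.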
  14. 09 Oct, 2020 1 commit
  15. 08 Oct, 2020 1 commit
    • Adding Fast tokenizers for SentencePiece based tokenizers - Breaking: remove Transfo-XL fast tokenizer (#7141) · 9aeacb58
      Thomas Wolf authored
      
      * [WIP] SP tokenizers
      
      * fixing tests for T5
      
      * WIP tokenizers
      
      * serialization
      
      * update T5
      
      * WIP T5 tokenization
      
      * slow to fast conversion script
      
      * Refactoring to move tokenizer implementations inside transformers
      
      * Adding gpt - refactoring - quality
      
      * WIP adding several tokenizers to the fast world
      
      * WIP Roberta - moving implementations
      
      * update to dev4 switch file loading to in-memory loading
      
      * Updating and fixing
      
      * advancing on the tokenizers - updating do_lower_case
      
      * style and quality
      
      * moving forward with tokenizers conversion and tests
      
      * MBart, T5
      
      * dumping the fast version of transformer XL
      
      * Adding to autotokenizers + style/quality
      
      * update init and space_between_special_tokens
      
      * style and quality
      
      * bump up tokenizers version
      
      * add protobuf
      
      * fix pickle Bert JP with Mecab
      
      * fix newly added tokenizers
      
      * style and quality
      
      * fix bert japanese
      
      * fix funnel
      
      * limit tokenizer warning to one occurrence
      
      * clean up file
      
      * fix new tokenizers
      
      * fast tokenizers deep tests
      
      * WIP adding all the special fast tests on the new fast tokenizers
      
      * quick fix
      
      * adding more fast tokenizers in the fast tests
      
      * all tokenizers in fast version tested
      
      * Adding BertGenerationFast
      
      * bump up setup.py for CI
      
      * remove BertGenerationFast (too early)
      
      * bump up tokenizers version
      
      * Clean old docstrings
      
      * Typo
      
      * Update following Lysandre comments
      Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
  16. 07 Oct, 2020 1 commit
  17. 06 Oct, 2020 2 commits
  18. 05 Oct, 2020 2 commits
    • SqueezeBERT architecture (#7083) · 02ef825b
      Forrest Iandola authored
      * configuration_squeezebert.py
      
      thin wrapper around bert tokenizer
      
      fix typos
      
      wip sb model code
      
      wip modeling_squeezebert.py. Next step is to get the multi-layer-output interface working
      
      set up squeezebert to use BertModelOutput when returning results.
      
      squeezebert documentation
      
      formatting
      
      allow head mask that is an array of [None, ..., None]
      
      docs
      
      docs cont'd
      
      path to vocab
      
      docs and pointers to cloud files (WIP)
      
      line length and indentation
      
      squeezebert model cards
      
      formatting of model cards
      
      untrack modeling_squeezebert_scratchpad.py
      
      update aws paths to vocab and config files
      
      get rid of stub of NSP code, and advise users to pretrain with mlm only
      
      fix rebase issues
      
      redo rebase of modeling_auto.py
      
      fix issues with code formatting
      
      more code format auto-fixes
      
      move squeezebert before bert in tokenization_auto.py and modeling_auto.py because squeezebert inherits from bert
      
      tests for squeezebert modeling and tokenization
      
      fix typo
      
      move squeezebert before bert in modeling_auto.py to fix inheritance problem
      
      disable test_head_masking, since squeezebert doesn't yet implement head masking
      
      fix issues exposed by the test_modeling_squeezebert.py
      
      fix an issue exposed by test_tokenization_squeezebert.py
      
      fix issue exposed by test_modeling_squeezebert.py
      
      auto generated code style improvement
      
      issue that we inherited from modeling_xxx.py: SqueezeBertForMaskedLM.forward() calls self.cls(), but there is no self.cls, and I think the goal was actually to call self.lm_head()
      
      update copyright
      
      resolve failing 'test_hidden_states_output' and remove unused encoder_hidden_states and encoder_attention_mask
      
      docs
      
      add integration test. rename squeezebert-mnli --> squeezebert/squeezebert-mnli
      
      autogenerated formatting tweaks
      
      integrate feedback from patrickvonplaten and sgugger to programming style and documentation strings
      
      * tiny change to order of imports
    • Cleanup documentation for BART, Marian, MBART and Pegasus (#7523) · e2c935f5
      Sylvain Gugger authored
      * Cleanup documentation for BART, Marian, MBART and Pegasus
      
      * Cleanup documentation for BART, Marian, MBART and Pegasus
  19. 01 Oct, 2020 1 commit
  20. 30 Sep, 2020 1 commit
  21. 28 Sep, 2020 1 commit
  22. 24 Sep, 2020 2 commits
  23. 23 Sep, 2020 1 commit
  24. 22 Sep, 2020 2 commits
    • RAG (#6813) · c754c41c
      Ola Piktus authored
      * added rag WIP
      
      * path fix
      
      * Formatting / renaming prior to actual work
      
      * added rag WIP
      
      * path fix
      
      * Formatting / renaming prior to actual work
      
      * added rag WIP
      
      * path fix
      
      * Formatting / renaming prior to actual work
      
      * added rag WIP
      
      * Formatting / renaming prior to actual work
      
      * First commit
      
      * improve comments
      
      * Retrieval evaluation scripts
      
      * refactor to include modeling outputs + MPI retriever
      
      * Fix rag-token model + refactor
      
      * Various fixes + finetuning logic
      
      * use_bos fix
      
      * Retrieval refactor
      
      * Finetuning refactoring and cleanup
      
      * Add documentation and cleanup
      
      * Remove set_up_rag_env.sh file
      
      * Fix retrieval with HF index
      
      * Fix import errors
      
      * Fix quality errors
      
      * Refactor as per suggestions in https://github.com/huggingface/transformers/pull/6813#issuecomment-687208867
      
      * fix quality
      
      * Fix RAG Sequence generation
      
      * minor cleanup plus initial tests
      
      * fix test
      
      * fix tests 2
      
      * Comments fix
      
      * post-merge fixes
      
      * Improve readme + post-rebase refactor
      
      * Extra dependencies for tests
      
      * Fix tests
      
      * Fix tests 2
      
      * Refactor test requirements
      
      * Fix tests 3
      
      * Post-rebase refactor
      
      * rename nlp->datasets
      
      * RAG integration tests
      
      * add tokenizer to slow integration test and allow retriever to run on cpu
      
      * add tests; fix position ids warning
      
      * change structure
      
      * change structure
      
      * add from encoder generator
      
      * save working solution
      
      * make all integration tests pass
      
      * add RagTokenizer.save/from_pretrained and RagRetriever.save/from_pretrained
      
      * don't save paths
      
      * delete unnecessary imports
      
      * pass config to AutoTokenizer.from_pretrained for Rag tokenizers
      
      * init wiki_dpr only once
      
      * hardcode legacy index and passages paths (todo: add the right urls)
      
      * finalize config
      
      * finalize retriver api and config api
      
      * LegacyIndex index download refactor
      
      * add dpr to autotokenizer
      
      * make from pretrained more flexible
      
      * fix ragfortokengeneration
      
      * small name changes in tokenizer
      
      * add labels to models
      
      * change default index name
      
      * add retrieval tests
      
      * finish token generate
      
      * align test with previous version and make all tests pass
      
      * add tests
      
      * finalize tests
      
      * implement thoms suggestions
      
      * add first version of test
      
      * make first tests work
      
      * make retriever platform agnostic
      
      * naming
      
      * style
      
      * add legacy index URL
      
      * docstrings + simple retrieval test for distributed
      
      * clean model api
      
      * add doc_ids to retriever's outputs
      
      * fix retrieval tests
      
      * finish model outputs
      
      * finalize model api
      
      * fix generate problem for rag
      
      * fix generate for other models
      
      * fix some tests
      
      * save intermediate
      
      * set generate to default
      
      * big refactor generate
      
      * delete rag_api
      
      * correct pip faiss install
      
      * fix auto tokenization test
      
      * fix faiss install
      
      * fix test
      
      * move the distributed logic to examples
      
      * model page
      
      * docs
      
      * finish tests
      
      * fix dependencies
      
      * fix import in __init__
      
      * Refactor eval_rag and finetune scripts
      
      * start docstring
      
      * add psutil to test
      
      * fix tf test
      
      * move require torch to top
      
      * fix retrieval test
      
      * align naming
      
      * finish automodel
      
      * fix repo consistency
      
      * test ragtokenizer save/load
      
      * add rag model output docs
      
      * fix ragtokenizer save/load from pretrained
      
      * fix tokenizer dir
      
      * remove torch in retrieval
      
      * fix docs
      
      * fix finetune scripts
      
      * finish model docs
      
      * finish docs
      
      * remove auto model for now
      
      * add require torch
      
      * remove solved todos
      
      * integrate sylvains suggestions
      
      * sams comments
      
      * correct mistake on purpose
      
      * improve README
      
      * Add generation test cases
      
      * fix rag token
      
      * clean token generate
      
      * fix test
      
      * add note to test
      
      * fix attention mask
      
      * add t5 test for rag
      
      * Fix handling prefix in finetune.py
      
      * don't overwrite index_name
      Co-authored-by: Patrick Lewis <plewis@fb.com>
      Co-authored-by: Aleksandra Piktus <piktus@devfair0141.h2.fair>
      Co-authored-by: Aleksandra Piktus <piktus@learnfair5102.h2.fair>
      Co-authored-by: Aleksandra Piktus <piktus@learnfair5067.h2.fair>
      Co-authored-by: Your Name <you@example.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Quentin Lhoest <lhoest.q@gmail.com>
    • Add LayoutLM Model (#7064) · cd9a0585
      Minghao Li authored
      * first version
      
      * finish test docs readme model/config/tokenization class
      
      * apply make style and make quality
      
      * fix layoutlm GitHub link
      
      * fix conflict in index.rst and add layoutlm to pretrained_models.rst
      
      * fix bug in test_parents_and_children_in_mappings
      
      * reformat modeling_auto.py and tokenization_auto.py
      
      * fix bug in test_modeling_layoutlm.py
      
      * Update docs/source/model_doc/layoutlm.rst
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update docs/source/model_doc/layoutlm.rst
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * remove inh, add tokenizer fast, and update some doc
      
      * copy and rename necessary class from modeling_bert to modeling_layoutlm
      
      * Update src/transformers/configuration_layoutlm.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Update src/transformers/configuration_layoutlm.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Update src/transformers/configuration_layoutlm.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Update src/transformers/configuration_layoutlm.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Update src/transformers/modeling_layoutlm.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Update src/transformers/modeling_layoutlm.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Update src/transformers/modeling_layoutlm.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * add mish to activations.py, import ACT2FN and import logging from utils
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
  25. 17 Sep, 2020 1 commit
    • [ported model] FSMT (FairSeq MachineTranslation) (#6940) · 1eeb206b
      Stas Bekman authored
      * ready for PR
      
      * cleanup
      
      * correct FSMT_PRETRAINED_MODEL_ARCHIVE_LIST
      
      * fix
      
      * perfectionism
      
      * revert change from another PR
      
      * odd, already committed this one
      
      * non-interactive upload workaround
      
      * backup the failed experiment
      
      * store langs in config
      
      * workaround for localizing model path
      
      * doc clean up as in https://github.com/huggingface/transformers/pull/6956
      
      * style
      
      * back out debug mode
      
      * document: run_eval.py --num_beams 10
      
      * remove unneeded constant
      
      * typo
      
      * re-use bart's Attention
      
      * re-use EncoderLayer, DecoderLayer from bart
      
      * refactor
      
      * send to cuda and fp16
      
      * cleanup
      
      * revert (moved to another PR)
      
      * better error message
      
      * document run_eval --num_beams
      
      * solve the problem of tokenizer finding the right files when model is local
      
      * polish, remove hardcoded config
      
      * add a note that the file is autogenerated to avoid losing changes
      
      * prep for org change, remove unneeded code
      
      * switch to model4.pt, update scores
      
      * s/python/bash/
      
      * missing init (but doesn't impact the finetuned model)
      
      * cleanup
      
      * major refactor (reuse-bart)
      
      * new model, new expected weights
      
      * cleanup
      
      * cleanup
      
      * full link
      
      * fix model type
      
      * merge porting notes
      
      * style
      
      * cleanup
      
      * have to create a DecoderConfig object to handle vocab_size properly
      
      * doc fix
      
      * add note (not a public class)
      
      * parametrize
      
      * add bleu scores integration tests
      
      * skip test if sacrebleu is not installed
      
      * cache heavy models/tokenizers
      
      * some tweaks
      
      * remove tokens that aren't used
      
      * more purging
      
      * simplify code
      
      * switch to using decoder_start_token_id
      
      * add doc
      
      * Revert "major refactor (reuse-bart)"
      
      This reverts commit 226dad15ca6a9ef4e26178526e878e8fc5c85874.
      
      * decouple from bart
      
      * remove unused code #1
      
      * remove unused code #2
      
      * remove unused code #3
      
      * update instructions
      
      * clean up
      
      * move bleu eval to examples
      
      * check import only once
      
      * move data+gen script into files
      
      * reuse via import
      
      * take less space
      
      * add prepare_seq2seq_batch (auto-tested)
      
      * cleanup
      
      * recode test to use json instead of yaml
      
      * ignore keys not needed
      
      * use the new -y in transformers-cli upload -y
      
      * [xlm tok] config dict: fix str into int to match definition (#7034)
      
      * [s2s] --eval_max_generate_length (#7018)
      
      * Fix CI with change of name of nlp (#7054)
      
      * nlp -> datasets
      
      * More nlp -> datasets
      
      * Woopsie
      
      * More nlp -> datasets
      
      * One last
      
      * extending to support allen_nlp wmt models
      
      - allow a specific checkpoint file to be passed
      - more arg settings
      - scripts for allen_nlp models
      
      * sync with changes
      
      * s/fsmt-wmt/wmt/ in model names
      
      * s/fsmt-wmt/wmt/ in model names (p2)
      
      * s/fsmt-wmt/wmt/ in model names (p3)
      
      * switch to a better checkpoint
      
      * typo
      
      * make non-optional args such - adjust tests where possible or skip when there is no other choice
      
      * consistency
      
      * style
      
      * adjust header
      
      * cards moved (model rename)
      
      * use best custom hparams
      
      * update info
      
      * remove old cards
      
      * cleanup
      
      * s/stas/facebook/
      
      * update scores
      
      * s/allen_nlp/allenai/
      
      * url maps aren't needed
      
      * typo
      
      * move all the doc / build /eval generators to their own scripts
      
      * cleanup
      
      * Apply suggestions from code review
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Apply suggestions from code review
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * fix indent
      
      * duplicated line
      
      * style
      
      * use the correct add_start_docstrings
      
      * oops
      
      * resizing can't be done with the core approach, due to 2 dicts
      
      * check that the arg is a list
      
      * style
      
      * style
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
  26. 14 Sep, 2020 3 commits