1. 19 Oct, 2020 1 commit
    • ProphetNet (#7157) · 2422cda0
      Weizhen authored
      * add new model prophetnet
      
      prophetnet modified
      
      modify codes as suggested v1
      
      add prophetnet test files
      
      * still bugs, because of changed output formats of encoder and decoder
      
      * move prophetnet into the latest version
      
      * clean integration tests
      
      * clean tokenizers
      
      * add xlm config to init
      
      * correct typo in init
      
      * further refactoring
      
      * continue refactor
      
      * save parallel
      
      * add decoder_attention_mask
      
      * fix use_cache vs. past_key_values
      
      * fix common tests
      
      * change decoder output logits
      
      * fix xlm tests
      
      * make common tests pass
      
      * change model architecture
      
      * add tokenizer tests
      
      * finalize model structure
      
      * no weight mapping
      
      * correct n-gram stream attention mask as discussed with qweizhen
      
      * remove unused import
      
      * fix index.rst
      
      * fix tests
      
      * delete unnecessary code
      
      * add fast integration test
      
      * rename weights
      
      * final weight remapping
      
      * save intermediate
      
      * Descriptions for Prophetnet Config File
      
      * finish all models
      
      * finish new model outputs
      
      * delete unnecessary files
      
      * refactor encoder layer
      
      * add dummy docs
      
      * code quality
      
      * fix tests
      
      * add model pages to doctree
      
      * further refactor
      
      * more refactor, more tests
      
      * finish code refactor and tests
      
      * remove unnecessary files
      
      * further clean up
      
      * add docstring template
      
      * finish tokenizer doc
      
      * finish prophetnet
      
      * fix copies
      
      * fix typos
      
      * fix tf tests
      
      * fix fp16
      
      * fix tf test 2nd try
      
      * fix code quality
      
      * add test for each model
      
      * merge new tests to branch
      
      * Update model_cards/microsoft/prophetnet-large-uncased-cnndm/README.md
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * Update model_cards/microsoft/prophetnet-large-uncased-cnndm/README.md
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * Update src/transformers/modeling_prophetnet.py
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * Update utils/check_repo.py
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * apply Sam's and Sylvain's comments
      
      * make style
      
      * remove unnecessary code
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/configuration_prophetnet.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * implement Lysandre's comments
      
      * correct docs
      
      * fix isort
      
      * fix tokenizers
      
      * fix copies
      Co-authored-by: weizhen <weizhen@mail.ustc.edu.cn>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
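      A minimal usage sketch for the model this PR adds, assuming the microsoft/prophetnet-large-uncased-cnndm checkpoint named in the model-card updates above; the input text and generation settings are illustrative choices, not values taken from the PR:

      ```python
      # Hedged sketch: summarization with the ProphetNet classes added in this PR.
      # Checkpoint id comes from the model-card paths above; generate() arguments
      # are illustrative assumptions.
      from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer

      name = "microsoft/prophetnet-large-uncased-cnndm"
      tokenizer = ProphetNetTokenizer.from_pretrained(name)
      model = ProphetNetForConditionalGeneration.from_pretrained(name)

      article = "..."  # any article text to summarize
      inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
      summary_ids = model.generate(inputs.input_ids, num_beams=4, max_length=100, early_stopping=True)
      print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
      ```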
  2. 18 Oct, 2020 1 commit
    • [Dependencies|tokenizers] Make both SentencePiece and Tokenizers optional dependencies (#7659) · ba8c4d0a
      Thomas Wolf authored
      * splitting fast and slow tokenizers [WIP]
      
      * [WIP] splitting sentencepiece and tokenizers dependencies
      
      * update dummy objects
      
      * add name_or_path to models and tokenizers
      
      * prefix added to file names
      
      * prefix
      
      * styling + quality
      
      * splitting all the tokenizer files - sorting sentencepiece-based ones
      
      * update tokenizer version up to 0.9.0
      
      * remove hard dependency on sentencepiece 🎉
      
      * and removed hard dependency on tokenizers 🎉
      
      * update conversion script
      
      * update missing models
      
      * fixing tests
      
      * move test_tokenization_fast to main tokenization tests - fix bugs
      
      * bump up tokenizers
      
      * fix bert_generation
      
      * update and fix several tokenizers
      
      * keep sentencepiece in deps for now
      
      * fix funnel and deberta tests
      
      * fix fsmt
      
      * fix marian tests
      
      * fix layoutlm
      
      * fix squeezebert and gpt2
      
      * fix T5 tokenization
      
      * fix xlnet tests
      
      * style
      
      * fix mbart
      
      * bump up tokenizers to 0.9.2
      
      * fix model tests
      
      * fix tf models
      
      * fix seq2seq examples
      
      * fix tests without sentencepiece
      
      * fix slow => fast conversion without sentencepiece
      
      * update auto and bert generation tests
      
      * fix mbart tests
      
      * fix auto and common test without tokenizers
      
      * fix tests without tokenizers
      
      * clean up tests; lighten them up when tokenizers + sentencepiece are both off
      
      * style quality and tests fixing
      
      * add sentencepiece to doc/examples reqs
      
      * leave sentencepiece on for now
      
      * style + quality, split herbert and fix pegasus
      
      * WIP Herbert fast
      
      * add sample_text_no_unicode and fix herbert tokenization
      
      * skip FSMT example test for now
      
      * fix style
      
      * fix fsmt in example tests
      
      * update following Lysandre and Sylvain's comments
      
      * Update src/transformers/testing_utils.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/testing_utils.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/tokenization_utils_base.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/tokenization_utils_base.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
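      What "optional" means in practice after this PR, as a sketch: the library imports with neither sentencepiece nor tokenizers installed, and code can branch on the availability helpers (which lived in file_utils at the time):

      ```python
      # Hedged sketch of guarding on the now-optional dependencies.
      from transformers import AutoTokenizer
      from transformers.file_utils import is_sentencepiece_available, is_tokenizers_available

      print("sentencepiece installed:", is_sentencepiece_available())
      print("tokenizers installed:", is_tokenizers_available())

      # Rust-backed fast tokenizer when the `tokenizers` package is present,
      # pure-Python slow tokenizer otherwise.
      tok = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=is_tokenizers_available())
      ```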
  3. 01 Oct, 2020 1 commit
  4. 28 Sep, 2020 1 commit
  5. 22 Sep, 2020 1 commit
    • RAG (#6813) · c754c41c
      Ola Piktus authored
      * added rag WIP
      
      * path fix
      
      * Formatting / renaming prior to actual work
      
      * added rag WIP
      
      * path fix
      
      * Formatting / renaming prior to actual work
      
      * added rag WIP
      
      * path fix
      
      * Formatting / renaming prior to actual work
      
      * added rag WIP
      
      * Formatting / renaming prior to actual work
      
      * First commit
      
      * improve comments
      
      * Retrieval evaluation scripts
      
      * refactor to include modeling outputs + MPI retriever
      
      * Fix rag-token model + refactor
      
      * Various fixes + finetuning logic
      
      * use_bos fix
      
      * Retrieval refactor
      
      * Finetuning refactoring and cleanup
      
      * Add documentation and cleanup
      
      * Remove set_up_rag_env.sh file
      
      * Fix retrieval with HF index
      
      * Fix import errors
      
      * Fix quality errors
      
      * Refactor as per suggestions in https://github.com/huggingface/transformers/pull/6813#issuecomment-687208867
      
      * fix quality
      
      * Fix RAG Sequence generation
      
      * minor cleanup plus initial tests
      
      * fix test
      
      * fix tests 2
      
      * Comments fix
      
      * post-merge fixes
      
      * Improve readme + post-rebase refactor
      
      * Extra dependencies for tests
      
      * Fix tests
      
      * Fix tests 2
      
      * Refactor test requirements
      
      * Fix tests 3
      
      * Post-rebase refactor
      
      * rename nlp->datasets
      
      * RAG integration tests
      
      * add tokenizer to slow integration test and allow retriever to run on cpu
      
      * add tests; fix position ids warning
      
      * change structure
      
      * change structure
      
      * add from encoder generator
      
      * save working solution
      
      * make all integration tests pass
      
      * add RagTokenizer.save/from_pretrained and RagRetriever.save/from_pretrained
      
      * don't save paths
      
      * delete unnecessary imports
      
      * pass config to AutoTokenizer.from_pretrained for Rag tokenizers
      
      * init wiki_dpr only once
      
      * hardcode legacy index and passages paths (todo: add the right urls)
      
      * finalize config
      
      * finalize retriver api and config api
      
      * LegacyIndex index download refactor
      
      * add dpr to autotokenizer
      
      * make from pretrained more flexible
      
      * fix ragfortokengeneration
      
      * small name changes in tokenizer
      
      * add labels to models
      
      * change default index name
      
      * add retrieval tests
      
      * finish token generate
      
      * align test with previous version and make all tests pass
      
      * add tests
      
      * finalize tests
      
      * implement Thom's suggestions
      
      * add first version of test
      
      * make first tests work
      
      * make retriever platform agnostic
      
      * naming
      
      * style
      
      * add legacy index URL
      
      * docstrings + simple retrieval test for distributed
      
      * clean model api
      
      * add doc_ids to retriever's outputs
      
      * fix retrieval tests
      
      * finish model outputs
      
      * finalize model api
      
      * fix generate problem for rag
      
      * fix generate for other models
      
      * fix some tests
      
      * save intermediate
      
      * set generate to default
      
      * big refactor generate
      
      * delete rag_api
      
      * correct pip faiss install
      
      * fix auto tokenization test
      
      * fix faiss install
      
      * fix test
      
      * move the distributed logic to examples
      
      * model page
      
      * docs
      
      * finish tests
      
      * fix dependencies
      
      * fix import in __init__
      
      * Refactor eval_rag and finetune scripts
      
      * start docstring
      
      * add psutil to test
      
      * fix tf test
      
      * move require torch to top
      
      * fix retrieval test
      
      * align naming
      
      * finish automodel
      
      * fix repo consistency
      
      * test ragtokenizer save/load
      
      * add rag model output docs
      
      * fix ragtokenizer save/load from pretrained
      
      * fix tokenizer dir
      
      * remove torch in retrieval
      
      * fix docs
      
      * fix finetune scripts
      
      * finish model docs
      
      * finish docs
      
      * remove auto model for now
      
      * add require torch
      
      * remove solved todos
      
      * integrate Sylvain's suggestions
      
      * Sam's comments
      
      * correct mistake on purpose
      
      * improve README
      
      * Add generation test cases
      
      * fix rag token
      
      * clean token generate
      
      * fix test
      
      * add note to test
      
      * fix attention mask
      
      * add t5 test for rag
      
      * Fix handling prefix in finetune.py
      
      * don't overwrite index_name
      Co-authored-by: Patrick Lewis <plewis@fb.com>
      Co-authored-by: Aleksandra Piktus <piktus@devfair0141.h2.fair>
      Co-authored-by: Aleksandra Piktus <piktus@learnfair5102.h2.fair>
      Co-authored-by: Aleksandra Piktus <piktus@learnfair5067.h2.fair>
      Co-authored-by: Your Name <you@example.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Quentin Lhoest <lhoest.q@gmail.com>
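      A sketch of the RagTokenizer / RagRetriever / model surface this PR lands, written against the current API; use_dummy_dataset avoids the full wiki_dpr download (it still needs the datasets and faiss packages) and is meant for smoke tests only:

      ```python
      # Hedged sketch: end-to-end RAG generation with a dummy retrieval index.
      from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

      tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
      retriever = RagRetriever.from_pretrained(
          "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
      )
      model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

      inputs = tokenizer("who holds the record in 100m freestyle", return_tensors="pt")
      generated = model.generate(input_ids=inputs["input_ids"])
      print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
      ```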
  6. 01 Sep, 2020 1 commit
    • [Generate] Facilitate PyTorch generate using `ModelOutputs` (#6735) · afc4ece4
      Patrick von Platen authored
      * fix generate for GPT2 Double Head
      
      * fix gpt2 double head model
      
      * fix bart / t5
      
      * also add for no beam search
      
      * fix no beam search
      
      * fix encoder decoder
      
      * simplify t5
      
      * simplify t5
      
      * fix t5 tests
      
      * fix BART
      
      * fix transfo-xl
      
      * fix conflict
      
      * integrating Sylvain's and Sam's comments
      
      * fix tf past_decoder_key_values
      
      * fix enc dec test
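      The core change in miniature: generate-style loops read the decoder cache from the ModelOutput by name instead of by tuple position. A sketch against current argument names (some models still used `past` at the time of this commit):

      ```python
      # Hedged sketch of one greedy decoding step with a named cache.
      from transformers import GPT2LMHeadModel, GPT2Tokenizer

      tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
      model = GPT2LMHeadModel.from_pretrained("gpt2")

      input_ids = tokenizer("Hello, my dog", return_tensors="pt").input_ids
      out = model(input_ids, use_cache=True, return_dict=True)
      next_token = out.logits[:, -1:].argmax(-1)

      # Next step feeds only the new token plus the cached key/values, by name.
      out = model(next_token, past_key_values=out.past_key_values, use_cache=True, return_dict=True)
      ```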
  7. 26 Aug, 2020 1 commit
  8. 25 Aug, 2020 1 commit
  9. 24 Aug, 2020 1 commit
  10. 19 Aug, 2020 1 commit
  11. 13 Aug, 2020 2 commits
    • Test model outputs equivalence (#6445) · f7cbc13d
      Lysandre Debut authored
      * Test model outputs equivalence
      
      * Fix failing tests
      
      * From dict to kwargs
      
      * DistilBERT
      
      * Addressing @sgugger and @patrickvonplaten's comments
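      The property this test suite enforces, in miniature (a sketch, not the actual test code):

      ```python
      # Dict-style and tuple-style outputs must carry the same tensors.
      import torch
      from transformers import BertModel, BertTokenizer

      tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
      model = BertModel.from_pretrained("bert-base-uncased")  # eval mode, deterministic
      enc = tokenizer("equivalence check", return_tensors="pt")

      dict_out = model(**enc, return_dict=True)
      tuple_out = model(**enc, return_dict=False)
      assert torch.allclose(dict_out.last_hidden_state, tuple_out[0])
      ```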
    • cleanup tf unittests: part 2 (#6260) · e983da0e
      Stas Bekman authored
      * cleanup torch unittests: part 2
      
      * remove trailing comma added by isort, which breaks flake8
      
      * one more comma
      
      * revert odd balls
      
      * part 3: odd cases
      
      * more ["key"] -> .key refactoring
      
      * .numpy() is not needed
      
      * more unnecessary .numpy() calls removed
      
      * more simplification
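      The `["key"] -> .key` refactor mentioned above relies on ModelOutput accepting both access styles; a sketch of the before/after:

      ```python
      # Hedged sketch: mapping-style vs attribute access on a TF model output.
      from transformers import BertTokenizer, TFBertModel

      tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
      model = TFBertModel.from_pretrained("bert-base-uncased")
      enc = tokenizer("hello", return_tensors="tf")

      outputs = model(enc)
      hidden = outputs["last_hidden_state"]  # the style the cleanup moves away from
      hidden = outputs.last_hidden_state     # the style the cleaned-up tests use
      ```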
  12. 04 Aug, 2020 1 commit
  13. 31 Jul, 2020 1 commit
  14. 30 Jul, 2020 1 commit
    • Switch from return_tuple to return_dict (#6138) · 91cb9546
      Sylvain Gugger authored
      * Switch from return_tuple to return_dict
      
      * Fix test
      
      * [WIP] Test TF Flaubert + Add {XLM, Flaubert}{TokenClassification, MultipleC… (#5614)
      
      * Test TF Flaubert + Add {XLM, Flaubert}{TokenClassification, MultipleChoice} models and tests
      
      * AutoModels
      
      Tiny tweaks
      
      * Style
      
      * Final changes before merge
      
      * Re-order for simpler review
      
      * Final fixes
      
      * Addressing @sgugger's comments
      
      * Test MultipleChoice
      
      * Rework TF trainer (#6038)
      
      * Fully rework training/prediction loops
      
      * fix method name
      
      * Fix variable name
      
      * Fix property name
      
      * Fix scope
      
      * Fix method name
      
      * Fix tuple index
      
      * Fix tuple index
      
      * Fix indentation
      
      * Fix variable name
      
      * fix eval before log
      
      * Add drop remainder for test dataset
      
      * Fix step number + fix logging datetime
      
      * fix eval loss value
      
      * use global step instead of step + fix logging at step 0
      
      * Fix logging datetime
      
      * Fix global_step usage
      
      * Fix breaking loop + logging datetime
      
      * Fix step in prediction loop
      
      * Fix step breaking
      
      * Fix train/test loops
      
      * Force TF at least 2.2 for the trainer
      
      * Use assert_cardinality to facilitate the dataset size computation
      
      * Log steps per epoch
      
      * Make tfds compliant with TPU
      
      * Make tfds compliant with TPU
      
      * Use TF dataset enumerate instead of the Python one
      
      * revert previous commit
      
      * Fix data_dir
      
      * Apply style
      
      * rebase on master
      
      * Address Sylvain's comments
      
      * Address Sylvain's and Lysandre comments
      
      * Trigger CI
      
      * Remove unused import
      
      * Switch from return_tuple to return_dict
      
      * Fix test
      
      * Add recent model
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      Co-authored-by: Julien Plu <plu.julien@gmail.com>
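      What the rename means at call sites; at the time of this PR return_dict defaulted to False, so existing tuple indexing kept working unchanged:

      ```python
      # Hedged sketch of both output modes after the rename.
      from transformers import BertModel, BertTokenizer

      tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
      model = BertModel.from_pretrained("bert-base-uncased")
      enc = tokenizer("hello world", return_tensors="pt")

      outputs = model(**enc, return_dict=True)   # ModelOutput with named fields
      hidden = outputs.last_hidden_state

      outputs = model(**enc, return_dict=False)  # plain tuple, the old behavior
      hidden = outputs[0]
      ```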
  15. 15 Jul, 2020 3 commits
  16. 10 Jul, 2020 1 commit
    • Change model outputs types to self-document outputs (#5438) · edfd82f5
      Sylvain Gugger authored
      * [WIP] Proposal for model outputs
      
      * All Bert models
      
      * Make CI green maybe?
      
      * Fix ONNX test
      
      * Isolate ModelOutput from pt and tf
      
      * Formatting
      
      * Add Electra models
      
      * Auto-generate docstrings from outputs
      
      * Add TF outputs
      
      * Add some BERT models
      
      * Revert TF side
      
      * Remove last traces of TF changes
      
      * Fail with a clear error message
      
      * Add Albert and work through Bart
      
      * Add CTRL and DistilBert
      
      * Formatting
      
      * Progress on Bart
      
      * Renames and finish Bart
      
      * Formatting
      
      * Fix last test
      
      * Add DPR
      
      * Finish Electra and add FlauBERT
      
      * Add GPT2
      
      * Add Longformer
      
      * Add MMBT
      
      * Add MobileBert
      
      * Add GPT
      
      * Formatting
      
      * Add Reformer
      
      * Add Roberta
      
      * Add T5
      
      * Add Transformer XL
      
      * Fix test
      
      * Add XLM + fix XLMForTokenClassification
      
      * Style + XLMRoberta
      
      * Add XLNet
      
      * Formatting
      
      * Add doc of return_tuple arg
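      The access styles that make the new outputs self-documenting, sketched against the current API (the opt-out flag was called return_tuple in this PR and was renamed return_dict later; see the #6138 entry above):

      ```python
      # Hedged sketch: three equivalent ways to read a model output.
      from transformers import BertModel, BertTokenizer

      tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
      model = BertModel.from_pretrained("bert-base-uncased")
      enc = tokenizer("self-documenting outputs", return_tensors="pt")

      outputs = model(**enc)
      hidden = outputs.last_hidden_state     # by name
      hidden = outputs["last_hidden_state"]  # as a mapping
      hidden = outputs[0]                    # by position, as before
      ```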
  17. 07 Jul, 2020 1 commit
    • Make T5 compatible with ONNX (#5518) · 69122657
      Abel authored
      * Default decoder inputs to encoder ones for T5 if neither are specified.
      
      * Fixing typo, now all tests are passing.
      
      * Changing einsum to operations supported by onnx
      
      * Adding a test to ensure T5 can be exported to onnx op>9
      
      * Modified test for onnx export to make it faster
      
      * Styling changes.
      
      * Styling changes.
      
      * Changing notation for matrix multiplication
      Co-authored-by: Abel Riboulot <tkai@protomail.com>
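      The core of the einsum-to-supported-ops change, in isolation: T5's attention-score einsum is equivalent to a matmul against a transposed key, which exports cleanly. Shapes below are illustrative (batch, heads, query/key length, head dim):

      ```python
      import torch

      q = torch.randn(2, 8, 5, 64)
      k = torch.randn(2, 8, 7, 64)

      scores_einsum = torch.einsum("bnqd,bnkd->bnqk", q, k)  # problematic for ONNX at opset 9
      scores_matmul = torch.matmul(q, k.transpose(3, 2))     # ONNX-friendly equivalent
      assert torch.allclose(scores_einsum, scores_matmul, atol=1e-5)
      ```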
  18. 01 Jul, 2020 1 commit
  19. 26 Jun, 2020 1 commit
  20. 24 Jun, 2020 1 commit
  21. 16 Jun, 2020 1 commit
  22. 05 Jun, 2020 1 commit
  23. 02 Jun, 2020 1 commit
    • Kill model archive maps (#4636) · d4c2cb40
      Julien Chaumond authored
      * Kill model archive maps
      
      * Fixup
      
      * Also kill model_archive_map for MaskedBertPreTrainedModel
      
      * Unhook config_archive_map
      
      * Tokenizers: align with model id changes
      
      * make style && make quality
      
      * Fix CI
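      The practical effect, sketched: from_pretrained resolves checkpoints by model id against the hub naming scheme instead of looking them up in a hardcoded *_PRETRAINED_MODEL_ARCHIVE_MAP:

      ```python
      # Hedged sketch: canonical and user/org model ids both resolve directly.
      from transformers import BertModel

      model = BertModel.from_pretrained("bert-base-uncased")             # canonical id
      model = BertModel.from_pretrained("dbmdz/bert-base-german-cased")  # user/org id
      ```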
  24. 19 May, 2020 1 commit
  25. 18 May, 2020 1 commit
  26. 01 May, 2020 1 commit
    • [ci] Load pretrained models into the default (long-lived) cache · f54dc3f4
      Julien Chaumond authored
      There's an inconsistency right now where:
      - we load some models into CACHE_DIR
      - and some models in the default cache
      - and often, in both for the same models
      
      When running the RUN_SLOW tests, this takes a lot of disk space, time, and bandwidth.
      
      I'd rather always use the default cache
  27. 17 Apr, 2020 1 commit
  28. 16 Apr, 2020 1 commit
    • [TFT5, Cache] Add cache to TFT5 (#3772) · 38f7461d
      Patrick von Platen authored
      * correct gpt2 test inputs
      
      * make style
      
      * delete modeling_gpt2 change in test file
      
      * translate from pytorch
      
      * correct tests
      
      * fix conflicts
      
      * fix conflicts
      
      * fix conflicts
      
      * fix conflicts
      
      * make tensorflow t5 caching work
      
      * make style
      
      * clean reorder cache
      
      * remove unnecessary spaces
      
      * fix test
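      What the new cache enables, sketched against the current API: generation reuses per-step key/values instead of re-running the full decoder prefix at every step:

      ```python
      # Hedged sketch: cached autoregressive decoding with TF T5.
      from transformers import T5Tokenizer, TFT5ForConditionalGeneration

      tokenizer = T5Tokenizer.from_pretrained("t5-small")
      model = TFT5ForConditionalGeneration.from_pretrained("t5-small")

      inputs = tokenizer("translate English to German: How are you?", return_tensors="tf")
      output_ids = model.generate(inputs.input_ids, use_cache=True, max_length=40)
      print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
      ```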
  29. 14 Apr, 2020 1 commit
  30. 09 Apr, 2020 1 commit
  31. 01 Apr, 2020 1 commit
  32. 30 Mar, 2020 1 commit
    • [T5] make decoder input ids optional for t5 training (#3521) · 75ec6c9e
      Patrick von Platen authored
      * make decoder input ids optional for t5 training
      
      * lm_labels should not be shifted in t5
      
      * add tests
      
      * finish shift right functionality for PT T5
      
      * move shift right to correct class
      
      * cleaner code
      
      * replace -100 values with pad token id
      
      * add assert statement
      
      * remove unnecessary for loop
      
      * make style
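      What "decoder input ids optional" means for training code, sketched with the current API (the argument was still called lm_labels at the time of this commit): pass labels only, and the model derives decoder_input_ids by shifting right, using the pad token as decoder start and mapping -100 label positions back to pad ids:

      ```python
      # Hedged sketch: a T5 training step without explicit decoder_input_ids.
      from transformers import T5ForConditionalGeneration, T5Tokenizer

      tokenizer = T5Tokenizer.from_pretrained("t5-small")
      model = T5ForConditionalGeneration.from_pretrained("t5-small")

      enc = tokenizer("translate English to German: Hello", return_tensors="pt")
      labels = tokenizer("Hallo", return_tensors="pt").input_ids

      outputs = model(input_ids=enc.input_ids, labels=labels)  # decoder inputs derived internally
      print(float(outputs.loss))
      ```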
  33. 19 Mar, 2020 1 commit
    • Support T5 Generation (#3228) · bbf26c4e
      Patrick von Platen authored
      * fix conflicts
      
      * update bart max length test
      
      * correct spelling mistakes
      
      * implemented model specific encode function
      
      * fix merge conflicts
      
      * better naming
      
      * save intermediate state -> need to rethink structure a bit
      
      * leave tf problem as it is for now
      
      * current version
      
      * add layers.pop
      
      * remove ipdb
      
      * make style
      
      * clean return cut decoding
      
      * remove ipdbs
      
      * Fix restoring layers in the decoder that don't exist.
      
      * push good intermediate solution for now
      
      * fix conflicts
      
      * always good to refuse to merge conflicts when rebasing
      
      * fix small bug
      
      * improve function calls
      
      * remove unused file
      
      * add correct scope behavior for t5_generate
      Co-authored-by: Morgan Funtowicz <funtowiczmo@gmail.com>
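      The capability this PR lands, sketched with the current API: T5 running through the shared generate() path, beam search included (prompt and settings below are illustrative):

      ```python
      # Hedged sketch: beam-search generation with T5.
      from transformers import T5ForConditionalGeneration, T5Tokenizer

      tokenizer = T5Tokenizer.from_pretrained("t5-small")
      model = T5ForConditionalGeneration.from_pretrained("t5-small")

      text = "summarize: studies have shown that owning a dog is good for you"
      input_ids = tokenizer(text, return_tensors="pt").input_ids
      output_ids = model.generate(input_ids, num_beams=4, max_length=32, early_stopping=True)
      print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
      ```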
  34. 26 Feb, 2020 1 commit
  35. 20 Feb, 2020 1 commit
    • New BartModel (#2745) · 53ce3854
      Sam Shleifer authored
      * Results same as fairseq
      * Wrote a ton of tests
      * Struggled with api signatures
      * added some docs
      
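      A minimal sketch of the new model class, using the checkpoint's current hub id (facebook/bart-large; at merge time checkpoints were not yet namespaced under orgs):

      ```python
      # Hedged sketch: running the base BartModel forward pass.
      from transformers import BartModel, BartTokenizer

      tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
      model = BartModel.from_pretrained("facebook/bart-large")

      enc = tokenizer("Hello, BART!", return_tensors="pt")
      outputs = model(**enc)
      print(outputs.last_hidden_state.shape)  # (batch, sequence length, hidden size)
      ```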
  36. 06 Jan, 2020 2 commits