"src/git@developer.sourcefind.cn:gaoqiong/migraphx.git" did not exist on "b0dc015ca60e60b65103fff56ad3d98398e4b7f0"
  1. 02 Dec, 2020 1 commit
    • [PyTorch] Refactor Resize Token Embeddings (#8880) · 443f67e8
      Patrick von Platen authored
      * fix resize tokens
      
      * correct mobile_bert
      
      * move embedding fix into modeling_utils.py
      
      * refactor
      
      * fix lm head resize
      
      * refactor
      
      * break lines to make sylvain happy
      
      * add new tests
      
      * fix typo
      
      * improve test
      
      * skip bart-like for now
      
      * check if base_model = get(...) is necessary
      
      * clean files
      
      * improve test
      
      * fix tests
      
      * revert style templates
      
      * Update templates/adding_a_new_model/cookiecutter-template-{{cookiecutter.modelname}}/modeling_{{cookiecutter.lowercase_modelname}}.py
      443f67e8
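
      A minimal usage sketch of the API this refactor touches (the checkpoint and token names are illustrative, not taken from the commit):
      ```
      from transformers import AutoModelForMaskedLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
      model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

      # Add tokens to the vocabulary, then grow the input embeddings
      # (and, for tied models, the LM head) to the new vocab size.
      tokenizer.add_tokens(["<new_token>"])
      model.resize_token_embeddings(len(tokenizer))
      ```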
  2. 27 Nov, 2020 2 commits
  3. 25 Nov, 2020 1 commit
  4. 23 Nov, 2020 2 commits
  5. 16 Nov, 2020 1 commit
    • Switch `return_dict` to `True` by default. (#8530) · 1073a2bd
      Sylvain Gugger authored
      * Use the CI to identify failing tests
      
      * Remove from all examples and tests
      
      * More default switch
      
      * Fixes
      
      * More test fixes
      
      * More fixes
      
      * Last fixes hopefully
      
      * Use the CI to identify failing tests
      
      * Remove from all examples and tests
      
      * More default switch
      
      * Fixes
      
      * More test fixes
      
      * More fixes
      
      * Last fixes hopefully
      
      * Run on the real suite
      
      * Fix slow tests
      1073a2bd
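
      A hedged sketch of what the new default means for callers (model name and input are illustrative): outputs are ModelOutput objects accessed by attribute, and `return_dict=False` restores the old tuples per call.
      ```
      import torch
      from transformers import AutoModelForSequenceClassification, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
      model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
      inputs = tokenizer("Hello world", return_tensors="pt")

      with torch.no_grad():
          outputs = model(**inputs)                      # a ModelOutput by default after this change
      logits = outputs.logits                            # self-documenting attribute access
      legacy_tuple = model(**inputs, return_dict=False)  # opt back into plain tuples per call
      ```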
  6. 13 Nov, 2020 1 commit
  7. 10 Nov, 2020 1 commit
  8. 09 Nov, 2020 1 commit
  9. 06 Nov, 2020 1 commit
  10. 05 Nov, 2020 1 commit
  11. 03 Nov, 2020 1 commit
    • Refactoring the generate() function (#6949) · a1bbcf3f
      Patrick von Platen authored
      * first draft
      
      * show design proposition for new generate method
      
      * up
      
      * make better readable
      
      * make first version
      
      * gpt2 tests pass
      
      * make beam search for gpt2 work
      
      * add first encoder-decoder code
      
      * delete typo
      
      * make t5 work
      
      * save intermediate
      
      * make bart work with beam search
      
      * finish beam search bart / t5
      
      * add default kwargs
      
      * make more tests pass
      
      * fix no bad words sampler
      
      * some fixes and tests for all distribution processors
      
      * fix test
      
      * fix rag slow tests
      
      * merge to master
      
      * add nograd to generate
      
      * make all slow tests pass
      
      * speed up generate
      
      * fix edge case bug
      
      * small fix
      
      * correct typo
      
      * add type hints and docstrings
      
      * fix typos in tests
      
      * add beam search tests
      
      * add tests for beam scorer
      
      * fix test rag
      
      * finish beam search tests
      
      * move generation tests into a separate file
      
      * fix generation tests
      
      * more tests
      
      * add aggressive generation tests
      
      * fix tests
      
      * add gpt2 sample test
      
      * add more docstring
      
      * add more docs
      
      * finish doc strings
      
      * apply some more of Sylvain's and Sam's comments
      
      * fix some typos
      
      * make fix copies
      
      * apply Lysandre's and Sylvain's comments
      
      * final corrections on examples
      
      * small fix for reformer
      a1bbcf3f
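
      The public surface reorganized here is `generate()`; a short sketch of the two main decoding paths it dispatches to (checkpoint and prompt are illustrative, not from the PR):
      ```
      from transformers import GPT2LMHeadModel, GPT2Tokenizer

      tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
      model = GPT2LMHeadModel.from_pretrained("gpt2")
      input_ids = tokenizer("The generate refactor splits decoding into", return_tensors="pt").input_ids

      # Beam search path.
      beam_ids = model.generate(input_ids, num_beams=4, no_repeat_ngram_size=2, max_length=40)
      # Sampling path, shaped by logits processors / warpers (top-k, top-p).
      sample_ids = model.generate(input_ids, do_sample=True, top_k=50, top_p=0.95, max_length=40)
      print(tokenizer.decode(beam_ids[0], skip_special_tokens=True))
      ```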
  12. 30 Oct, 2020 1 commit
    • Ci test tf super slow (#8007) · 10f8c636
      Lysandre Debut authored
      * Test TF GPU CI
      
      * Change cache
      
      * Fix missing torch requirement
      
      * Fix some model tests
      
      
      Style
      
      * LXMERT
      
      * MobileBERT
      
      * Longformer skip test
      
      * XLNet
      
      * The rest of the tests
      
      * RAG goes OOM in multi gpu setup
      
      * YAML test files
      
      * Last fixes
      
      * Skip doctests
      
      * Fill mask tests
      
      * Yaml files
      
      * Last test fix
      
      * Style
      
      * Update cache
      
      * Change ONNX tests to slow + use tiny model
      10f8c636
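
      These CI jobs rely on the repository's slow-test convention; a hedged sketch of how such a test is typically marked (the test class and body are hypothetical, not from the PR):
      ```
      import unittest

      from transformers.testing_utils import require_tf, slow


      class TFExampleSlowTest(unittest.TestCase):  # hypothetical test case
          @slow        # collected only when RUN_SLOW=1, e.g. on the scheduled GPU job
          @require_tf  # skipped when TensorFlow is not installed
          def test_full_forward_pass(self):
              self.assertTrue(True)  # placeholder body for the sketch
      ```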
  13. 29 Oct, 2020 1 commit
  14. 21 Oct, 2020 1 commit
  15. 20 Oct, 2020 1 commit
  16. 19 Oct, 2020 1 commit
    • ProphetNet (#7157) · 2422cda0
      Weizhen authored
      
      
      * add new model prophetnet
      
      prophetnet modified
      
      modify codes as suggested v1
      
      add prophetnet test files
      
      * still has bugs, because of the changed output formats of the encoder and decoder
      
      * move prophetnet into the latest version
      
      * clean integration tests
      
      * clean tokenizers
      
      * add xlm config to init
      
      * correct typo in init
      
      * further refactoring
      
      * continue refactor
      
      * save parallel
      
      * add decoder_attention_mask
      
      * fix use_cache vs. past_key_values
      
      * fix common tests
      
      * change decoder output logits
      
      * fix xlm tests
      
      * make common tests pass
      
      * change model architecture
      
      * add tokenizer tests
      
      * finalize model structure
      
      * no weight mapping
      
      * correct n-gram stream attention mask as discussed with qweizhen
      
      * remove unused import
      
      * fix index.rst
      
      * fix tests
      
      * delete unnecessary code
      
      * add fast integration test
      
      * rename weights
      
      * final weight remapping
      
      * save intermediate
      
      * Descriptions for Prophetnet Config File
      
      * finish all models
      
      * finish new model outputs
      
      * delete unnecessary files
      
      * refactor encoder layer
      
      * add dummy docs
      
      * code quality
      
      * fix tests
      
      * add model pages to doctree
      
      * further refactor
      
      * more refactor, more tests
      
      * finish code refactor and tests
      
      * remove unnecessary files
      
      * further clean up
      
      * add docstring template
      
      * finish tokenizer doc
      
      * finish prophetnet
      
      * fix copies
      
      * fix typos
      
      * fix tf tests
      
      * fix fp16
      
      * fix tf test 2nd try
      
      * fix code quality
      
      * add test for each model
      
      * merge new tests to branch
      
      * Update model_cards/microsoft/prophetnet-large-uncased-cnndm/README.md
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * Update model_cards/microsoft/prophetnet-large-uncased-cnndm/README.md
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * Update src/transformers/modeling_prophetnet.py
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * Update utils/check_repo.py
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      
      * apply Sam's and Sylvain's comments
      
      * make style
      
      * remove unnecessary code
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update README.md
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/configuration_prophetnet.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * implement Lysandre's comments
      
      * correct docs
      
      * fix isort
      
      * fix tokenizers
      
      * fix copies
      Co-authored-by: weizhen <weizhen@mail.ustc.edu.cn>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      2422cda0
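
      A hedged usage sketch of the model added here, using the CNN/DailyMail checkpoint referenced by the model cards above (the input text is illustrative):
      ```
      from transformers import ProphetNetForConditionalGeneration, ProphetNetTokenizer

      tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased-cnndm")
      model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased-cnndm")

      inputs = tokenizer("A long news article to summarize ...", return_tensors="pt")
      summary_ids = model.generate(inputs.input_ids, num_beams=4, max_length=100, early_stopping=True)
      print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
      ```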
  17. 07 Oct, 2020 1 commit
  18. 01 Oct, 2020 1 commit
  19. 08 Sep, 2020 1 commit
  20. 02 Sep, 2020 1 commit
    • [testing] fix ambiguous test (#6898) · e71f32c0
      Stas Bekman authored
      Since `generate()` does:
      ```
              num_beams = num_beams if num_beams is not None else self.config.num_beams
      ```
      This test fails if `model.config.num_beams > 1` (which is the case in the model I'm porting).
      
      This fix makes the test setup unambiguous by passing an explicit `num_beams=1` to `generate()`.
      
      Thanks.
      e71f32c0
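
      A hedged sketch of the disambiguation described above (checkpoint and prompt are illustrative): pinning `num_beams=1` keeps the test on the greedy path even if the config ships a different default.
      ```
      from transformers import GPT2LMHeadModel, GPT2Tokenizer

      tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
      model = GPT2LMHeadModel.from_pretrained("gpt2")
      model.config.num_beams = 4  # a ported model may ship a default like this in its config

      input_ids = tokenizer("Testing greedy decoding", return_tensors="pt").input_ids
      # Without an explicit num_beams, generate() would fall back to the config value above
      # and silently run beam search; num_beams=1 pins the test to the greedy path.
      output_ids = model.generate(input_ids, do_sample=False, num_beams=1, max_length=20)
      ```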
  21. 26 Aug, 2020 1 commit
  22. 24 Aug, 2020 1 commit
  23. 20 Aug, 2020 1 commit
  24. 19 Aug, 2020 2 commits
  25. 13 Aug, 2020 1 commit
  26. 11 Aug, 2020 1 commit
    • Feed forward chunking (#6024) · b25cec13
      Pradhy729 authored
      
      
      * Chunked feed forward for Bert
      
      This is an initial implementation to test applying feed forward chunking for BERT.
      Will need additional modifications based on output and benchmark results.
      
      * Black and cleanup
      
      * Feed forward chunking in BertLayer class.
      
      * Isort
      
      * add chunking for all models
      
      * fix docs
      
      * Fix typo
      Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
      b25cec13
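
      The library exposes a helper for this pattern (`apply_chunking_to_forward`); the sketch below shows the underlying idea in plain PyTorch rather than the exact implementation from the PR:
      ```
      import torch
      from torch import nn

      def chunked_feed_forward(ff, hidden_states, chunk_size, chunk_dim=1):
          # Apply the feed-forward block to slices along the sequence dimension and
          # re-concatenate, so the large intermediate activation is never fully materialized.
          if chunk_size == 0:
              return ff(hidden_states)
          chunks = hidden_states.split(chunk_size, dim=chunk_dim)
          return torch.cat([ff(chunk) for chunk in chunks], dim=chunk_dim)

      ff = nn.Sequential(nn.Linear(16, 64), nn.GELU(), nn.Linear(64, 16))
      x = torch.randn(2, 128, 16)  # (batch, seq_len, hidden)
      assert torch.allclose(chunked_feed_forward(ff, x, chunk_size=32), ff(x), atol=1e-6)
      ```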
  27. 31 Jul, 2020 1 commit
  28. 30 Jul, 2020 1 commit
    • Switch from return_tuple to return_dict (#6138) · 91cb9546
      Sylvain Gugger authored
      
      
      * Switch from return_tuple to return_dict
      
      * Fix test
      
      * [WIP] Test TF Flaubert + Add {XLM, Flaubert}{TokenClassification, MultipleC… (#5614)
      
      * Test TF Flaubert + Add {XLM, Flaubert}{TokenClassification, MultipleChoice} models and tests
      
      * AutoModels
      
      
      Tiny tweaks
      
      * Style
      
      * Final changes before merge
      
      * Re-order for simpler review
      
      * Final fixes
      
      * Addressing @sgugger's comments
      
      * Test MultipleChoice
      
      * Rework TF trainer (#6038)
      
      * Fully rework training/prediction loops
      
      * fix method name
      
      * Fix variable name
      
      * Fix property name
      
      * Fix scope
      
      * Fix method name
      
      * Fix tuple index
      
      * Fix tuple index
      
      * Fix indentation
      
      * Fix variable name
      
      * fix eval before log
      
      * Add drop remainder for test dataset
      
      * Fix step number + fix logging datetime
      
      * fix eval loss value
      
      * use global step instead of step + fix logging at step 0
      
      * Fix logging datetime
      
      * Fix global_step usage
      
      * Fix breaking loop + logging datetime
      
      * Fix step in prediction loop
      
      * Fix step breaking
      
      * Fix train/test loops
      
      * Force TF at least 2.2 for the trainer
      
      * Use assert_cardinality to facilitate the dataset size computation
      
      * Log steps per epoch
      
      * Make tfds compliant with TPU
      
      * Make tfds compliant with TPU
      
      * Use TF dataset enumerate instead of the Python one
      
      * revert previous commit
      
      * Fix data_dir
      
      * Apply style
      
      * rebase on master
      
      * Address Sylvain's comments
      
      * Address Sylvain's and Lysandre's comments
      
      * Trigger CI
      
      * Remove unused import
      
      * Switch from return_tuple to return_dict
      
      * Fix test
      
      * Add recent model
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      Co-authored-by: Julien Plu <plu.julien@gmail.com>
      91cb9546
  29. 29 Jul, 2020 1 commit
  30. 20 Jul, 2020 1 commit
    • DataParallel fixes (#5733) · 35cb101e
      Stas Bekman authored
      * DataParallel fixes:
      
      1. switched to a more precise check
      -        if self.args.n_gpu > 1:
      +        if isinstance(model, nn.DataParallel):
      
      2. fix tests - require the same fixup under DataParallel as the training module
      
      * another fix
      35cb101e
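
      A small sketch of why the `isinstance` check above is the more precise one (the model is a stand-in, not code from the commit):
      ```
      import torch
      from torch import nn

      model = nn.Linear(8, 2)  # stand-in for a transformers model
      if torch.cuda.device_count() > 1:
          model = nn.DataParallel(model)

      # The more precise check from the commit: branch on how the model is actually
      # wrapped rather than on an argument such as args.n_gpu.
      if isinstance(model, nn.DataParallel):
          underlying = model.module  # unwrap before e.g. saving or resizing
      else:
          underlying = model
      ```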
  31. 10 Jul, 2020 1 commit
    • Change model outputs types to self-document outputs (#5438) · edfd82f5
      Sylvain Gugger authored
      * [WIP] Proposal for model outputs
      
      * All Bert models
      
      * Make CI green maybe?
      
      * Fix ONNX test
      
      * Isolate ModelOutput from pt and tf
      
      * Formatting
      
      * Add Electra models
      
      * Auto-generate docstrings from outputs
      
      * Add TF outputs
      
      * Add some BERT models
      
      * Revert TF side
      
      * Remove last traces of TF changes
      
      * Fail with a clear error message
      
      * Add Albert and work through Bart
      
      * Add CTRL and DistilBert
      
      * Formatting
      
      * Progress on Bart
      
      * Renames and finish Bart
      
      * Formatting
      
      * Fix last test
      
      * Add DPR
      
      * Finish Electra and add FlauBERT
      
      * Add GPT2
      
      * Add Longformer
      
      * Add MMBT
      
      * Add MobileBert
      
      * Add GPT
      
      * Formatting
      
      * Add Reformer
      
      * Add Roberta
      
      * Add T5
      
      * Add Transformer XL
      
      * Fix test
      
      * Add XLM + fix XLMForTokenClassification
      
      * Style + XLMRoberta
      
      * Add XLNet
      
      * Formatting
      
      * Add doc of return_tuple arg
      edfd82f5
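
      A hedged sketch of what self-documenting outputs buy callers (checkpoint and label are illustrative): named fields plus backward-compatible tuple-style indexing.
      ```
      import torch
      from transformers import AutoModelForSequenceClassification, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
      model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
      inputs = tokenizer("self-documenting outputs", return_tensors="pt")

      outputs = model(**inputs, labels=torch.tensor([1]))
      loss, logits = outputs.loss, outputs.logits  # named fields say what each value is
      assert torch.equal(outputs[1], logits)       # index access keeps tuple compatibility
      ```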
  32. 07 Jul, 2020 1 commit
  33. 01 Jul, 2020 2 commits
  34. 25 Jun, 2020 1 commit
    • [Tokenization] Fix #5181 - make #5155 more explicit - move back the default logging level in tests to WARNING (#5252) · 27cf1d97
      Thomas Wolf authored
      
      * fix-5181
      
      Padding to the max sequence length while truncating to a different length was wrong on slow tokenizers
      
      * clean up and fix #5155
      
      * fix XLM test
      
      * Fix tests for Transfo-XL
      
      * logging only above WARNING in tests
      
      * switch slow tokenizers tests in @slow
      
      * fix Marian truncation tokenization test
      
      * style and quality
      
      * make the test a lot faster by limiting the sequence length used in tests
      27cf1d97
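
      A hedged sketch of the padding/truncation combination the fix targets (checkpoint, texts, and lengths are illustrative, not the failing case from the issue):
      ```
      from transformers import BertTokenizer

      tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

      # Truncate long inputs and pad short ones to the same target length; the bug was
      # in slow tokenizers when padding and truncation lengths were combined like this.
      enc = tokenizer(
          ["a short sentence", "a much longer sentence that will certainly be truncated " * 10],
          padding="max_length",
          truncation=True,
          max_length=16,
      )
      assert all(len(ids) == 16 for ids in enc["input_ids"])
      ```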
  35. 22 Jun, 2020 1 commit
    • Output hidden states (#4978) · f4e1f022
      Joseph Liu authored
      
      
      * Configure all models to use output_hidden_states as argument passed to forward()
      
      * Pass all tests
      
      * Remove cast_bool_to_primitive in TF Flaubert model
      
      * correct tf xlnet
      
      * add pytorch test
      
      * add tf test
      
      * Fix broken tests
      
      * Configure all models to use output_hidden_states as argument passed to forward()
      
      * Pass all tests
      
      * Remove cast_bool_to_primitive in TF Flaubert model
      
      * correct tf xlnet
      
      * add pytorch test
      
      * add tf test
      
      * Fix broken tests
      
      * Refactor output_hidden_states for mobilebert
      
      * Reset and remerge to master
      Co-authored-by: Joseph Liu <joseph.liu@coinflex.com>
      Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
      f4e1f022
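
      A hedged sketch of the per-call flag this PR wires through (checkpoint and input are illustrative):
      ```
      from transformers import BertModel, BertTokenizer

      tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
      model = BertModel.from_pretrained("bert-base-uncased")
      inputs = tokenizer("hidden states, please", return_tensors="pt")

      # After this change the flag can be passed per forward call, not only via the config.
      outputs = model(**inputs, output_hidden_states=True)
      hidden_states = outputs.hidden_states  # tuple: embedding output + one tensor per layer
      print(len(hidden_states), hidden_states[-1].shape)
      ```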
  36. 15 Jun, 2020 1 commit