1. 30 Aug, 2021 1 commit
  2. 27 Aug, 2021 7 commits
  3. 26 Aug, 2021 9 commits
  4. 25 Aug, 2021 2 commits
  5. 24 Aug, 2021 1 commit
  6. 23 Aug, 2021 5 commits
      Make Flax GPT2 working with cross attention (#13008) · 2e20c0f3
      Yih-Dar authored
      
      
      * make flax gpt2 working with cross attention
      
      * Remove encoder->decoder projection layer
      
      * A draft (incomplete) for FlaxEncoderDecoderModel
      
      * Add the method from_encoder_decoder_pretrained + the docstrings
      
      * Fix the mistakes of using EncoderDecoderModel
      
      * Fix style
      
      * Add FlaxEncoderDecoderModel to the library
      
      * Fix cyclic imports
      
      * Add FlaxEncoderDecoderModel to modeling_flax_auto.py
      
      * Remove question comments
      
      * add tests for FlaxEncoderDecoderModel
      
      * add flax_encoder_decoder to the lists of ignored entries in check_repo.py
      
      * fix missing required positional arguments
      
      * Remove **kwargs when creating FlaxEncoderDecoderModel in from_encoder_decoder_pretrained()
      
      Also fix generation eos/pad tokens issue
      
      * Fix: Use sequences from the generated_output
      
      * Change a check from assert to raise ValueError
      
      * Fix examples and token ids issues
      
      * Fix missing all_cross_attentions when outputting tuple in modeling_gpt2
      
      * Remove the changes in configuration docstrings.
      
      * allow for bert 2 gpt2
      
      * make fix-copies
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Change remaining examples to bert2gpt2
      
      * Change the test to Bert2GPT2
      
      * Fix examples
      
      * Fix import
      
      * Fix unpack bug
      
      * Rename to FlaxEncoderDecoderModelTest and change the test to bert2gpt2
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Fix: NotImplentedError -> NotImplementedError
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * up
      
      * finalize
      Co-authored-by: ydshieh <ydshieh@user.noreply>
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Change how "additional_special_tokens" argument in the ".from_pretrained"... · 7223844d
      SaulLu authored
      Change how "additional_special_tokens" argument in the ".from_pretrained" method of the tokenizer is taken into account (#13056)
      
      * add test
      
      * add change in PretrainedTokenizerBase
      
      * change Luke
      
      * deactivate
      
      * add the possibility to add additional special tokens for M2M100
      
      * format
      
      * add special test for canine
      
      * proposed changes for mbart
      
      * proposed changes for mbart50
      
      * proposed changes for byt5
      
      * proposed changes for canine
      
      * proposed changes for t5
      
      * test fast and slow
      
      * remove comment
      
      * remove comment
      
      * add fast version for all tests
      
      * replace break by continue
      
      * add more comments
      
      * add check to avoid duplicates
      
      * remove comment
      
      * format
      
      * proposed change for wav2vec2
      
      * reverse changes mbart
      
      * uncomment
      
      * format
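      The deduplication behavior described in the messages above ("replace break by continue", "add check to avoid duplicates") can be pictured in plain Python. This is a schematic illustration of the idea only, not the actual `PreTrainedTokenizerBase` code, and `merge_additional_special_tokens` is a hypothetical helper name:

```python
def merge_additional_special_tokens(existing, additional):
    """Schematic: append new special tokens, skipping any already present.

    Each duplicate is skipped with `continue`, so later tokens in the
    `additional` list are still considered (a `break` would stop early).
    """
    merged = list(existing)
    for token in additional:
        if token in merged:
            continue  # skip duplicates instead of stopping the loop
        merged.append(token)
    return merged
```

      For example, merging `["<s>", "</s>"]` with `["</s>", "<extra_0>"]` keeps the order of first appearance and adds only the genuinely new token.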
      SageMaker: Fix sagemaker DDP & metric logs (#13181) · f689743e
      Philipp Schmid authored
      
      
      * Barrier -> barrier
      
      * added logger for metrics
      
      * removed stream handler in trainer
      
      * moved handler
      
      * removed streamhandler from trainer
      
      * updated test image and instance type added datasets version to test
      
      * Update tests/sagemaker/scripts/pytorch/requirements.txt
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
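      The handler changes above ("added logger for metrics", "removed stream handler in trainer", "moved handler") follow a standard Python `logging` pattern: give metrics their own logger and attach the stream handler there rather than inside the Trainer. A minimal sketch, with the logger name `sagemaker.metrics` chosen purely for illustration:

```python
import logging
import sys

# Dedicated logger for metric lines; the Trainer itself no longer
# installs a StreamHandler.
metric_logger = logging.getLogger("sagemaker.metrics")
metric_logger.setLevel(logging.INFO)
if not metric_logger.handlers:  # avoid stacking handlers on re-import
    metric_logger.addHandler(logging.StreamHandler(sys.stdout))

metric_logger.info("eval_loss=0.42")
```

      The `if not metric_logger.handlers` guard matters in long-lived processes: without it, re-running setup code attaches a second handler and every metric line is printed twice.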
      Add min and max question length options to TapasTokenizer (#12803) · 8679bd71
      NielsRogge authored
      * Add min and max question length option to the tokenizer
      
      * Add corresponding test
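      The min and max question length options above amount to a length gate on incoming questions. The sketch below is a hypothetical stand-in for the check, not TapasTokenizer's code, and it counts whitespace-separated words purely for illustration (the tokenizer operates on tokens):

```python
def question_within_bounds(question, min_question_length=None, max_question_length=None):
    """Schematic: reject questions outside the configured length bounds."""
    length = len(question.split())  # illustrative; real code counts tokens
    if min_question_length is not None and length < min_question_length:
        return False
    if max_question_length is not None and length > max_question_length:
        return False
    return True
```

      With both bounds left as `None`, every question passes, so the gate is opt-in.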
  7. 19 Aug, 2021 1 commit
  8. 18 Aug, 2021 1 commit
  9. 17 Aug, 2021 1 commit
  10. 13 Aug, 2021 1 commit
      Moving fill-mask pipeline to new testing scheme (#12943) · d58926ab
      Nicolas Patry authored
      * Fill mask pipelines test updates.
      
      * Model eval !!
      
      * Adding slow test with actual values.
      
      * Making all tests pass (skipping quite a bit.)
      
      * Doc styling.
      
      * Better doc cleanup.
      
      * Making an explicit test with no pad token tokenizer.
      
      * Typo.
  11. 12 Aug, 2021 4 commits
  12. 10 Aug, 2021 1 commit
  13. 09 Aug, 2021 1 commit
  14. 06 Aug, 2021 3 commits
  15. 05 Aug, 2021 2 commits