"backend/vscode:/vscode.git/clone" did not exist on "2c030218f8f2e2872cbdaf2e1d70d1dee0bedd53"
  1. 25 Feb, 2020 1 commit
  2. 24 Feb, 2020 4 commits
  3. 22 Feb, 2020 2 commits
  4. 21 Feb, 2020 1 commit
    • Improve special_token_id logic in run_generation.py and add tests (#2885) · fc38d4c8
      Patrick von Platen authored
      
      
      * improving generation
      
      * finalized special token behaviour for no_beam_search generation
      
      * solved modeling_utils merge conflict
      
      * solve merge conflicts in modeling_utils.py
      
      * add run_generation improvements from PR #2749
      
      * adapted language generation to not use a hardcoded -1 when no padding token is available (see the pad-token sketch after this commit)
      
      * remove the -1 stripping, as hard-coded -1s are no longer necessary
      
      * add lightweight language generation testing for randomly initialized models - just checking that no errors are thrown
      
      * add slow language generation tests for pretrained models using hardcoded output with pytorch seed
      
      * delete ipdb
      
      * check that all generated tokens are valid
      
      * renaming
      
      * renaming Generation -> Generate
      
      * make style
      
      * updated so that generate_beam_search has the same token behavior as generate_no_beam_search
      
      * consistent return format for run_generation.py
      
      * deleted pretrained LM generate tests -> will be added in another PR
      
      * cleaning of unused if statements and renaming
      
      * run_generate will always return an iterable
      
      * make style
      
      * consistent renaming
      
      * improve naming, make sure generate function always returns the same tensor, add docstring
      
      * add slow tests for all lmhead models
      
      * make style and improve example comments in modeling_utils
      
      * better naming and refactoring in modeling_utils
      
      * changed the fast random LM generation testing design to a more general one
      
      * delete the old testing design in gpt2
      
      * correct old variable name
      
      * temporary fix for encoder_decoder lm generation tests - has to be updated when t5 is fixed
      
      * adapted all fast random generate tests to new design
      
      * better warning description in modeling_utils
      
      * better comment
      
      * better comment and error message
      Co-authored-by: Thomas Wolf <thomwolf@users.noreply.github.com>
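      A minimal sketch of the pad-token fallback this commit describes. The helper name and logic here are illustrative assumptions, not the library's actual code: when no pad token is set, fall back to the EOS token with a warning instead of relying on a hardcoded -1.

      ```python
      import logging

      logger = logging.getLogger(__name__)

      def resolve_pad_token_id(pad_token_id, eos_token_id):
          """Hypothetical helper: choose a padding id without hardcoding -1."""
          if pad_token_id is None and eos_token_id is not None:
              # Fall back to the EOS token and warn, mirroring the behaviour
              # the commit message above describes.
              logger.warning(
                  "Setting pad_token_id to %d (eos_token_id) to generate sequence",
                  eos_token_id,
              )
              pad_token_id = eos_token_id
          return pad_token_id
      ```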
  5. 20 Feb, 2020 2 commits
  6. 19 Feb, 2020 3 commits
  7. 18 Feb, 2020 1 commit
  8. 13 Feb, 2020 2 commits
    • Preserve spaces in GPT-2 tokenizers (#2778) · f1e8a51f
      Joe Davison authored
      * Preserve spaces in GPT-2 tokenizers
      
      Preserves spaces after special tokens in GPT-2 and inherited (RoBERTa)
      tokenizers, enabling correct BPE encoding. Automatically inserts a space
      in front of the first token in the encode function when adding special
      tokens (see the sketch after this commit).
      
      * Add tokenization preprocessing method
      
      * Add framework argument to pipeline factory
      
      Also fixes a pipeline test issue. Each test input is now treated as a
      distinct sequence.
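      Byte-level BPE is whitespace-sensitive, which is why the space handling above matters. A minimal demonstration, assuming the transformers GPT2Tokenizer is available (the printed ids depend on the vocabulary):

      ```python
      from transformers import GPT2Tokenizer

      tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

      # Byte-level BPE folds a leading space into the token, so "Hello"
      # and " Hello" encode to different ids; dropping the space after a
      # special token therefore changes the encoding of what follows.
      print(tokenizer.encode("Hello"))
      print(tokenizer.encode(" Hello"))
      ```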
    • get_activation('relu') provides a simple mapping from strings i… (#2807) · ef74b0f0
      Sam Shleifer authored
      * activations.py contains a mapping from string to activation function (a minimal sketch follows this commit)
      * resolves some `gelu` vs `gelu_new` ambiguity
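      A minimal sketch of such a string-to-activation mapping; the dict contents here are an illustrative subset, not the full set the module ships:

      ```python
      import torch
      import torch.nn.functional as F

      # Illustrative subset of a string -> activation function mapping.
      ACT2FN = {
          "relu": F.relu,
          "tanh": torch.tanh,
      }

      def get_activation(activation_string):
          if activation_string in ACT2FN:
              return ACT2FN[activation_string]
          raise KeyError(
              f"function {activation_string} not found in ACT2FN mapping "
              f"{list(ACT2FN.keys())}"
          )

      relu = get_activation("relu")  # look an activation up by name
      ```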
  9. 11 Feb, 2020 1 commit
    • BERT decoder: Fix causal mask dtype. · ee5de0ba
      Oleksiy Syvokon authored
      PyTorch < 1.3 requires multiplication operands to be of the same type.
      This was violated when the default attention mask (i.e.,
      attention_mask=None in the arguments) was used with BERT in decoder mode.

      In particular, this was breaking Model2Model and made the tutorial
      from the quickstart fail (a minimal reproduction follows this commit).
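      The dtype mismatch is easy to reproduce in isolation. A minimal sketch (variable names are illustrative): the causal mask is built from an integer tensor, and on PyTorch < 1.3 multiplying it with the float attention mask raises a type error unless it is cast first.

      ```python
      import torch

      seq_len = 4
      # Default mask used when attention_mask=None: all ones, float32.
      attention_mask = torch.ones(1, seq_len)

      # Lower-triangular causal mask built from an integer tensor (int64).
      causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.long))

      # The fix: cast the causal mask to the attention mask's dtype before
      # combining them, so both multiplication operands share a type.
      causal_mask = causal_mask.to(attention_mask.dtype)
      extended_mask = causal_mask[None, :, :] * attention_mask[:, None, :]
      ```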
  10. 07 Feb, 2020 2 commits
  11. 04 Feb, 2020 8 commits
  12. 31 Jan, 2020 1 commit
  13. 30 Jan, 2020 2 commits
    • fill_mask helper (#2576) · 9fa836a7
      Julien Chaumond authored
      * fill_mask helper
      
      * [poc] FillMaskPipeline
      
      * Revert "[poc] FillMaskPipeline"
      
      This reverts commit 67eeea55b0f97b46c2b828de0f4ee97d87338335.
      
      * Revert "fill_mask helper"
      
      This reverts commit cacc17b884e14bb6b07989110ffe884ad9e36eaa.
      
      * README: clarify that Pipelines can also do text-classification
      
      cf. question at the AI&ML meetup last week, @mfuntowicz
      
      * Fix test: test feature-extraction pipeline
      
      * Test tweaks
      
      * Slight refactor of existing pipeline (in preparation of new FillMaskPipeline)
      
      * Extraneous doc
      
      * More robust way of doing this
      
      @mfuntowicz as we don't rely on the model name anymore (see AutoConfig)
      
      * Also add RobertaConfig as a quickfix for wrong token_type_ids
      
      * cs
      
      * [BIG] FillMaskPipeline (usage sketched after this commit)
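      A minimal usage sketch of the resulting pipeline; the model name is an example, and RoBERTa-style models use <mask> as the mask token:

      ```python
      from transformers import pipeline

      fill_mask = pipeline("fill-mask", model="distilroberta-base")

      # Each prediction carries a filled-in sequence and a score.
      for prediction in fill_mask("The goal of life is <mask>."):
          print(prediction)
      ```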
    • Rename test_examples to test_doc_samples · df27648b
      Lysandre authored
  14. 29 Jan, 2020 2 commits
  15. 28 Jan, 2020 1 commit
  16. 27 Jan, 2020 2 commits
  17. 23 Jan, 2020 5 commits