"tests/layoutlm/test_tokenization_layoutlm.py" did not exist on "901507335f6ed59cad6bbbc2b5d8d9eba8a1b4e1"
  1. 07 Jul, 2021 2 commits
    • Adding support for `pipeline("automatic-speech-recognition")`. (#11525) · ebc69afc
      Nicolas Patry authored
      * Adding support for `pipeline("automatic-speech-recognition")`.
      
      - The `"config"` choice for AutoModel is ugly. It would be great to have
      something like `AutoModelFor` that would implement the same logic (load
      the config, check the architectures, and load the first one)
      
      * Remove `model_id`; it was not needed in the end.
      
      * Rebased !
      
      * Remove old code.
      
      * Rename `nlp`.
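      
      As a quick illustration of the new task, a hedged usage sketch (the checkpoint
      name below is an assumption, not part of this commit):
      
      from transformers import pipeline
      
      # "automatic-speech-recognition" is the task string added here; any compatible
      # CTC checkpoint works, "facebook/wav2vec2-base-960h" is just an example.
      asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
      print(asr("sample.flac"))  # accepts a file path or a raw waveform; returns {"text": ...}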
    • [Flax] Add FlaxMBart (#12236) · 61400e1e
      Daniel Stancl authored
      
      
      * Copy BART to MBart and rename some stuff
      
      * Add copy statements pointing to FlaxBart
      
      * Update/add some common files
      
      * Update shift_tokens_right + fix imports
      
      * Fix shift_tokens_right method according to MBart implementation
      
      * Update shift_tokens_right in tests accordingly
      
      * Fix the import issue and update docs file
      * make style quality
      
      * Do some minor changes according to patil-suraj suggestions
      
      * Change the order of normalization layer and attention
      
      * Add some copy statements
      
      * Update generate method and add integration test for mBart
      
      * Make a few updates after a review
      
      Besides, add `lang_code_to_id` to MBartTokenizerFast
      
      * fix-copies; make style quality
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * Apply suggestions from code review
      
      * fix output type, style
      
      * add copied from
      
      * resolve conflicts
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
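      
      For reference, MBart shifts decoder inputs differently from BART: there is no fixed
      decoder start token; instead the last non-pad token (the target language code) is
      wrapped around to position 0. A hedged sketch of that logic, not the PR's exact code:
      
      import jax.numpy as jnp
      
      def shift_tokens_right(input_ids: jnp.ndarray, pad_token_id: int) -> jnp.ndarray:
          # index of the last non-pad token in each row (the language code for MBart)
          index_of_eos = (input_ids != pad_token_id).sum(axis=-1) - 1
          decoder_start_tokens = jnp.take_along_axis(input_ids, index_of_eos[:, None], axis=-1)
          # shift everything one position to the right and place the language code first
          return jnp.concatenate([decoder_start_tokens, input_ids[:, :-1]], axis=-1)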
  2. 06 Jul, 2021 3 commits
  3. 05 Jul, 2021 1 commit
  4. 02 Jul, 2021 1 commit
  5. 01 Jul, 2021 3 commits
  6. 30 Jun, 2021 3 commits
    • [Flax] Add wav2vec2 (#12271) · 0d1f67e6
      Patrick von Platen authored
      
      
      * fix_torch_device_generate_test
      
      * remove @
      
      * start flax wav2vec2
      
      * save intermediate
      
      * forward pass has correct shape
      
      * add weight norm
      
      * add files
      
      * finish ctc
      
      * make style
      
      * finish gumbel quantizer
      
      * correct docstrings
      
      * correct some more files
      
      * fix vit
      
      * finish quality
      
      * correct tests
      
      * correct docstring
      
      * correct tests
      
      * start wav2vec2 pretraining script
      
      * save intermediate
      
      * start pretraining script
      
      * finalize pretraining script
      
      * finish
      
      * finish
      
      * small typo
      
      * finish
      
      * correct
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * make style
      
      * push
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
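      
      A hedged inference sketch with the Flax CTC head added here (the checkpoint name and
      the availability of Flax weights for it are assumptions):
      
      import numpy as np
      import jax.numpy as jnp
      from transformers import Wav2Vec2Processor, FlaxWav2Vec2ForCTC
      
      processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
      model = FlaxWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
      
      raw_speech = np.zeros(16_000, dtype=np.float32)  # placeholder for one second of 16 kHz audio
      inputs = processor(raw_speech, sampling_rate=16_000, return_tensors="np")
      logits = model(inputs["input_values"]).logits
      predicted_ids = jnp.argmax(logits, axis=-1)
      print(processor.batch_decode(np.asarray(predicted_ids)))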
    • Add CANINE (#12024) · 6e685978
      NielsRogge authored
      
      
      * First pass
      
      * More progress
      
      * Add support for local attention
      
      * More improvements
      
      * More improvements
      
      * Conversion script working
      
      * Add CanineTokenizer
      
      * Make style & quality
      
      * First draft of integration test
      
      * Remove decoder test
      
      * Improve tests
      
      * Add documentation
      
      * Mostly docs improvements
      
      * Add CanineTokenizer tests
      
      * Fix most tests on GPU, improve upsampling projection
      
      * Address most comments by @dhgarrette
      
      * Remove decoder logic
      
      * Improve Canine tests, improve docs of CanineConfig
      
      * All tokenizer tests passing
      
      * Make fix-copies and fix tokenizer tests
      
      * Fix test_model_outputs_equivalence test
      
      * Apply suggestions from @sgugger's review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Address some more comments
      
      * Add support for hidden_states and attentions of shallow encoders
      
      * Define custom CanineModelOutputWithPooling, tests pass
      
      * Make conversion script work for Canine-c too
      
      * Fix tokenizer tests
      
      * Remove file
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
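      
      For context, CANINE is tokenization-free: the tokenizer maps each character to its
      Unicode code point, so there is no subword vocabulary. A hedged usage sketch
      (the checkpoint name is assumed to be the released google/canine-s):
      
      from transformers import CanineTokenizer, CanineModel
      
      tokenizer = CanineTokenizer.from_pretrained("google/canine-s")
      model = CanineModel.from_pretrained("google/canine-s")
      
      inputs = tokenizer("hello world", return_tensors="pt")
      outputs = model(**inputs)  # returns the custom CanineModelOutputWithPooling
      print(outputs.last_hidden_state.shape, outputs.pooler_output.shape)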
    • Fix default bool in argparser (#12424) · c9486fd0
      Sylvain Gugger authored
      * Fix default bool in argparser
      
      * Add more to test
  7. 29 Jun, 2021 4 commits
  8. 28 Jun, 2021 1 commit
  9. 25 Jun, 2021 1 commit
  10. 24 Jun, 2021 1 commit
  11. 23 Jun, 2021 6 commits
    • Michael Benayoun
    • Lysandre · 941b4442
    • Clean push to hub API (#12187) · 53c60bab
      Sylvain Gugger authored
      
      
      * Clean push to hub API
      
      * Create working dir if it does not exist
      
      * Different tweak
      
      * New API + all models + test Flax
      
      * Adds the Trainer clean up
      
      * Update src/transformers/file_utils.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Address review comments
      
      * (nit) output types
      
      * No need to set clone_from when folder exists
      
      * Update src/transformers/trainer.py
      Co-authored-by: Julien Chaumond <julien@huggingface.co>
      
      * Add generated_from_trainer tag
      
      * Update to new version
      
      * Fixes
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      Co-authored-by: Julien Chaumond <julien@huggingface.co>
      Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
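      
      A hedged sketch of the cleaned-up API from the caller's side (repo and model names
      are illustrative assumptions; an authenticated Hub login is required):
      
      from transformers import AutoModel, AutoTokenizer
      
      model = AutoModel.from_pretrained("bert-base-cased")
      tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
      
      # pushes the serialized files to a repo under the authenticated user's namespace
      model.push_to_hub("my-finetuned-bert")
      tokenizer.push_to_hub("my-finetuned-bert")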
    • Flax T5 (#12150) · e98233dd
      Vasudev Gupta authored
      
      
      * copy pytorch-t5
      
      * init
      
      * boom boom
      
      * forward pass same
      
      * make generation work
      
      * add more tests
      
      * make test work
      
      * finish normal tests
      
      * make fix-copies
      
      * finish quality
      
      * correct slow example
      
      * correct slow test
      
      * version table
      
      * upload models
      
      * Update tests/test_modeling_flax_t5.py
      
      * correct incorrectly deleted line
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Patrick von Platen <patrick@huggingface.co>
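      
      A hedged generation sketch with the new Flax T5 classes (the checkpoint name and the
      presence of Flax weights for it are assumptions):
      
      import numpy as np
      from transformers import T5Tokenizer, FlaxT5ForConditionalGeneration
      
      tokenizer = T5Tokenizer.from_pretrained("t5-small")
      model = FlaxT5ForConditionalGeneration.from_pretrained("t5-small")
      
      inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="np")
      sequences = model.generate(inputs["input_ids"], max_length=32).sequences
      print(tokenizer.batch_decode(np.asarray(sequences), skip_special_tokens=True))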
    • Add output in a dictionary for TF `generate` method (#12139) · 26a2e365
      Daniel Stancl authored
      * Add output args to greedy search
      
      * Fix critical typo + make style quality
      
      * Handle generate_beam_search
      
      * Add dict_specific tests and fix the placement of encoder outputs
      
      * Add specific outputs
      
      * Update doc
      
      * Fix typo
      
      * Adjust handling encoder_outputs + Fix generating for T5
      
      * Fix generate for RAG
      
      * Fix handling output_attentions when target_mapping is not None
      
      Take care of situations when target_mapping is provided,
      as the attentions then come as 2-tuples
      
      Change from:
      
      if inputs["output_attentions"]:
          attentions = tuple(tf.transpose(t, perm=(2, 3, 0, 1)) for t in attentions)
      
      to:
      
      if inputs["output_attentions"]:
          if inputs["target_mapping"] is not None:
              # when target_mapping is provided, the attentions come as 2-tuples
              attentions = tuple(
                  tuple(tf.transpose(attn_stream, perm=(2, 3, 0, 1)) for attn_stream in t) for t in attentions
              )
          else:
              attentions = tuple(tf.transpose(t, perm=(2, 3, 0, 1)) for t in attentions)
      
      * Rename kwargs to model_kwargs
      
      * make style quality
      
      * Move imports in test_modeling_tf_common.py
      
      Move ModelOutput-related imports in test_modeling_tf_common.py
      into the `is_tf_available():` statement.
      
      * Rewrite nested if-statements
      
      * Fix added tests
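      
      A hedged sketch of the dictionary-style output from the caller's side (the flag names
      mirror the PyTorch generate API and the model choice is an assumption):
      
      from transformers import GPT2Tokenizer, TFGPT2LMHeadModel
      
      tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
      model = TFGPT2LMHeadModel.from_pretrained("gpt2")
      
      input_ids = tokenizer("The quick brown fox", return_tensors="tf").input_ids
      out = model.generate(
          input_ids,
          max_length=20,
          return_dict_in_generate=True,  # return a ModelOutput instead of a bare tensor
          output_scores=True,
          output_attentions=True,
      )
      print(out.sequences.shape, len(out.scores))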
    • Optimizing away the `fill-mask` pipeline. (#12113) · d4be4984
      Nicolas Patry authored
      
      
      * Optimizing away the `fill-mask` pipeline.
      
      - Don't send anything to the tokenizer unless needed. The vocab check is
      much faster
      - Keep BC by sending data to the tokenizer when needed. Users who handle the
      warning messages will see performance benefits again
      - Make `targets` and `top_k` work together better: `top_k` cannot be
      higher than `len(targets)` but can still be smaller
      - Actually simplify the `target_ids` in case of duplicates (it can happen
      because we're parsing raw strings)
      - Removed useless code to fail on empty strings. It only worked if the empty
      string was in first position; moved to ignoring them instead
      - Changed the related tests, as only then would the tests fail correctly
      (with an incorrect value in first position)
      
      * Make tests compatible with 2 different vocabs... (at the price of a
      warning).
      
      Co-authored-by: @EtaoinWu
      
      * ValueError working globally
      
      * Update src/transformers/pipelines/fill_mask.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * `tokenizer.vocab` -> `tokenizer.get_vocab()` for more compatibility +
      fallback.
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
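      
      A hedged example of the `targets` / `top_k` interaction described above (the model
      name is an assumption):
      
      from transformers import pipeline
      
      fill_mask = pipeline("fill-mask", model="distilroberta-base")
      
      # scoring is restricted to the given candidates; `top_k` may not exceed `len(targets)`
      # (a warning is emitted if a target does not map cleanly to a single vocab token)
      print(fill_mask("The capital of France is <mask>.", targets=["Paris", "London"], top_k=2))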
  12. 22 Jun, 2021 3 commits
  13. 21 Jun, 2021 4 commits
  14. 17 Jun, 2021 2 commits
  15. 16 Jun, 2021 2 commits
  16. 15 Jun, 2021 2 commits
  17. 14 Jun, 2021 1 commit