"examples/vscode:/vscode.git/clone" did not exist on "a898fb95bdb8ded526eda6b0affd5802367f967d"
  1. 18 May, 2021 2 commits
    • Fix checkpoint deletion (#11748) · a515caa3
      Sylvain Gugger authored
    • [TokenClassification] Label realignment for subword aggregation (#11680) · b88e0e01
      Nicolas Patry authored
      * [TokenClassification] Label realignment for subword aggregation
      
      Tentative to replace https://github.com/huggingface/transformers/pull/11622/files

      - Added `AggregationStrategy`
      - `ignore_subwords` and `grouped_entities` arguments are now fused
        into `aggregation_strategy`, which makes more sense because
        `ignore_subwords=True` with `grouped_entities=False` had no meaning.
      - Added 2 new ways to aggregate: MAX and AVERAGE.
      - AVERAGE requires a bit more information than the others; for now this
        case is slightly specific, and we should keep that in mind for future
        changes.
      - Testing has been modified to reflect new argument, and to check the
      correct deprecation and the new aggregation_strategy.
      - Put the testing argument and testing results for aggregation_strategy,
      close together, so that readers can understand what is supposed to
      happen.
      - `aggregate` is now only tested on a small model as it does not mean
      anything to test it globally for all models.
      - Previous tests are unchanged in desired output.
      - Added a new test case that showcases better the difference between the
        FIRST, MAX and AVERAGE strategies.
      
      * Wrong framework.
      
      * Addressing three issues.
      
      1- Tags might not follow the B-, I- convention, so any tag should work now
      (assumed to be B-TAG).
      2- Fixed an issue with AVERAGE that led to a substantial code change.
      3- The testing suite was not checking for the "index" key for the "none"
      strategy. This is now fixed.
      
      The issue is that "O" could not be chosen by the AVERAGE strategy because
      those tokens were filtered out beforehand, so their relative scores were
      not counted in the average. Filtering on ignore_labels now happens at the
      very end of the pipeline, fixing that issue.
      It's a bit hard to make sure this stays like that because we do not have
      an end-to-end test for that behavior.
      
      * Formatting.
      
      * Adding formatting to code + cleaner handling of B-, I- tags.
      Co-authored-by: Francesco Rubbo <rubbo.francesco@gmail.com>
      Co-authored-by: elk-cloner <rezakakhki.rk@gmail.com>
      
      * Typo.
      Co-authored-by: Francesco Rubbo <rubbo.francesco@gmail.com>
      Co-authored-by: elk-cloner <rezakakhki.rk@gmail.com>
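The strategies this commit introduces can be sketched in plain Python. This is a toy illustration of how FIRST, MAX, and AVERAGE could combine per-subword label scores into one word-level prediction; the helper name `aggregate_word` and the data layout are hypothetical, not the pipeline's actual code.

```python
# Toy illustration (not the pipeline's implementation) of how the three
# aggregation strategies combine per-subword label scores into one
# word-level prediction. Data layout and helper name are hypothetical.

def aggregate_word(subword_scores, strategy):
    """Pick a (label, score) for a word from its subword score dicts."""
    if strategy == "first":
        # FIRST: use the scores of the first subword only.
        scores = subword_scores[0]
    elif strategy == "max":
        # MAX: use the subword whose best label has the highest score.
        scores = max(subword_scores, key=lambda s: max(s.values()))
    elif strategy == "average":
        # AVERAGE: average each label's score across subwords, then pick.
        scores = {label: sum(s[label] for s in subword_scores) / len(subword_scores)
                  for label in subword_scores[0]}
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    label = max(scores, key=scores.get)
    return label, scores[label]

# Two subwords of one word, scored over two labels:
word = [{"O": 0.4, "B-PER": 0.6}, {"O": 0.9, "B-PER": 0.1}]
print(aggregate_word(word, "first"))    # ('B-PER', 0.6)
print(aggregate_word(word, "max"))      # ('O', 0.9) – second subword wins
print(aggregate_word(word, "average"))  # 'O' wins once scores are averaged
```

In the actual pipeline these correspond to `AggregationStrategy` values passed through the fused `aggregation_strategy` argument.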
  2. 17 May, 2021 1 commit
  3. 14 May, 2021 1 commit
  4. 13 May, 2021 3 commits
    • Volodymyr Byno
    • [T5] Add 3D attention mask to T5 model (2) (#9643) (#11197) · 91cf2915
      lexhuismans authored
      * Add 3D attention mask to T5 model (#9643)
      
      Added code for 3D attention mask in T5 model. Similar to BERT model.
      
      * Add test for 3D attention mask
      
      Added a test for the 3D attention mask: test_decoder_model_past_with_3d_attn_mask().
      The 3D attention mask has shape [batch_size, seq_length, seq_length], for both
      the attention mask and the decoder attention mask. The test is passing.
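The shape handling this commit adds can be sketched with plain lists. A hypothetical `extend_attention_mask` (not the transformers `get_extended_attention_mask` implementation) shows how a 2D `[batch, seq]` or 3D `[batch, seq, seq]` mask maps onto the 4D shape that attention scores use:

```python
# Plain-list sketch (not the transformers implementation) of how a 2D
# [batch, seq] or 3D [batch, seq, seq] attention mask is broadcast to the
# 4D [batch, num_heads, seq, seq] shape that attention scores use.

def extend_attention_mask(mask):
    def ndim(x):
        d = 0
        while isinstance(x, list):
            d, x = d + 1, x[0]
        return d
    if ndim(mask) == 2:   # [batch, seq] -> [batch, 1, 1, seq]
        return [[[row]] for row in mask]
    if ndim(mask) == 3:   # [batch, seq, seq] -> [batch, 1, seq, seq]
        return [[m] for m in mask]
    raise ValueError("attention mask must be 2D or 3D")

# A 3D mask lets every query position have its own set of visible keys:
mask_3d = [[[1, 0, 0], [1, 1, 0], [1, 1, 1]]]
print(extend_attention_mask(mask_3d))      # [[[[1, 0, 0], [1, 1, 0], [1, 1, 1]]]]
print(extend_attention_mask([[1, 1, 0]]))  # [[[[1, 1, 0]]]]
```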
    • Enable option for subword regularization in more tokenizers. (#11417) · 37ed3ab7
      Philip May authored
      * improve slow class tok usage at xlm rob
      
      * add subword regularization for barthez
      
      * improve barthez tok. test
      
      * fix tokenizer tests
      
      * add subword regularization for camembert
      
      * add subword regularization for deberta v2 tokenizer
      
      * add more doc to deberta v2 tokenizer
      
      * add subword regularization for speech to text tok.
      
      * fix sp_model_kwargs type in speech 2 text tok.
      
      * add subword regularization for M2M100 tok.
      
      * add more concrete type hints
      
      * fix tests for m2m100 and s2t tok.
      
      * add missing Any import
      
      * fix syntax error in m2m100 tok.
      
      * fix unpickle of m2m100 and s2t tok.
      
      * fix test of m2m100 and s2t tok.
      
      * improve unpickle of deberta v2 tok.
      
      * add test for pickle of barthez & camembert
      
      * fix pickle of barthez & camembert
      
      * add test for deberta v2 tok. pickle
      
      * fix m2m100 tok. pickle
      
      * fix s2t tok. pickle
      
      * add subword regularization to albert tok.
      
      * refactor subword reg. test into TokenizerTesterMixin
      
      improve albert tok. test
      
      remove sample argument from albert tok.
      
      check subword reg. using TokenizerTesterMixin
      
      improve tok. tests
      
      improve xlm roberta tok. tests
      
      improve xlm roberta tok. tests
      
      * add subword regularization for big bird t.
      
      * improve xlm roberta tok. test
      
      * add subword regularization for mbart50 tok.
      
      * add subword regularization for pegasus tok.
      
      * add subword regularization for reformer tok.
      
      * add subword regularization for T5 tok.
      
      * fix t5 tok. test formatting
      
      * add subword regularization for xlm_proph. tok.
      
      * add subword regularization for xlnet tok.
      
      * add subword regularization for bert_gen tok.
      
      * add typing to tokenizers
      
      * add typing to xlm rob. tok
      
      * add subword regularization for marian tok.
      
      * add reverse tok. test
      
      * fix marian tok test
      
      * fix marian tok test
      
      * fix casing in tok. tests
      
      * fix style of tok. common test
      
      * fix deberta v2 tok test
      
      * add type annotations to tok. tests
      
      * add type annotations to tok. __init__
      
      * add typing to tokenizer
      
      * add type annotations to tok. __init__
      
      * don't specify the default when it's None
      
      * fix barthez tok. doc
      
      * move sentencepiece tok. tests to TokenizerTesterMixin
      
      * fix unused imports
      
      * fix albert tok. test
      
      * add comment to sentencepiece test options
      
      * fix Any import at big bird tok.
      
      * fix Any import at xlm prophetnet tok.
      
      * empty commit to trigger CI
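The subword regularization being wired up across these tokenizers can be illustrated with a toy sampler (not SentencePiece itself): instead of one canonical segmentation, training-time tokenization samples among the valid splits of a word. The vocabulary and helper names below are made up for illustration.

```python
import random

# Toy illustration (not SentencePiece) of subword regularization: sample
# among the valid segmentations of a word instead of always emitting the
# single canonical split, so the model sees varied subword sequences.

def segmentations(word, vocab):
    """All ways to split `word` into pieces from `vocab`."""
    if not word:
        return [[]]
    out = []
    for i in range(1, len(word) + 1):
        piece = word[:i]
        if piece in vocab:
            out += [[piece] + rest for rest in segmentations(word[i:], vocab)]
    return out

def sample_segmentation(word, vocab, rng):
    return rng.choice(segmentations(word, vocab))

vocab = {"u", "n", "f", "un", "fun", "unfun"}
rng = random.Random(0)
seen = {tuple(sample_segmentation("unfun", vocab, rng)) for _ in range(50)}
print(seen)  # several distinct segmentations, e.g. ('un', 'fun') and ('unfun',)
```

In transformers, this PR exposes the behavior through the `sp_model_kwargs` argument of the slow sentencepiece-backed tokenizers, e.g. `sp_model_kwargs={"enable_sampling": True, "nbest_size": -1, "alpha": 0.1}`.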
  5. 12 May, 2021 2 commits
    • Vit deit fixes (#11309) · fa84540e
      NielsRogge authored

      * Improve docs of DeiT and ViT, add community notebook
      
      * Add gitignore for test_samples
      
      * Add notebook with Trainer
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
    • CLIP (#11445) · 8719afa1
      Suraj Patil authored

      * begin second draft
      
      * fix import, style
      
      * add loss
      
      * fix embeds, logits_scale, and projection
      
      * fix imports
      
      * add conversion script
      
      * add feature_extractor and processor
      
      * style
      
      * add tests for tokenizer, extractor and processor
      
      * add vision model tests
      
      * add weight init
      
      * add more tests
      
      * fix save_load  test
      
      * model output, docstrings, causal mask
      
      * config doc
      
      * add clip model tests
      
      * return dict
      
      * begin integration test
      
      * add integration tests
      
      * fix-copies
      
      * fix init
      
      * Clip => CLIP
      
      * fix module name
      
      * docs
      
      * fix doc
      
      * output_dim => projection_dim
      
      * fix checkpoint names
      
      * remove fast tokenizer file
      
      * fix conversion script
      
      * fix tests, quality
      
      * put causal mask on device
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * fix attribute test
      
      * style
      
      * address sylvains comments
      
      * style
      
      * fix docstrings
      
      * add quick_gelu in activations, docstrings
      
      * clean-up attention test
      
      * fix act fun
      
      * fix config
      
      * fix torchscript tests
      
      * even batch_size
      
      * remove comment
      
      * fix output to_tuple
      
      * fix save load tests
      
      * fix add tokens test
      
      * add fast tokenizer
      
      * update copyright
      
      * new processor API
      
      * fix docs
      
      * docstrings
      
      * docs
      
      * fix doc
      
      * fix doc
      
      * fix tokenizer
      
      * fix import in doc example
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * check types of config
      
      * valhalla => openai
      
      * load image using url
      
      * fix test
      
      * typo
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
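The "embeds, logits_scale, and projection" items above boil down to scaled cosine similarity between image and text embeddings. A minimal sketch, assuming already-projected embeddings; the helper names are hypothetical, not the CLIP modeling code:

```python
import math

# Toy sketch of CLIP's similarity computation: score each image/text pair
# by the scaled cosine similarity of L2-normalized projected embeddings.

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def clip_logits(image_embeds, text_embeds, logit_scale):
    image_embeds = [normalize(v) for v in image_embeds]
    text_embeds = [normalize(v) for v in text_embeds]
    # logits_per_image[i][j] = logit_scale * <image_i, text_j>
    return [[logit_scale * sum(a * b for a, b in zip(img, txt))
             for txt in text_embeds] for img in image_embeds]

logits = clip_logits([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], logit_scale=100.0)
print(logits)  # [[100.0, 0.0]] – the matching text gets the high score
```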
  6. 11 May, 2021 2 commits
  7. 10 May, 2021 2 commits
    • Fixes NoneType exception when topk is larger than one coupled with a small context in the Question-Answering pipeline (#11628) · 9120ae7d
      Pavel Soriano authored
      
      * added fix to decode function. added test to qa pipeline tests
      
      * completed topk docstring
      
      * fixed formatting with black
      
      * applied style_doc to fix line length
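The fix can be sketched as follows: a toy version of the span-selection step (a hypothetical helper, not the pipeline's actual `decode`) that caps `topk` at the number of valid candidates instead of failing when a short context provides fewer spans than requested:

```python
# Toy sketch of the span-selection step the fix touches: with a short
# context there can be fewer than `topk` valid (start, end) candidates,
# and decoding must cap the result instead of indexing past it.

def topk_spans(start_scores, end_scores, topk, max_answer_len):
    candidates = []
    for s, s_score in enumerate(start_scores):
        for e, e_score in enumerate(end_scores):
            if s <= e < s + max_answer_len:
                candidates.append((s_score * e_score, s, e))
    candidates.sort(reverse=True)
    # Cap at the number of valid candidates rather than failing when
    # topk exceeds what a small context can provide.
    return candidates[: min(topk, len(candidates))]

spans = topk_spans([0.9, 0.1], [0.2, 0.8], topk=5, max_answer_len=2)
print(len(spans))  # 3 valid spans even though topk=5
```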
    • Big Bird Fast Tokenizer implementation (#11075) · f7f87295
      Tanmay Laud authored

      * Added Big Bird Fast Tokenizer initial file
      
      * style fixes
      
      * flake fixes
      
      * Added big bird fast tokenizer to init files
      
      * Added big bird fast to Auto tokenization
      
      * fix styles
      
      * minor quality fixes
      
      * Added initial test code
      
      * Fix SpmConverter when precompiled_charsmap doesn't exist
      
      * fixed post processor
      
      * minor style fix
      
      * minor fix input names
      
      * Actually fix identity normalization
      
      * style
      
      * Added token type ids to fast tokenizer
      
      * style
      
      * flake fix
      
      * fix copies
      Co-authored-by: Anthony MOI <m.anthony.moi@gmail.com>
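The "token type ids" item can be sketched with the BERT-style sentence-pair layout; this toy helper (not the actual tokenizers post-processor) shows the convention, assuming `[CLS]`/`[SEP]` special tokens:

```python
# Toy sketch of the post-processing convention: a sentence pair is laid
# out as [CLS] A [SEP] B [SEP], with token type ids 0 for the first
# segment (including [CLS] and its [SEP]) and 1 for the second.

def build_pair(tokens_a, tokens_b):
    ids = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
    type_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    return ids, type_ids

ids, type_ids = build_pair(["big", "bird"], ["fast", "tok"])
print(ids)       # ['[CLS]', 'big', 'bird', '[SEP]', 'fast', 'tok', '[SEP]']
print(type_ids)  # [0, 0, 0, 0, 1, 1, 1]
```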
  8. 07 May, 2021 2 commits
  9. 06 May, 2021 1 commit
  10. 05 May, 2021 2 commits
  11. 04 May, 2021 5 commits
  12. 03 May, 2021 2 commits
    • [Wav2vec2] Fixed tokenization mistakes while adding single-char tokens to tokenizer (#11538) · a721a5ee
      Muktan authored

      * Fixed tokenization mistakes while adding single-char tokens to tokenizer
      
      * Added tests and Removed unnecessary comments.
      
      * finalize wav2vec2 tok
      
      * add more aggressive tests
      
      * Apply suggestions from code review
      
      * fix useless import
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
    • Add LUKE (#11223) · f3cf8ae7
      NielsRogge authored

      * Rebase with master
      
      * Minor bug fix in docs
      
      * Copy files from adding_luke_v2 and improve docs
      
      * change the default value of use_entity_aware_attention to True
      
      * remove word_hidden_states
      
      * fix head models
      
      * fix tests
      
      * fix the conversion script
      
      * add integration tests for the pretrained large model
      
      * improve docstring
      
      * Improve docs, make style
      
      * fix _init_weights for pytorch 1.8
      
      * improve docs
      
      * fix tokenizer to construct entity sequence with [MASK] entity when entities=None
      
      * Make fix-copies
      
      * Make style & quality
      
      * Bug fixes
      
      * Add LukeTokenizer to init
      
      * Address most comments by @patil-suraj and @LysandreJik
      
      * rename _compute_extended_attention_mask to get_extended_attention_mask
      
      * add comments to LukeSelfAttention
      
      * fix the documentation of the tokenizer
      
      * address comments by @patil-suraj, @LysandreJik, and @sgugger
      
      * improve docs
      
      * Make style, quality and fix-copies
      
      * Improve docs
      
      * fix docs
      
      * add "entity_span_classification" task
      
      * update example code for LukeForEntitySpanClassification
      
      * improve docs
      
      * improve docs
      
      * improve the code example in luke.rst
      
      * rename the classification layer in LukeForEntityClassification from typing to classifier
      
      * add bias to the classifier in LukeForEntitySpanClassification
      
      * update docs to use fine-tuned hub models in code examples of the head models
      
      * update the example sentences
      
      * Make style & quality
      
      * Add require_torch to tokenizer tests
      
      * Add require_torch to tokenizer tests
      
      * Address comments by @sgugger and add community notebooks
      
      * Make fix-copies
      Co-authored-by: Ikuya Yamada <ikuya@ikuya.net>
  13. 30 Apr, 2021 5 commits
    • [DeepSpeed] fp32 support (#11499) · 4e7bf94e
      Stas Bekman authored
      * prep for deepspeed==0.3.16
      
      * new version
      
      * too soon
      
      * support and test fp32 mode
      
      * troubleshooting doc start
      
      * workaround no longer needed
      
      * add fp32 doc
      
      * style
      
      * cleanup, add tf32 note
      
      * clarify
      
      * release was made
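"fp32 mode" in the DeepSpeed integration amounts to running without mixed precision, i.e. leaving fp16 disabled in the `ds_config`. A minimal illustrative fragment (a sketch, not taken from the PR's docs):

```python
# Illustrative sketch: fp32 mode just means the ds_config does not enable
# fp16 mixed precision; everything else (e.g. ZeRO) works as usual.
ds_config = {
    "fp16": {"enabled": False},        # fp32: no mixed precision
    "zero_optimization": {"stage": 2},
}
print(ds_config["fp16"]["enabled"])  # False
```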
    • Takuya Makino · c2cd02ac
    • Shubham Sanghavi · 30ede899
    • Adding `AutomaticSpeechRecognitionPipeline`. (#11337) · db9dd09c
      Nicolas Patry authored

      * Adding `AutomaticSpeechRecognitionPipeline`.
      
      - Because we added everything to enable this pipeline, we probably
      should add it to `transformers`.
      - This PR tries to limit the scope and focuses only on the pipeline part
      (what should go in, and out).
      - The tests are very specific for S2T and Wav2vec2 to make sure both
      architectures are supported by the pipeline. We don't use the mixin for
      tests right now, because that requires more work in the `pipeline`
      function (will be done in a follow up PR).
      - Unsure about the "helper" function `ffmpeg_read`. It makes a lot of
        sense from a user perspective and does not add any hard dependency
        (users can always use their own load mechanism). Meanwhile, it feels
        slightly clunky to have so much optional preprocessing.
      - The pipeline does not support streaming audio right now.
      
      Future work:
      
      - Add `automatic-speech-recognition` as a `task`. And add the
      FeatureExtractor.from_pretrained within `pipeline` function.
      - Add small models within tests
      - Add the Mixin to tests.
      - Make the logic between ForCTC vs ForConditionalGeneration better.
      
      * Update tests/test_pipelines_automatic_speech_recognition.py
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Adding docs + main import + type checking + LICENSE.
      
      * Doc style !.
      
      * Fixing TYPE_HINT.
      
      * Specifying waveform shape in the docs.
      
      * Adding asserts + specify in the documentation the shape of the input
      np.ndarray.
      
      * Update src/transformers/pipelines/automatic_speech_recognition.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Adding require to tests + move the `feature_extractor` doc.
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
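The ForCTC side of the "ForCTC vs ForConditionalGeneration" logic mentioned in the future-work list can be sketched in a few lines; a toy greedy CTC decode (not the pipeline's code), whereas ForConditionalGeneration models would instead decode autoregressively:

```python
# Toy sketch of greedy CTC decoding, as a Wav2Vec2-style (ForCTC) head
# implies: take the argmax label per frame, collapse repeats, drop blanks.

def ctc_greedy_decode(frame_ids, blank=0):
    out, prev = [], None
    for i in frame_ids:
        if i != prev and i != blank:
            out.append(i)
        prev = i
    return out

# frames: blank, 7, 7 (repeat), blank, 8
print(ctc_greedy_decode([0, 7, 7, 0, 8]))  # [7, 8]
```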
    • add sp_model_kwargs to unpickle of xlm roberta tok (#11430) · e0db8276
      Philip May authored
      add test for pickle
      
      simplify test
      
      fix test code style
      
      add missing pickle import
      
      fix test
      
      fix test
      
      fix test
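The unpickling pattern behind this fix can be sketched with a toy class (not the actual tokenizer): drop the unpicklable native processor in `__getstate__`, then rebuild it in `__setstate__` using the preserved `sp_model_kwargs`:

```python
import pickle

# Toy sketch of the fix: a tokenizer that wraps an unpicklable native
# object (the sentencepiece processor) drops it when pickling and rebuilds
# it when unpickling -- keeping sp_model_kwargs intact across the trip.

class ToyTokenizer:
    def __init__(self, sp_model_kwargs=None):
        self.sp_model_kwargs = sp_model_kwargs or {}
        self.sp_model = object()      # stands in for the native processor

    def __getstate__(self):
        state = self.__dict__.copy()
        state["sp_model"] = None      # native handle cannot be pickled
        return state

    def __setstate__(self, state):
        self.__dict__ = state
        # Rebuild the processor using the preserved kwargs.
        self.sp_model = object()

tok = ToyTokenizer(sp_model_kwargs={"enable_sampling": True, "alpha": 0.1})
tok2 = pickle.loads(pickle.dumps(tok))
print(tok2.sp_model_kwargs)  # {'enable_sampling': True, 'alpha': 0.1}
```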
  14. 29 Apr, 2021 1 commit
  15. 26 Apr, 2021 6 commits
  16. 25 Apr, 2021 2 commits
    • EncoderDecoderConfigs should not create new objects (#11300) · 35cd8eed
      cronoik authored

      * removes the creation of separate config objects and uses the existing ones
      instead; also overrides resize_token_embeddings from the parent class because
      it does not work for the EncoderDecoderModel
      
      * rollback to current version of the huggingface master branch
      
      * reworked version that ties the encoder and decoder config of the parent encoderdecoder instance
      
      * overwrite of resize_token_embeddings throws an error now
      
      * review comment suggestion
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      
      * implemented a warning in case an EncoderDecoderModel is created with configs
      where the EncoderDecoderConfig differs from the decoder config or encoder config
      
      * added test to avoid diverging configs of wrapper class and wrapped classes
      
      * Update src/transformers/models/encoder_decoder/modeling_encoder_decoder.py
      
      * make style
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
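The tying described above can be sketched with toy classes (not the real configuration classes): the combined config holds references to the encoder and decoder configs rather than copies, so later edits cannot diverge silently:

```python
# Toy sketch of the tying change: instead of deep-copying encoder/decoder
# configs into the combined config (where later edits diverge silently),
# the combined config keeps references to the same objects.

class ToyConfig:
    def __init__(self, vocab_size):
        self.vocab_size = vocab_size

class ToyEncoderDecoderConfig:
    def __init__(self, encoder, decoder):
        self.encoder = encoder   # shared reference, not a copy
        self.decoder = decoder

enc, dec = ToyConfig(100), ToyConfig(100)
combined = ToyEncoderDecoderConfig(enc, dec)
dec.vocab_size = 200                # e.g. after resizing token embeddings
print(combined.decoder.vocab_size)  # 200 – stays in sync
```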
    • Add head_mask, decoder_head_mask, cross_head_mask to ProphetNet (#9964) · f45cb66b
      Daniel Stancl authored
      * Add head_mask & decoder_head_mask + some corrections
      
      * Fix head masking for N-grams
      
      * Enable test_headmasking for encoder and decoder
      
      * Fix one typo in modeling_prophetnet.py
      
      * Enable test_headmasking for ProphetNetStandaloneDecoderModelTest
      and ProphetNetStandaloneEncoderModelTest in test_modeling_prophetnet.py
      
      * make style
      
      * Fix cross_head_mask
      
      * Fix attention head mask naming
      
      * `cross_head_mask` -> `cross_attn_head_mask`
      
      * `cross_layer_head_mask` -> `cross_attn_layer_head_mask`
      
      * Still need to merge #10605 to master to pass the tests
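The effect of a per-layer head mask can be sketched in plain Python (a toy helper, not the ProphetNet code): after softmax, each head's attention probabilities are scaled by its mask entry, so 0.0 prunes the head and 1.0 leaves it untouched:

```python
# Toy sketch of applying a layer head mask to attention probabilities.
# attn_probs: [num_heads][seq][seq]; layer_head_mask: [num_heads]

def apply_head_mask(attn_probs, layer_head_mask):
    return [[[p * m for p in row] for row in head]
            for head, m in zip(attn_probs, layer_head_mask)]

probs = [[[0.5, 0.5]], [[0.9, 0.1]]]   # two heads, one query row each
masked = apply_head_mask(probs, [1.0, 0.0])
print(masked)  # [[[0.5, 0.5]], [[0.0, 0.0]]] – second head is pruned
```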
  17. 23 Apr, 2021 1 commit