1. 15 Mar, 2021 6 commits
  2. 12 Mar, 2021 7 commits
    • AdamW is now supported by default (#9624) · 4c32f9f2
      Stas Bekman authored
    • Pass encoder outputs into GenerationMixin (#10599) · fa35cda9
      ymfa authored
      * Pass encoder_outputs into generate()
      
      * Remove an if-statement
      
      * Reformat
      
      * Minimize changes to generate()
      
      * Comment on input_ids
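The point of passing `encoder_outputs` into `generate()` is that the caller can run the (expensive) encoder once and reuse the result across calls. A minimal sketch of that idea, using toy stand-ins (`ToyEncoder` and this `generate` are illustrative assumptions, not the transformers API):

```python
class ToyEncoder:
    def __init__(self):
        self.calls = 0  # count how often the expensive encoder runs

    def __call__(self, input_ids):
        self.calls += 1
        return [x * 2 for x in input_ids]  # stand-in for hidden states

def generate(input_ids, encoder, encoder_outputs=None):
    # If the caller already encoded the input, reuse it instead of
    # running the encoder again.
    if encoder_outputs is None:
        encoder_outputs = encoder(input_ids)
    return sum(encoder_outputs)  # stand-in for decoding

encoder = ToyEncoder()
enc_out = encoder([1, 2, 3])  # encode once
a = generate([1, 2, 3], encoder, encoder_outputs=enc_out)
b = generate([1, 2, 3], encoder, encoder_outputs=enc_out)
print(encoder.calls)  # the encoder ran only once for both generations
```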
    • fix: #10628 expanduser path in TrainingArguments (#10660) · 00cad2e5
      PaulLerner authored
      * fix: #10628 expanduser path in TrainingArguments
      
      * docs: explain why we expand paths in TrainingArguments
      
      * Style
      Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
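The fix expands `~` in paths passed to TrainingArguments, so `output_dir="~/runs/bert"` resolves to the user's home directory. A minimal sketch of the idea with a toy dataclass (only the field name `output_dir` matches the real TrainingArguments):

```python
import os
from dataclasses import dataclass

@dataclass
class ToyTrainingArguments:
    # Toy stand-in for TrainingArguments: expand "~" so a path like
    # "~/runs/bert" works regardless of the current working directory.
    output_dir: str

    def __post_init__(self):
        self.output_dir = os.path.expanduser(self.output_dir)

args = ToyTrainingArguments(output_dir="~/runs/bert")
print(args.output_dir)  # "~" replaced by the user's home directory
```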
    • Add auto_wrap option in fairscale integration (#10673) · e8246f78
      Sylvain Gugger authored
      * Add auto_wrap option in fairscale integration
      
      * Style
    • TensorFlow tests: having from_pt set to True requires torch to be installed. (#10664) · 184ef8ec
      Lysandre Debut authored
      * TF model exists for Blenderbot 400M
      
      * Marian
      
      * RAG
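Loading a TF model with `from_pt=True` converts a PyTorch checkpoint, so torch must be importable. A sketch of that guard using only the standard library (`is_torch_available` and `load_tf_model` are hypothetical names for illustration):

```python
import importlib.util

def is_torch_available() -> bool:
    # True if "torch" can be imported in this environment.
    return importlib.util.find_spec("torch") is not None

def load_tf_model(checkpoint: str, from_pt: bool = False):
    # Converting PyTorch weights into a TF model needs torch installed;
    # fail early with a clear message otherwise.
    if from_pt and not is_torch_available():
        raise RuntimeError("from_pt=True requires PyTorch to be installed")
    return f"loaded {checkpoint} (from_pt={from_pt})"

print(load_tf_model("blenderbot-400M"))
```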
    • Adding new parameter to `generate`: `max_time`. (#9846) · 543d0549
      Nicolas Patry authored
      * [WIP] Adding new parameter to `generate`: `max_time`.
      
      Generating by token count is sometimes clunky because we don't
      know how many tokens are enough, or even how many tokens are in
      the payload (for pipelines users, for instance). This leads to
      hard-to-understand behavior.
      
      This PR proposes a new argument `max_time`, a float giving the
      number of seconds `generate` is allowed to run. Ideally, a
      combination such as `max_tokens=None`, `max_time=2` could be used
      to generate as many tokens as possible within the time budget.
      
      NB: Another possible approach consists of passing a callback to
        `generate`, putting the caller in charge of deciding when to
        stop generating tokens. That opens the question of which
        arguments to pass to the callback, and it's hard to imagine
        use-cases for this early-stopping behavior other than time
        (that aren't already covered by `generate`'s parameters).
      
      * Revamp with StoppingCriteria
      
      * Removing deprecated mentions.
      
      * Forgot arguments to stopping criteria.
      
      * Re-adding max_length; it's not just used as a stopping criterion.
      
      * Default value for `stopping_criteria`.
      
      * Address @patrickvonplaten comments.
      
      - More docstrings
      - Actual doc
      - Include in global namespace
      - Remove TF work.
      
      * Put back `max_length` (deprecation different PR).
      
      * Doc quality.
      
      * Fixing old behavior without `stopping_criteria` but with `max_length`.
      
      Making sure we don't break that in the future.
      
      * Adding more tests for possible inconsistencies between
      
      `max_length` and `stopping_criteria`.
      
      * Fixing the torch imports.
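The PR implements the time budget as a stopping criterion checked every decoding step. A self-contained sketch of that mechanism (the class name mirrors the PR's approach, but this loop and its signature are toy assumptions, not the transformers API):

```python
import time

class MaxTimeCriteria:
    # Stop generation once the wall-clock budget is spent (a sketch of
    # the stopping-criteria approach; the real criterion also receives
    # the generated ids and scores).
    def __init__(self, max_time: float):
        self.max_time = max_time
        self.start = time.monotonic()

    def __call__(self) -> bool:
        return time.monotonic() - self.start >= self.max_time

def generate(criteria, step_cost=0.01, hard_cap=10_000):
    tokens = []
    for _ in range(hard_cap):
        if criteria():           # check the time budget every step
            break
        time.sleep(step_cost)    # stand-in for one decoding step
        tokens.append("tok")
    return tokens

tokens = generate(MaxTimeCriteria(max_time=0.1))
print(len(tokens))  # however many steps fit in ~0.1 s
```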
    • Adjust loss difference (#10669) · ea46e3fa
      Lysandre Debut authored
  3. 11 Mar, 2021 17 commits
  4. 10 Mar, 2021 8 commits
    • Sylvain Gugger authored · 26a33cfd
    • Extend trainer logging for sm (#10633) · 49c61a4a
      Philipp Schmid authored
      * renamed logging to hf_logging
      
      * changed logging from hf_logging to logging and logging to native_logging
      
      * removed everything trying to fix import Trainer error
      
      * adding imports again
      
      * added custom add_handler function to logging.py
      
      * make style
      
      * added remove_handler
      
      * added another conditional to assert
    • Fix GPU tests with speech · 1aa9c13f
      Sylvain Gugger authored
    • Copy tokenizer files into each of their repos (#10624) · 2295d783
      Sylvain Gugger authored
      * Move tokenizer files in each repo
      
      * Fix mBART50 tests
      
      * Fix mBART tests
      
      * Fix Marian tests
      
      * Update templates
    • Speech2TextTransformer (#10175) · d26b37e7
      Suraj Patil authored
      * s2t
      
      * fix config
      
      * conversion script
      
      * fix import
      
      * add tokenizer
      
      * fix tok init
      
      * fix tokenizer
      
      * first version working
      
      * fix embeds
      
      * fix lm head
      
      * remove extra heads
      
      * fix convert script
      
      * handle encoder attn mask
      
      * style
      
      * better enc attn mask
      
      * override _prepare_attention_mask_for_generation
      
      * handle attn_maks in encoder and decoder
      
      * input_ids => input_features
      
      * enable use_cache
      
      * remove old code
      
      * expand embeddings if needed
      
      * remove logits bias
      
      * masked_lm_loss => loss
      
      * hack tokenizer to support feature processing
      
      * fix model_input_names
      
      * style
      
      * fix error message
      
      * doc
      
      * remove inputs_embeds
      
      * remove input_embeds
      
      * remove unnecessary docstring
      
      * quality
      
      * SpeechToText => Speech2Text
      
      * style
      
      * remove shared_embeds
      
      * subsample => conv
      
      * remove Speech2TextTransformerDecoderWrapper
      
      * update output_lengths formula
      
      * fix table
      
      * remove max_position_embeddings
      
      * update conversion scripts
      
      * add possibility to do upper case for now
      
      * add FeatureExtractor and Processor
      
      * add tests for extractor
      
      * require_torch_audio => require_torchaudio
      
      * add processor test
      
      * update import
      
      * remove classification head
      
      * attention mask is now 1D
      
      * update docstrings
      
      * attention mask should be of type long
      
      * handle attention mask from generate
      
      * always return attention_mask
      
      * fix test
      
      * style
      
      * doc
      
      * Speech2TextTransformer => Speech2Text
      
      * Speech2TextTransformerConfig => Speech2TextConfig
      
      * remove dummy_inputs
      
      * nit
      
      * style
      
      * multilingual tokenizer
      
      * fix tokenizer
      
      * add tgt_lang setter
      
      * save lang_codes
      
      * fix tokenizer
      
      * add forced_bos_token_id to tokenizer
      
      * apply review suggestions
      
      * add torchaudio to extra deps
      
      * add speech deps to CI
      
      * fix dep
      
      * add libsndfile to ci
      
      * libsndfile1
      
      * add speech to extras all
      
      * libsndfile1 -> libsndfile1
      
      * libsndfile
      
      * libsndfile1-dev
      
      * apt update
      
      * add sudo to install
      
      * update deps table
      
      * install libsndfile1-dev on CI
      
      * tuple to list
      
      * init conv layer
      
      * add model tests
      
      * quality
      
      * add integration tests
      
      * skip_special_tokens
      
      * add speech_to_text_transformer in toctree
      
      * fix tokenizer
      
      * fix fp16 tests
      
      * add tokenizer tests
      
      * fix copyright
      
      * input_values => input_features
      
      * doc
      
      * add model in readme
      
      * doc
      
      * change checkpoint names
      
      * fix copyright
      
      * fix code example
      
      * add max_model_input_sizes in tokenizer
      
      * fix integration tests
      
      * add do_lower_case to tokenizer
      
      * remove clamp trick
      
      * fix "Add modeling imports here"
      
      * fix copyrights
      
      * fix tests
      
      * SpeechToTextTransformer => SpeechToText
      
      * fix naming
      
      * fix table formatting
      
      * fix typo
      
      * style
      
      * fix typos
      
      * remove speech dep from extras[testing]
      
      * fix copies
      
      * rename doc file
      
      * put imports under is_torch_available
      
      * run feat extract tests when torch is available
      
      * dummy objects for processor and extractor
      
      * fix imports in tests
      
      * fix import in modeling test
      
      * fix imports
      
      * fix torch import
      
      * fix imports again
      
      * fix positional embeddings
      
      * fix typo in import
      
      * adapt new extractor refactor
      
      * style
      
      * fix torchscript test
      
      * doc
      
      * doc
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * fix docs, copied from, style
      
      * fix docstring
      
      * handle imports
      
      * remove speech from all extra deps
      
      * remove s2t from seq2seq lm mapping
      
      * better names
      
      * skip training tests
      
      * add install instructions
      
      * List => Tuple
      
      * doc
      
      * fix conversion script
      
      * fix urls
      
      * add instruction for libsndfile
      
      * fix fp16 test
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
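Several entries above ("update output_lengths formula", "attention mask is now 1D") concern the fact that Speech2Text subsamples audio frames with strided convolutions, so the attention mask must be shrunk to the conv output length. A sketch of the standard per-layer formula (two layers of stride 2 is an assumption for illustration):

```python
def conv_output_length(input_length: int, num_layers: int = 2, stride: int = 2) -> int:
    # Each strided conv layer roughly halves the sequence length:
    # L_out = floor((L_in - 1) / stride) + 1, applied once per layer.
    length = input_length
    for _ in range(num_layers):
        length = (length - 1) // stride + 1
    return length

print(conv_output_length(584))  # 584 frames -> 146 after two stride-2 convs
```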
    • Add new GLUE example with no Trainer. (#10555) · efb5c0a4
      Sylvain Gugger authored
      * Add new GLUE example with no Trainer.
      
      * Style
      
      * Address review comments
    • remove final_logits_bias (#10606) · 44f64132
      Suraj Patil authored
    • Fixes an issue in `text-classification` where MNLI eval/test datasets are not being preprocessed. (#10621) · 6f52fce6
      Allen Wang authored
      
      * Fix MNLI tests
      
      * Linter fix
  5. 09 Mar, 2021 2 commits