1. 22 Mar, 2021 1 commit
  2. 19 Mar, 2021 1 commit
  3. 18 Mar, 2021 3 commits
    • Fix distributed evaluation (#10795) · 008672e6
      Sylvain Gugger authored
      * Fix distributed evaluation
      
      * Use logger
    • from_pretrained: check that the pretrained model is for the right model architecture (#10586) · 094afa51
      Vimarsh Chaturvedi authored

      * Added a check to ensure the model name passed to from_pretrained and the model are the same
      
      * Added test to check from_pretrained throws an assert error when passed an incompatible model name
      
      * Modified assert in from_pretrained with f-strings. Modified test to ensure desired assert message is being generated
      
      * Added check to ensure config and model have model_type
      
      * Fix FlauBERT heads
      
      Co-authored-by: vimarsh chaturvedi <vimarsh chaturvedi>
      Co-authored-by: Stas Bekman <stas@stason.org>
      Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
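      A minimal sketch of the behavior this check introduces (checkpoint name and exception types are illustrative, not taken from the PR): loading a checkpoint whose config model_type does not match the requested architecture should now fail loudly instead of silently loading mismatched weights.

        from transformers import BertModel

        # "gpt2" has model_type "gpt2", not "bert", so the new check in
        # from_pretrained is expected to raise instead of loading mismatched weights
        try:
            BertModel.from_pretrained("gpt2")
        except (AssertionError, ValueError) as err:
            print(err)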
    • [Flax] Adapt Flax models to new structure (#9484) · 0b98ca36
      Patrick von Platen authored

      * Create modeling_flax_electra with code copied from modeling_flax_bert
      
      * Add ElectraForMaskedLM and ElectraForPretraining
      
      * Add modeling test for Flax electra and fix naming and arg in Flax Electra model
      
      * Add documentation
      
      * Fix code style
      
      * Fix code quality
      
      * Adjust tol in assert_almost_equal due to very small differences between model outputs, ranging from 0.0010 to 0.0016
      
      * Remove redundant ElectraPooler
      
      * save intermediate
      
      * adapt
      
      * correct bert flax design
      
      * adapt roberta as well
      
      * finish roberta flax
      
      * finish
      
      * apply suggestions
      
      * apply suggestions
      Co-authored-by: Chris Nguyen <anhtu2687@gmail.com>
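      For context, a usage sketch of the restructured Flax models (checkpoint name illustrative; the call mirrors the PyTorch API):

        from transformers import BertTokenizerFast, FlaxBertModel

        tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
        model = FlaxBertModel.from_pretrained("bert-base-uncased")

        inputs = tokenizer("Flax models mirror the PyTorch API.", return_tensors="np")
        outputs = model(**inputs)
        print(outputs[0].shape)  # last hidden state: (batch, seq_len, hidden)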
  4. 17 Mar, 2021 6 commits
    • Fix SMMP batch not divisible by microbatches (#10778) · 0282e24e
      Mansi Mane authored

      * Added debug prints
      
      * Added config
      
      * Added prints
      
      * Added prints
      
      * Added extra samples to SequentialDistributedSampler
      
      * Added extra samples to SequentialDistributedSampler
      
      Updated SequentialDistributedSampler call
      
      * Added debug prints
      
      * Removed extra prints
      
      * Making predictions and labels a multiple of batch size
      
      * updated number of microbatches
      
      * Removed extra prints
      
      * Made start_remainder similar to DistributedSamplerWithLoop
      
      * Minor spacing update
      
      * Test and styling
      
      * Rename test
      Co-authored-by: Sylvain Gugger <sylvain.gugger@gmail.com>
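      The core idea, as a standalone sketch (helper name hypothetical, not the PR's code): pad the index list so it divides evenly into full batches, repeating leading samples that are truncated again after gathering predictions.

        import math

        def pad_to_batch_multiple(indices, batch_size):
            # Hypothetical helper: repeat the first indices so every rank
            # sees only full batches; duplicates are dropped after gathering
            total = math.ceil(len(indices) / batch_size) * batch_size
            return indices + indices[: total - len(indices)]

        print(pad_to_batch_multiple(list(range(10)), 4))
        # 12 indices; the first two are repeated at the end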
    • Check copies blackify (#10775) · 40b049c7
      Sylvain Gugger authored
      * Apply black before checking copies
      
      * Fix for class methods
      
      * Deal with lonely brackets
      
      * Remove debug and add forward changes
      
      * Separate copies and fix test
      
      * Add black as a test dependency
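      The gist: normalize both the original and the copied snippet with black before diffing, so purely stylistic drift no longer trips the copy check. A sketch using black's Python API (assuming a black version that exposes Mode and format_str):

        import black

        original = "def forward(self,x):\n    return x\n"
        copy = "def forward(self, x):\n    return x\n"

        mode = black.Mode()
        # After blackifying, the formatting difference disappears
        print(black.format_str(original, mode=mode) == black.format_str(copy, mode=mode))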
    • [DeepSpeed] improve checkpoint loading code plus tests (#10760) · cd8c93f7
      Stas Bekman authored
      * deepspeed checkpoint loading code plus tests
      
      * style
      
      * style
    • small improvements (#10773) · 0486ccdd
      Patrick von Platen authored
    • up (#10771) · f20d75a1
      Patrick von Platen authored
  5. 16 Mar, 2021 3 commits
  6. 15 Mar, 2021 6 commits
  7. 12 Mar, 2021 3 commits
    • TensorFlow tests: having from_pt set to True requires torch to be installed. (#10664) · 184ef8ec
      Lysandre Debut authored
      * TF model exists for Blenderbot 400M
      
      * Marian
      
      * RAG
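      For context, `from_pt=True` converts a PyTorch checkpoint on the fly, which is why such tests require torch; with native TF weights available the flag can be dropped. A sketch (checkpoint name illustrative):

        from transformers import TFBlenderbotForConditionalGeneration

        # Converting PyTorch weights at load time needs torch installed;
        # once an official TF checkpoint exists, from_pt=True is unnecessary
        model = TFBlenderbotForConditionalGeneration.from_pretrained(
            "facebook/blenderbot-400M-distill", from_pt=True
        )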
    • Adding new parameter to `generate`: `max_time`. (#9846) · 543d0549
      Nicolas Patry authored
      * [WIP] Adding new parameter to `generate`:  `max_time`.
      
      Generating by token count is sometimes clunky because we don't
      know how many tokens are good enough, or even how many tokens are in
      the payload (for pipelines users, for instance). This leads to
      hard-to-understand behavior.
      
      This PR proposes a new argument `max_time`, a float number of seconds
      allowed for `generate` to run.
      Ideally, combinations like `max_tokens=None`, `max_time=2` could be used to
      generate as many tokens as possible within the time budget.
      
      NB: Another possible approach consists of passing a callback to `generate`,
        putting the caller in charge of the actual decision of when to stop
        generating tokens. It opens the door to 'which args should we pass'
        to this callback. But it's hard to imagine early-stopping use-cases
        other than time that aren't already covered by generate's parameters.
      
      * Revamp with StoppingCriteria
      
      * Removing deprecated mentions.
      
      * Forgot arguments to stopping criteria.
      
      * Re-adding max_length; it's not just used as a stopping criterion.
      
      * Default value for `stopping_criteria`.
      
      * Address @patrickvonplaten comments.
      
      - More docstrings
      - Actual doc
      - Include in global namespace
      - Remove TF work.
      
      * Put back `max_length` (deprecation different PR).
      
      * Doc quality.
      
      * Fixing old behavior without `stopping_criteria` but with `max_length`.
      
      Making sure we don't break that in the future.
      
      * Adding more tests for possible inconsistencies between `max_length` and `stopping_criteria`.
      
      * Fixing the torch imports.
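      A usage sketch of the new argument (model choice illustrative):

        import time
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")
        inputs = tokenizer("The meaning of life is", return_tensors="pt")

        start = time.time()
        # Generation stops roughly 2 seconds after the call or at max_length,
        # whichever comes first
        output = model.generate(**inputs, do_sample=True, max_length=512, max_time=2.0)
        print(f"{output.shape[-1]} tokens in {time.time() - start:.1f}s")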
    • Adjust loss difference (#10669) · ea46e3fa
      Lysandre Debut authored
  8. 11 Mar, 2021 5 commits
  9. 10 Mar, 2021 2 commits
    • Copy tokenizer files in each of their repos (#10624) · 2295d783
      Sylvain Gugger authored
      * Move tokenizer files in each repo
      
      * Fix mBART50 tests
      
      * Fix mBART tests
      
      * Fix Marian tests
      
      * Update templates
    • Speech2TextTransformer (#10175) · d26b37e7
      Suraj Patil authored

      * s2t
      
      * fix config
      
      * conversion script
      
      * fix import
      
      * add tokenizer
      
      * fix tok init
      
      * fix tokenizer
      
      * first version working
      
      * fix embeds
      
      * fix lm head
      
      * remove extra heads
      
      * fix convert script
      
      * handle encoder attn mask
      
      * style
      
      * better enc attn mask
      
      * override _prepare_attention_mask_for_generation
      
      * handle attn_masks in encoder and decoder
      
      * input_ids => input_features
      
      * enable use_cache
      
      * remove old code
      
      * expand embeddings if needed
      
      * remove logits bias
      
      * masked_lm_loss => loss
      
      * hack tokenizer to support feature processing
      
      * fix model_input_names
      
      * style
      
      * fix error message
      
      * doc
      
      * remove inputs_embeds
      
      * remove input_embeds
      
      * remove unnecessary docstring
      
      * quality
      
      * SpeechToText => Speech2Text
      
      * style
      
      * remove shared_embeds
      
      * subsample => conv
      
      * remove Speech2TextTransformerDecoderWrapper
      
      * update output_lengths formula
      
      * fix table
      
      * remove max_position_embeddings
      
      * update conversion scripts
      
      * add possibility to do upper case for now
      
      * add FeatureExtractor and Processor
      
      * add tests for extractor
      
      * require_torch_audio => require_torchaudio
      
      * add processor test
      
      * update import
      
      * remove classification head
      
      * attention mask is now 1D
      
      * update docstrings
      
      * attention mask should be of type long
      
      * handle attention mask from generate
      
      * always return attention_mask
      
      * fix test
      
      * style
      
      * doc
      
      * Speech2TextTransformer => Speech2Text
      
      * Speech2TextTransformerConfig => Speech2TextConfig
      
      * remove dummy_inputs
      
      * nit
      
      * style
      
      * multilingual tok
      
      * fix tokenizer
      
      * add tgt_lang setter
      
      * save lang_codes
      
      * fix tokenizer
      
      * add forced_bos_token_id to tokenizer
      
      * apply review suggestions
      
      * add torchaudio to extra deps
      
      * add speech deps to CI
      
      * fix dep
      
      * add libsndfile to ci
      
      * libsndfile1
      
      * add speech to extras all
      
      * libsndfile1 -> libsndfile1
      
      * libsndfile
      
      * libsndfile1-dev
      
      * apt update
      
      * add sudo to install
      
      * update deps table
      
      * install libsndfile1-dev on CI
      
      * tuple to list
      
      * init conv layer
      
      * add model tests
      
      * quality
      
      * add integration tests
      
      * skip_special_tokens
      
      * add speech_to_text_transformer in toctree
      
      * fix tokenizer
      
      * fix fp16 tests
      
      * add tokenizer tests
      
      * fix copyright
      
      * input_values => input_features
      
      * doc
      
      * add model in readme
      
      * doc
      
      * change checkpoint names
      
      * fix copyright
      
      * fix code example
      
      * add max_model_input_sizes in tokenizer
      
      * fix integration tests
      
      * add do_lower_case to tokenizer
      
      * remove clamp trick
      
      * fix "Add modeling imports here"
      
      * fix copyrights
      
      * fix tests
      
      * SpeechToTextTransformer => SpeechToText
      
      * fix naming
      
      * fix table formatting
      
      * fix typo
      
      * style
      
      * fix typos
      
      * remove speech dep from extras[testing]
      
      * fix copies
      
      * rename doc file
      
      * put imports under is_torch_available
      
      * run feat extract tests when torch is available
      
      * dummy objects for processor and extractor
      
      * fix imports in tests
      
      * fix import in modeling test
      
      * fix imports
      
      * fix torch import
      
      * fix imports again
      
      * fix positional embeddings
      
      * fix typo in import
      
      * adapt new extractor refactor
      
      * style
      
      * fix torchscript test
      
      * doc
      
      * doc
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * fix docs, copied from, style
      
      * fix docstring
      
      * handle imports
      
      * remove speech from all extra deps
      
      * remove s2t from seq2seq lm mapping
      
      * better names
      
      * skip training tests
      
      * add install instructions
      
      * List => Tuple
      
      * doc
      
      * fix conversion script
      
      * fix urls
      
      * add instruction for libsndfile
      
      * fix fp16 test
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
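      An end-to-end usage sketch of the new model, roughly following the docs added here (the random waveform merely stands in for real 16 kHz audio):

        import numpy as np
        from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor

        model = Speech2TextForConditionalGeneration.from_pretrained(
            "facebook/s2t-small-librispeech-asr"
        )
        processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")

        # One second of fake 16 kHz audio; note input_features, not input_ids
        speech = np.random.randn(16000).astype(np.float32)
        inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

        generated_ids = model.generate(
            inputs["input_features"], attention_mask=inputs["attention_mask"]
        )
        print(processor.batch_decode(generated_ids, skip_special_tokens=True))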
  10. 09 Mar, 2021 4 commits
  11. 08 Mar, 2021 6 commits
    • Add TFRag (#9002) · 696e8a43
      Ratthachat (Jung) authored
      * Create modeling_tf_dpr.py
      
      * Add TFDPR
      
      * Add back TFPegasus, TFMarian, TFMBart, TFBlenderBot
      
      the last commit accidentally deleted these 4 lines, so I'm adding them back
      
      * Add TFDPR
      
      * Add TFDPR
      
      * clean up some comments, add TF input-style doc string
      
      * Add TFDPR
      
      * Make return_dict=False as default
      
      * Fix return_dict bug (in .from_pretrained)
      
      * Add get_input_embeddings()
      
      * Create test_modeling_tf_dpr.py
      
      The current version already passes all 27 tests!
      Please see the test run at:
      https://colab.research.google.com/drive/1czS_m9zy5k-iSJbzA_DP1k1xAAC_sdkf?usp=sharing

      * fix quality
      
      * delete init weights
      
      * run fix copies
      
      * fix repo consistency
      
      * del config_class, load_tf_weights
      
      They should be 'pytorch only'
      
      * add config_class back
      
      after removing it, tests failed ... so in the end only removing "use_tf_weights = None", per Lysandre's suggestion
      
      * newline after .. note::
      
      * import tf, np (Necessary for ModelIntegrationTest)
      
      * slow_test from_pretrained with from_pt=True
      
      At the moment we don't have TF weights (since we don't have an official TF model).
      Previously, I did not run the slow tests, so I missed this bug
      
      * Add simple TFDPRModelIntegrationTest
      
      Note that this is just a test that TF and Pytorch give approx. the same output.
      However, I could not test with the official DPR repo's output yet
      
      * upload correct tf model
      
      * remove position_ids as missing keys
      
      * create modeling_tf_rag
      
      * add tests for tf
      
      * add tf tests
      
      * revert wrong pt commit
      
      * further refactor
      
      * further refactor
      
      * refactor
      
      * Update modeling_tf_rag.py
      
      - input_processing
      - fix prepare_input_for_generation (mostly fix generate bug)
      - bring back from_pretrained hack in order to test generate
      
      * delete colab pieces of code
      
      * Showcase greedy "generate"

      Temporarily change the beam_search test to a greedy_search test to showcase that TF and PT do produce equivalent output.
      
      * cosmetic update
      
      * correct typos
      
      * update
      
      * push some progress
      
      * make easy check
      
      * fix rag save from pretrained
      
      * Update src/transformers/modeling_tf_utils.py
      
      * remove commented out lines
      
      * delete unnecessary lines
      
      * add simple test case for nq_checkpoint
      
      Add nq_checkpoint test to show that current version without hack still fails
      
      * temporarily put ugly hack back again
      
      * Add TFRagSequenceForGeneration!!
      
      * __init__.py , import TFRagSequenceForGeneration
      
      * Add TFRagSequence tests!
      
      * rag init.py - add TFRagSequenceForGeneration
      
      * fix from_pretrained
      
      * fix prepare_inputs_for_generation
      
      * Beam search for RagToken!
      
      * minor clean up
      
      * add tf.cast in TFRagModel
      
      * More tf.cast
      
      * Add all remaining tests (still have issues)
      
      * delete all T5 related
      
      * make style
      
      * fix load weight prefix
      
      * fix bart
      
      * fix return_dict for tf_rag
      
      make all tests pass .. Hooray
      
      * fix some tests
      
      * fix code quality
      
      * fix quality check
      
      * finish tests tf rag
      
      * add tf rag to docs
      
      * remove TFT5 from docstring
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * remove TFT5 from docstring
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Delete outdated comments
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * improve doc strings
      
      * add generative model classes
      
      * fix adjust token logic
      
      * refactor generate for TFRag
      
      * using shape_list, not _get_shape
      Co-authored-by: Julien Plu <plu.julien@gmail.com>
      
      * axis=[1]->axis=1
      
      * delete NEED_HELP comment
      
      * improve readability
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * improve readability
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * improve readability
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Indicating model is in a developing state in docstrings
      
      As suggested by Julien
      
      * small last changes
      
      * apply sylvains suggestions
      
      * finish tf rag
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      Co-authored-by: patrickvonplaten <patrick@huggingface.co>
      Co-authored-by: Julien Plu <plu.julien@gmail.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
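      A usage sketch (the dummy retriever index keeps it lightweight; from_pt=True assumes only PyTorch weights are published, as was the case when this PR landed):

        from transformers import RagRetriever, RagTokenizer, TFRagTokenForGeneration

        tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
        retriever = RagRetriever.from_pretrained(
            "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
        )
        model = TFRagTokenForGeneration.from_pretrained(
            "facebook/rag-token-nq", retriever=retriever, from_pt=True
        )

        inputs = tokenizer("who holds the record in 100m freestyle", return_tensors="tf")
        generated = model.generate(input_ids=inputs["input_ids"])
        print(tokenizer.batch_decode(generated, skip_special_tokens=True))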
    • Check layer types for Optimizer construction (#10598) · 3ced9b3e
      Sylvain Gugger authored
      * Check layer types for Optimizer construction
      
      * Duplicate class
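      The idea, sketched with a hypothetical helper (not the PR's code): decide weight decay by checking each module's type rather than matching parameter-name substrings, so subclassed or renamed layer norms are handled correctly.

        import torch.nn as nn

        def decay_parameter_names(model):
            # Exclude biases and all parameters of LayerNorm-typed modules
            decay = []
            for mod_name, module in model.named_modules():
                if isinstance(module, nn.LayerNorm):
                    continue
                for param_name, _ in module.named_parameters(recurse=False):
                    if param_name == "bias":
                        continue
                    decay.append(f"{mod_name}.{param_name}" if mod_name else param_name)
            return decay

        print(decay_parameter_names(nn.Sequential(nn.Linear(4, 4), nn.LayerNorm(4))))
        # ['0.weight']: the LayerNorm weight and both biases are excluded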
    • Revert "Tests" · 821d518e
      Sylvain Gugger authored
      This reverts commit b35e7b68.
    • Revert "Style" · 4196bfed
      Sylvain Gugger authored
      This reverts commit a8ec52ef.
    • Style · a8ec52ef
      Sylvain Gugger authored
    • Tests · b35e7b68
      Sylvain Gugger authored