1. 03 Jan, 2022 3 commits
  2. 30 Dec, 2021 6 commits
  3. 29 Dec, 2021 2 commits
  4. 28 Dec, 2021 8 commits
  5. 27 Dec, 2021 8 commits
  6. 24 Dec, 2021 1 commit
  7. 23 Dec, 2021 12 commits
    • [WavLM] fix wavlm docs (#14910) · 11682990
      Patrick von Platen authored
    • [doc] install - add jax (#14912) · 41581066
      Stas Bekman authored
      Since the CUDA build of `jax` requires special instructions to be installed correctly, add a link to the jax installation instructions.

      Note: the Flax install page only covers CPU jax installation.
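For context, the CPU-only and CUDA installs differ roughly as follows. This is a sketch only: the exact extras name and wheel index URL change between jax releases, so the linked jax installation instructions are the authoritative source.

```shell
# CPU-only jax (what the Flax install page covers):
pip install --upgrade jax jaxlib

# CUDA-enabled jax needs wheels from Google's release index (exact command
# varies by jax/CUDA version - see the jax installation instructions):
pip install --upgrade "jax[cuda]" -f https://storage.googleapis.com/jax-releases/jax_releases.html
```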
    • Better logic for getting tokenizer config in AutoTokenizer (#14906) · 676643c6
      Sylvain Gugger authored
      * Better logic for getting tokenizer config in AutoTokenizer
      
      * Remove needless import
      
      * Remove debug statement
      
      * Address review comments
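The PR above reworks how AutoTokenizer discovers which tokenizer class to load. As a rough, hypothetical sketch of the general idea only (reading `tokenizer_config.json` and signalling a fallback when the class key is absent; the function name and logic here are illustrative, not the actual transformers code):

```python
import json
import os

def get_tokenizer_class_name(model_dir):
    """Hypothetical sketch: read tokenizer_config.json from a local model
    directory and return the recorded tokenizer class name, if any."""
    config_path = os.path.join(model_dir, "tokenizer_config.json")
    if not os.path.isfile(config_path):
        # In the real logic, AutoTokenizer falls back to the model config.
        return None
    with open(config_path, encoding="utf-8") as f:
        tokenizer_config = json.load(f)
    # May also be None if the file exists but records no class.
    return tokenizer_config.get("tokenizer_class")
```

In the real AutoTokenizer, a missing `tokenizer_class` entry triggers a further lookup in the model's `config.json`; the sketch only shows the first step.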
    • Fix failing GPU trainer tests (#14903) · f566c6e3
      Sylvain Gugger authored
      * Fix failing GPU trainer tests
      
      * Remove print statements
    • [Generate] Remove attention_mask and integrate model_main_input_name (#14856) · fe4197ab
      Patrick von Platen authored
      * up
      
      * save
      
      * correct
      
      * up
      
      * correct more
      
      * up
      
      * up
      
      * up
      
      * up
      
      * up
      
      * correct
      
      * fix tf
      
      * fix
      
      * remove tokenizer
    • [doc] post-porting (#14890) · 86b40073
      Stas Bekman authored
      found a few oddities:

      1. https://huggingface.co/docs/transformers/main_classes/logging#transformers.utils.logging.enable_explicit_format
      has a stray `::` - this PR fixes it

      2. this looks borked too:
      https://huggingface.co/docs/transformers/main_classes/logging#transformers.utils.logging.set_verbosity
      has a stray `<`, but I'm not sure where this one is coming from
    • Anton Lozhkov · ee55ea69
    • Patrick von Platen
    • Add TFCLIPModel (#13967) · 8f2cc1c3
      Yih-Dar authored
      
      
      * Start the work for TFCLIPModel
      
      * Convert to TF code (TODO: loss + doc)
      
      * Clean up
      
      * Fix pooled_output for TFCLIPTextTransformer - using tf.gather_nd
      
      * assert -> raise error
      
      * Expose TFCLIPModel
      
      * Deal with dummy_inputs
      
      * Add tests
      
      * Fix all tests. TODO: manual check weight loading + add more comments
      
      * Fix pt tf equivalence test
      
      * fixes
      
      * update TFCLIPVisionEmbeddings's Conv2D
      
      * Fix loss + overwrite test_pt_tf_model_equivalence from common
      
      * Add a comment about the change about MainLayer in test_keras_save_load
      
      * Set return_loss=True in TFCLIPModelTester + make tests pass
      
      * overwrite test_pt_tf_model_equivalence from tf common
      
      * fix base_model_prefix
      
      * Fix examples
      
      * remove unused
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * apply review suggestions
      
      * change self.pre_layrnorm to self.pre_layernorm
      
      * apply more review suggestions
      
      * return attention probs before dropout (to align with PT)
      
      * fix weight init
      
      * fix
      
      * build doc
      
      * fix missing doc
      
      * fix for test
      Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
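One of the bullets above fixes `pooled_output` for TFCLIPTextTransformer by using `tf.gather_nd` to pick each sequence's EOS-position hidden state. A minimal pure-Python sketch of that indexing pattern, with toy values (not the actual TF code):

```python
# Toy batch of per-token hidden states: shape (batch=2, seq_len=3, dim=2).
hidden_states = [
    [[0.0, 0.1], [1.0, 1.1], [2.0, 2.1]],  # sequence 0
    [[3.0, 3.1], [4.0, 4.1], [5.0, 5.1]],  # sequence 1
]
# Position of the EOS token in each sequence.
eos_positions = [2, 1]

# Equivalent of tf.gather_nd(hidden_states, [[0, 2], [1, 1]]):
# select one token vector per sequence to use as the pooled output.
pooled_output = [seq[pos] for seq, pos in zip(hidden_states, eos_positions)]
print(pooled_output)  # [[2.0, 2.1], [4.0, 4.1]]
```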
    • Set `run_name` in MLflowCallback (#14894) · 2d30443c
      Yang Dong authored
      * Set run_name in MLflowCallback
      
      * Update the docs for `run_name` argument
    • Leandro von Werra
    • Add ONNX support for MarianMT models (#14586) · 6b655cc6
      lewtun authored
      * First commit to add MarianMT to ONNX
      
      * Now MarianModel.forward() automatically generates decoder_input_ids, like BartModel.forward()
      
      * Adjusted MarianOnnxConfig.inputs and outputs to work with seq2seq-lm feature
      
      * Style fix
      
      * Added support for other features for already supported models
      
      * Partial support for causal and seq2seq models
      
      * Partial support for causal and seq2seq models
      
      * Add default task for MarianMT ONNX
      
      * Remove automatic creation of decoder_input_ids
      
      * Extend inputs and outputs for MarianMT ONNX config
      
      * Add MarianMT to ONNX unit tests
      
      * Refactor
      
      * OnnxSeq2SeqConfigWithPast to support seq2seq models
      
      * Parameterized the onnx tests
      
      * Restored run_mlm.py
      
      * Restored run_mlm.py
      
      * [WIP] BART update
      
      * BART and MBART
      
      * Add past_key_values and fix dummy decoder inputs
      
      Using a sequence length of 1 in generate_dummy_outputs() produces large discrepancies, presumably due to some hidden optimisations.
      
      * Refactor MarianOnnxConfig to remove custom past_key_values logic
      
      * Fix quality
      
      * Revert "Revert "Added support for other features for already supported models (#14358)" (#14679)"
      
      This reverts commit 0f4e39c5.
      
      * is_torch_available test to avoid failing imports
      
      * sorting parameterize parameters to solve ERROR gw0 gw1
      
      * tests fix
      
      * tests fix
      
      * GPT2 with past fix
      
      * Fixed stateful class attribute change that was breaking things when converting multiple models sequentially
      
      * Removed onnx file
      
      * Refactor Marian export to account for base changes
      
      * Fix copies
      
      * Implemented suggestions
      
      * Extend support for causal LM
      
      * Revert "Revert "Added support for other features for already supported models (#14358)" (#14679)"
      
      This reverts commit 0f4e39c5.
      
      * is_torch_available test to avoid failing imports
      
      * sorting parameterize parameters to solve ERROR gw0 gw1
      
      * tests fix
      
      * tests fix
      
      * GPT2 with past fix
      
      * Fixed stateful class attribute change that was breaking things when converting multiple models sequentially
      
      * Removed onnx file
      
      * Implemented suggestions
      
      * Fixed __init__ to resolve conflict with master
      
      * Remove commented import
      
      * Remove ONNX model
      
      * Remove redundant class method
      
      * Tidy up imports
      
      * Fix quality
      
      * Refactor dummy input function
      
      * Add copied from statements to Marian config functions
      
      * Remove false copied from comments
      
      * Fix copy from comment
      Co-authored-by: Massimiliano Bruni <massimiliano.bruni@hcl.com>
      Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
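As a usage sketch of the feature this PR adds (the model name is just an example checkpoint; this assumes torch and onnx are installed, and the command downloads the checkpoint on first run, so it is not reproduced here as a verified invocation):

```shell
# Export a MarianMT checkpoint through the transformers.onnx package,
# using the seq2seq-lm feature wired up in this PR.
python -m transformers.onnx --model=Helsinki-NLP/opus-mt-en-de --feature=seq2seq-lm onnx/
```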