"git@developer.sourcefind.cn:Fzc7075/nunchaku.git" did not exist on "9e95bfe2512e221aea010ec1261f66a954638734"
  1. 10 Jun, 2020 1 commit
  2. 09 Jun, 2020 6 commits
  3. 08 Jun, 2020 7 commits
  4. 07 Jun, 2020 1 commit
  5. 05 Jun, 2020 6 commits
  6. 04 Jun, 2020 5 commits
    • Tensorflow improvements (#4530) · f9414f75
      Julien Plu authored
      
      
      * Better None gradients handling
      
      * Apply Style
      
      * Apply Style
      
      * Create a loss class per task to compute its respective loss
      
      * Add loss classes to the ALBERT TF models
      
      * Add loss classes to the BERT TF models
      
      * Add question answering and multiple choice to TF Camembert
      
      * Remove prints
      
      * Add multiple choice model to TF DistilBERT + loss computation
      
      * Add question answering model to TF Electra + loss computation
      
      * Add token classification, question answering and multiple choice models to TF Flaubert
      
      * Add multiple choice model to TF Roberta + loss computation
      
      * Add multiple choice model to TF XLM + loss computation
      
      * Add multiple choice and question answering models to TF XLM-Roberta
      
      * Add multiple choice model to TF XLNet + loss computation
      
      * Remove unused parameters
      
      * Add task loss classes
      
      * Reorder TF imports + add new model classes
      
      * Add new model classes
      
      * Bugfix in TF T5 model
      
      * Bugfix for TF T5 tests
      
      * Bugfix in TF T5 model
      
      * Fix TF T5 model tests
      
      * Fix T5 tests + some renaming
      
      * Fix inheritance issue in the AutoX tests
      
      * Add tests for TF Flaubert and TF XLM Roberta
      
      * Add tests for TF Flaubert and TF XLM Roberta
      
      * Remove unused piece of code in the TF trainer
      
      * bugfix and remove unused code
      
      * Bugfix for TF 2.2
      
      * Apply Style
      
      * Divide TFSequenceClassificationAndMultipleChoiceLoss into its two respective classes
      
      * Apply style
      
      * Mirror the PT Trainer in the TF one: fp16, optimizers and tb_writer as class parameters, and better dataset handling
      
      * Fix TF optimizations tests and apply style
      
      * Remove useless parameter
      
      * Bugfix and apply style
      
      * Fix TF Trainer prediction
      
      * Now the TF models return the loss, like their PyTorch counterparts (see the sketch after this commit message)
      
      * Apply Style
      
      * Ignore some tests output
      
      * Take into account the SQuAD cls_index, p_mask and is_impossible parameters for the QuestionAnswering task models.
      
      * Fix names for SQuAD data
      
      * Apply Style
      
      * Fix conflicts with 2.11 release
      
      * Fix conflicts with 2.11
      
      * Fix wrong name
      
      * Add better documentation on the new create_optimizer function
      
      * Fix isort
      
      * logging_dir: use same default as PyTorch
      Co-authored-by: Julien Chaumond <chaumond@gmail.com>
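      A minimal sketch of the behaviour this commit describes ("the TF models return the loss, like their PyTorch counterparts"). It is written against a recent transformers API and the public bert-base-uncased checkpoint, both assumptions rather than details taken from this log:

        # Passing labels to a TF task model returns the loss alongside the logits.
        import tensorflow as tf
        from transformers import BertTokenizer, TFBertForSequenceClassification

        tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")

        inputs = tokenizer(["a positive example", "a negative one"],
                           padding=True, return_tensors="tf")
        labels = tf.constant([1, 0])

        # With labels supplied, the loss is the first element of the output,
        # mirroring the PyTorch models.
        outputs = model(inputs, labels=labels)
        loss, logits = outputs[0], outputs[1]
        print(loss.numpy(), logits.shape)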
    • Add drop_last arg for data loader · 0e1869cc
      Setu Shah authored
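      A small illustration of the option this commit exposes. The sketch below uses the plain PyTorch DataLoader drop_last flag, which is what a Trainer-level argument of this kind would forward to; the exact argument name is not shown in this log:

        # drop_last=True discards the final, smaller batch when the dataset
        # size is not a multiple of the batch size.
        import torch
        from torch.utils.data import DataLoader, TensorDataset

        dataset = TensorDataset(torch.arange(10))           # 10 samples
        loader = DataLoader(dataset, batch_size=4, drop_last=True)

        for (batch,) in loader:
            print(batch.tolist())   # two full batches; the 2-sample remainder is dropped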
    • Sylvain Gugger
    • Sam Shleifer
    • Introduce a new tensor type for return_tensors on tokenizer for NumPy (#4585) · 5bf9afbf
      Funtowicz Morgan authored (see the usage sketch after this commit message)
      * Refactor tensor creation in tokenizers.
      
      * Make sure to convert string to TensorType
      
      * Refactor convert_to_tensors_
      
      * Introduce numpy tensor creation
      
      * Format
      
      * Add unittest for TensorType creation from str
      
      * sorting imports
      
      * Added unittests for numpy tensor conversion.
      
      * Do not use the in-place version of squeeze, as numpy doesn't provide such a feature.
      
      * Added extra parameter prepend_batch_axis: bool on prepare_for_model.
      
      * Ensure test_np_encode_plus_sent_to_model is not executed for encoder/decoder models.
      
      * style.
      
      * numpy tests are marked require_torch for now, while Flax is not merged.
      
      * Hopefully will make flake8 happy.
      
      * One more time 🎶
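      A short usage sketch of the NumPy tensor type this commit introduces. It assumes a recent transformers install and the public bert-base-uncased vocabulary, neither of which is stated in this log:

        # return_tensors="np" returns NumPy arrays instead of torch/tf tensors,
        # with a batch axis prepended by prepare_for_model.
        from transformers import BertTokenizer

        tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        encoding = tokenizer.encode_plus("Hello world", return_tensors="np")

        print(type(encoding["input_ids"]))    # <class 'numpy.ndarray'>
        print(encoding["input_ids"].shape)    # (1, sequence_length)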
  7. 03 Jun, 2020 5 commits
  8. 02 Jun, 2020 7 commits
  9. 01 Jun, 2020 2 commits