1. 09 Jun, 2020 5 commits
  2. 08 Jun, 2020 9 commits
  3. 07 Jun, 2020 1 commit
  4. 06 Jun, 2020 3 commits
  5. 05 Jun, 2020 13 commits
  6. 04 Jun, 2020 9 commits
    • Julien Plu
      Tensorflow improvements (#4530) · f9414f75
      Julien Plu authored
      
      
      * Better handling of None gradients
      
      * Apply Style
      
      * Apply Style
      
      * Create a loss class per task to compute its respective loss
      
      * Add loss classes to the ALBERT TF models
      
      * Add loss classes to the BERT TF models
      
      * Add question answering and multiple choice to TF Camembert
      
      * Remove prints
      
      * Add multiple choice model to TF DistilBERT + loss computation
      
      * Add question answering model to TF Electra + loss computation
      
      * Add token classification, question answering and multiple choice models to TF Flaubert
      
      * Add multiple choice model to TF Roberta + loss computation
      
      * Add multiple choice model to TF XLM + loss computation
      
      * Add multiple choice and question answering models to TF XLM-Roberta
      
      * Add multiple choice model to TF XLNet + loss computation
      
      * Remove unused parameters
      
      * Add task loss classes
      
      * Reorder TF imports + add new model classes
      
      * Add new model classes
      
      * Bugfix in TF T5 model
      
      * Bugfix for TF T5 tests
      
      * Bugfix in TF T5 model
      
      * Fix TF T5 model tests
      
      * Fix T5 tests + some renaming
      
      * Fix inheritance issue in the AutoX tests
      
      * Add tests for TF Flaubert and TF XLM Roberta
      
      * Add tests for TF Flaubert and TF XLM Roberta
      
      * Remove unused piece of code in the TF trainer
      
      * bugfix and remove unused code
      
      * Bugfix for TF 2.2
      
      * Apply Style
      
      * Divide TFSequenceClassificationAndMultipleChoiceLoss into its two respective classes
      
      * Apply style
      
      * Mirror the PT Trainer in the TF one: fp16, optimizers and tb_writer as class parameters and better dataset handling
      
      * Fix TF optimizations tests and apply style
      
      * Remove useless parameter
      
      * Bugfix and apply style
      
      * Fix TF Trainer prediction
      
      * Now the TF models return the loss, as their PyTorch counterparts do
      
      * Apply Style
      
      * Ignore some tests output
      
      * Take into account the SQuAD cls_index, p_mask and is_impossible parameters for the QuestionAnswering task models.
      
      * Fix names for SQuAD data
      
      * Apply Style
      
      * Fix conflicts with 2.11 release
      
      * Fix conflicts with 2.11
      
      * Fix wrong name
      
      * Add better documentation on the new create_optimizer function
      
      * Fix isort
      
      * logging_dir: use same default as PyTorch
      Co-authored-by: Julien Chaumond <chaumond@gmail.com>
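The per-task loss classes added in this commit give each model head its own loss computation. A framework-agnostic sketch of the idea in plain Python (the class and method names here are illustrative, not the actual transformers API, which implements these as TensorFlow mixins):

```python
import math


class SequenceClassificationLoss:
    # Cross-entropy over class logits, one loss value per example.
    def compute_loss(self, labels, logits):
        losses = []
        for label, row in zip(labels, logits):
            # log-sum-exp with max subtraction for numerical stability
            m = max(row)
            log_z = m + math.log(sum(math.exp(x - m) for x in row))
            losses.append(log_z - row[label])
        return losses


class QuestionAnsweringLoss(SequenceClassificationLoss):
    # Average of the start- and end-position cross-entropies,
    # as in extractive QA heads.
    def compute_loss(self, positions, start_logits, end_logits):
        starts, ends = zip(*positions)
        start_loss = super().compute_loss(starts, start_logits)
        end_loss = super().compute_loss(ends, end_logits)
        return [(s + e) / 2 for s, e in zip(start_loss, end_loss)]
```

Keeping one loss class per task means a model gains loss computation by mixing in the class that matches its head, rather than every model reimplementing the reduction logic.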
    • Théophile Blard
    • Stefan Schweter
      NER: Add new WNUT’17 example (#4681) · 2a4b9e09
      Stefan Schweter authored
      * ner: add preprocessing script for examples that splits longer sentences
      
      * ner: example shell scripts use local preprocessing now
      
      * ner: add new example section for WNUT’17 NER task. Remove old English CoNLL-03 results
      
      * ner: satisfy black and isort
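The preprocessing script referenced above splits sentences that would exceed the model's maximum sequence length. A simplified sketch of that step (the real script counts subtokens with the model's tokenizer; this helper is hypothetical and counts plain tokens):

```python
def split_long_sentences(tokens, max_len):
    """Greedily split a token list into chunks of at most max_len tokens,
    so no training example exceeds the model's sequence-length limit."""
    chunks = []
    while len(tokens) > max_len:
        chunks.append(tokens[:max_len])
        tokens = tokens[max_len:]
    if tokens:  # trailing partial chunk, if any
        chunks.append(tokens)
    return chunks
```

Doing this once in preprocessing, rather than truncating at training time, keeps every labeled token in the data instead of silently dropping the tail of long sentences.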
    • Setu Shah
      Add drop_last arg for data loader · 0e1869cc
      Setu Shah authored
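The drop_last flag skips a final, smaller batch so every batch the loader yields has the same size, which matters when downstream code assumes fixed batch shapes (e.g. multi-device training). A minimal sketch of the semantics, using a hypothetical helper rather than the actual DataLoader code:

```python
def batches(dataset, batch_size, drop_last=False):
    """Yield fixed-size batches from a sequence; with drop_last=True the
    final short batch is skipped so all yielded batches have equal length."""
    for i in range(0, len(dataset), batch_size):
        batch = dataset[i:i + batch_size]
        if drop_last and len(batch) < batch_size:
            break
        yield batch
```

PyTorch's `torch.utils.data.DataLoader` exposes the same flag under the same name, which is what this commit plumbs through.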
    • prajjwal1
      48a05026
    • Sylvain Gugger
    • Manuel Romero
    • Oren Amsalem
      Create README.md (#4743) · fb52143c
      Oren Amsalem authored
    • Suraj Parmar
      Model Card for RoBERTa trained on Sanskrit (#4763) · 5f077a34
      Suraj Parmar authored
      * Model card for SanBERTa
      
      Model Card for RoBERTa trained on Sanskrit
      
      * Model card for SanBERTa
      
      model card for RoBERTa trained on Sanskrit