1. 26 Jun, 2020 1 commit
  2. 10 Jun, 2020 1 commit
  3. 15 May, 2020 1 commit
  4. 14 May, 2020 1 commit
  5. 12 May, 2020 1 commit
    • Add MultipleChoice to TFTrainer [WIP] (#4270) · e4512aab
      Viktor Alm authored
      
      * Catch GPU list of length 1 and set to gpu0
      
      * Add mpc to trainer
      
      * Add MPC for TF
      
      * fix TF automodel for MPC and add Albert
      
      * Apply style
      
      * Fix import
      
      * Note to self: double check
      
      * Make shape (None, None) for dataset generator output shapes
      
      * Add from_pt bool, which doesn't seem to work
      
      * Original checkpoint dir
      
      * Fix docstrings for automodel
      
      * Update readme and apply style
      
      * Colab should probably not be from users
      
      * Colabs should probably not be from users
      
      * Add colab
      
      * Update README.md
      
      * Update README.md
      
      * Cleanup __init__
      
      * Cleanup flake8 trailing comma
      
      * Update src/transformers/training_args_tf.py
      
      * Update src/transformers/modeling_tf_auto.py
      Co-authored-by: Viktor Alm <viktoralm@pop-os.localdomain>
      Co-authored-by: Julien Chaumond <chaumond@gmail.com>
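      A minimal sketch of what this change enables, assuming a recent
      transformers install; the albert-base-v2 checkpoint, prompt, and choices
      are illustrative, not taken from the PR:

          import tensorflow as tf
          from transformers import AutoTokenizer, TFAutoModelForMultipleChoice

          tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
          model = TFAutoModelForMultipleChoice.from_pretrained("albert-base-v2")

          prompt = "The capital of France is"
          choices = ["Paris.", "Berlin.", "Madrid.", "Rome."]

          # Encode each (prompt, choice) pair; multiple-choice inputs
          # must be shaped (batch, num_choices, seq_len).
          enc = tokenizer([prompt] * len(choices), choices,
                          padding="max_length", max_length=32, return_tensors="tf")
          inputs = {k: tf.expand_dims(v, 0) for k, v in enc.items()}

          logits = model(inputs)[0]  # shape (1, num_choices)
          print(choices[int(tf.argmax(logits, axis=-1)[0])])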
  6. 07 May, 2020 1 commit
    • BIG Reorganize examples (#4213) · 0ae96ff8
      Julien Chaumond authored
      * Created using Colaboratory
      
      * [examples] reorganize files
      
      * remove run_tpu_glue.py as superseded by TPU support in Trainer
      
      * Bugfix: int, not tuple
      
      * move files around
  7. 22 Apr, 2020 1 commit
    • Trainer (#3800) · dd9d483d
      Julien Chaumond authored
      * doc
      
      * [tests] Add sample files for a regression task
      
      * [HUGE] Trainer
      
      * Feedback from @sshleifer
      
      * Feedback from @thomwolf + logging tweak
      
      * [file_utils] when downloading concurrently, get_from_cache will use the cached file for subsequent processes
      
      * [glue] Use default max_seq_length of 128 like before
      
      * [glue] move DataTrainingArguments around
      
      * [ner] Change interface of InputExample, and align run_{tf,pl}
      
      * Re-align the pl scripts a little bit
      
      * ner
      
      * [ner] Add integration test
      
      * Fix language_modeling with API tweak
      
      * [ci] Tweak loss target
      
      * Don't break console output
      
      * amp.initialize: model must be on the right device beforehand
      
      * [multiple-choice] update for Trainer
      
      * Re-align to 827d6d6e
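      For context, a minimal, hypothetical usage sketch of the Trainer API
      this commit introduces; the distilbert-base-uncased checkpoint and the
      toy dataset below are assumptions, not part of the PR:

          import torch
          from transformers import (AutoModelForSequenceClassification,
                                    AutoTokenizer, Trainer, TrainingArguments)

          tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
          model = AutoModelForSequenceClassification.from_pretrained(
              "distilbert-base-uncased", num_labels=2)

          texts = ["great movie", "terrible movie"] * 8
          labels = [1, 0] * 8

          class ToyDataset(torch.utils.data.Dataset):
              # Wraps tokenized texts in the dict-per-example format
              # that Trainer's default data collator expects.
              def __init__(self, texts, labels):
                  self.enc = tokenizer(texts, truncation=True, padding=True)
                  self.labels = labels
              def __len__(self):
                  return len(self.labels)
              def __getitem__(self, i):
                  item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
                  item["labels"] = torch.tensor(self.labels[i])
                  return item

          args = TrainingArguments(output_dir="toy_out", num_train_epochs=1)
          Trainer(model=model, args=args,
                  train_dataset=ToyDataset(texts, labels)).train()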
  8. 26 Mar, 2020 1 commit
  9. 06 Jan, 2020 2 commits
  10. 22 Dec, 2019 5 commits
  11. 21 Dec, 2019 1 commit
    • Reformat source code with black. · fa84ae26
      Aymeric Augustin authored
      This is the result of:
      
          $ black --line-length 119 examples templates transformers utils hubconf.py setup.py
      
      There are a lot of fairly long lines in the project. As a consequence,
      I'm picking the longest widely accepted line length, 119 characters.
      
      This is also Thomas' preference, because it allows for explicit variable
      names, which make the code easier to understand.
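      Black can also pick this line length up from project configuration
      rather than the command line; a minimal pyproject.toml stanza (an
      assumed equivalent, not something this commit adds):

          [tool.black]
          line-length = 119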
  12. 04 Oct, 2019 1 commit
  13. 30 Sep, 2019 2 commits
  14. 18 Sep, 2019 1 commit