1. 05 Aug, 2020 1 commit
    • Sylvain Gugger's avatar
      Tf model outputs (#6247) · c67d1a02
      Sylvain Gugger authored
      * TF outputs and test on BERT
      
      * Albert to DistilBert
      
      * All remaining TF models except T5
      
      * Documentation
      
      * One file forgotten
      
      * Add new models and fix issues
      
      * Quality improvements
      
      * Add T5
      
      * A bit of cleanup
      
      * Fix for slow tests
      
      * Style
      c67d1a02
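The commit above moved the TF models from plain tuples to structured output classes. A minimal sketch of the idea, assuming a toy `ModelOutput` class (not the library's actual implementation): values stay reachable by attribute, by key, or by integer position, so older tuple-indexing code keeps working.

```python
from collections import OrderedDict


class ModelOutput(OrderedDict):
    """Toy model-output container: attribute, key, or positional access."""

    def __getattr__(self, name):
        # Fall back to dict lookup so `out.logits` works like `out["logits"]`
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __getitem__(self, key):
        # Integer indexing mimics the old tuple-style return values
        if isinstance(key, int):
            return list(self.values())[key]
        return super().__getitem__(key)


out = ModelOutput(logits=[0.1, 0.9], hidden_states=None)
```

With this shape, `out.logits`, `out["logits"]`, and `out[0]` all return the same value.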
  2. 07 Jul, 2020 1 commit
  3. 03 Jul, 2020 1 commit
  4. 01 Jul, 2020 1 commit
  5. 26 Jun, 2020 1 commit
  6. 24 Jun, 2020 1 commit
  7. 16 Jun, 2020 1 commit
  8. 01 May, 2020 1 commit
    • Julien Chaumond's avatar
      [ci] Load pretrained models into the default (long-lived) cache · f54dc3f4
      Julien Chaumond authored
      There's an inconsistency right now where:
      - we load some models into CACHE_DIR
      - and some models in the default cache
      - and often in both, for the same models
      
      When running the RUN_SLOW tests, this takes a lot of disk space, time, and bandwidth.
      
      I'd rather always use the default cache.
      f54dc3f4
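The fix above standardizes on one long-lived cache location instead of splitting downloads between a per-run `CACHE_DIR` and the default cache. As a hedged illustration only (the helper name and the `~/.cache/models` path are hypothetical, not the library's actual logic), resolving to a single default unless the caller overrides it might look like:

```python
import os


def resolve_cache_dir(cache_dir=None):
    """Pick one cache location for pretrained weights.

    Hypothetical helper: prefer the caller's explicit directory,
    otherwise fall back to a single long-lived default cache so
    repeated test runs reuse the same downloaded files.
    """
    if cache_dir is not None:
        return cache_dir
    return os.path.join(os.path.expanduser("~"), ".cache", "models")
```

Keeping every test on the fallback path is what avoids downloading the same weights twice.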
  9. 17 Apr, 2020 1 commit
  10. 16 Apr, 2020 1 commit
    • Patrick von Platen's avatar
      [TFT5, Cache] Add cache to TFT5 (#3772) · 38f7461d
      Patrick von Platen authored
      * correct gpt2 test inputs
      
      * make style
      
      * delete modeling_gpt2 change in test file
      
      * translate from pytorch
      
      * correct tests
      
      * fix conflicts
      
      * fix conflicts
      
      * fix conflicts
      
      * fix conflicts
      
      * make tensorflow t5 caching work
      
      * make style
      
      * clean reorder cache
      
      * remove unnecessary spaces
      
      * fix test
      38f7461d
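The cache being ported here stores each position's key/value states so a decoding step only computes the new token's states instead of re-running attention over the whole prefix. A toy sketch of the mechanism, with an arithmetic stand-in for attention (this is not T5's actual computation):

```python
def attend(query, keys, values):
    # Stand-in "attention": weight each cached value by query * key
    weights = [query * k for k in keys]
    total = sum(weights) or 1
    return sum(w * v for w, v in zip(weights, values)) / total


def decode_step(token, cache):
    """One decoding step that reuses cached key/value states.

    Without a cache the model would re-encode every previous token
    at every step; with it, only the new token's key and value are
    computed and appended.
    """
    cache["keys"].append(token)    # new key for this position only
    cache["values"].append(token)  # new value for this position only
    return attend(token, cache["keys"], cache["values"]), cache
```

Each call grows the cache by one entry, which is what makes generation roughly linear in output length rather than quadratic.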
  11. 01 Apr, 2020 2 commits
  12. 30 Mar, 2020 1 commit
    • Patrick von Platen's avatar
      [T5] make decoder input ids optional for t5 training (#3521) · 75ec6c9e
      Patrick von Platen authored
      * make decoder input ids optional for t5 training
      
      * lm_labels should not be shifted in t5
      
      * add tests
      
      * finish shift right functionality for PT T5
      
      * move shift right to correct class
      
      * cleaner code
      
      * replace -100 values with pad token id
      
      * add assert statement
      
      * remove unnecessary for loop
      
      * make style
      75ec6c9e
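The "shift right" and "replace -100 values with pad token id" bullets describe how decoder inputs are derived from the labels, so users no longer need to pass `decoder_input_ids` themselves during training. A minimal sketch under those assumptions (plain lists instead of tensors; the function and parameter names are for illustration):

```python
def shift_right(labels, pad_token_id, decoder_start_token_id):
    """Build decoder inputs from labels for T5-style training.

    Shift the labels one position to the right, prepend the decoder
    start token, and replace any -100 (positions ignored by the loss)
    with the pad token so the decoder never sees -100 as an input id.
    """
    shifted = [decoder_start_token_id] + labels[:-1]
    return [pad_token_id if t == -100 else t for t in shifted]
```

For example, labels `[5, 6, -100]` with pad/start id 0 become decoder inputs `[0, 5, 6]`.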
  13. 19 Mar, 2020 1 commit
    • Patrick von Platen's avatar
      Support T5 Generation (#3228) · bbf26c4e
      Patrick von Platen authored
      
      
      * fix conflicts
      
      * update bart max length test
      
      * correct spelling mistakes
      
      * implemented model specific encode function
      
      * fix merge conflicts
      
      * better naming
      
      * save intermediate state -> need to rethink structure a bit
      
      * leave tf problem as it is for now
      
      * current version
      
      * add layers.pop
      
      * remove ipdb
      
      * make style
      
      * clean return cut decoding
      
      * remove ipdbs
      
      * Fix restoring layers in the decoder that don't exist.
      
      * push good intermediate solution for now
      
      * fix conflicts
      
      * always good to refuse to merge conflicts when rebasing
      
      * fix small bug
      
      * improve function calls
      
      * remove unused file
      
      * add correct scope behavior for t5_generate
      Co-authored-by: default avatarMorgan Funtowicz <funtowiczmo@gmail.com>
      bbf26c4e
  14. 06 Jan, 2020 2 commits
  15. 22 Dec, 2019 8 commits
  16. 21 Dec, 2019 2 commits
    • Aymeric Augustin's avatar
      Reformat source code with black. · fa84ae26
      Aymeric Augustin authored
      This is the result of:
      
          $ black --line-length 119 examples templates transformers utils hubconf.py setup.py
      
      There are a lot of fairly long lines in the project. As a consequence, I'm
      picking the longest widely accepted line length, 119 characters.
      
      This is also Thomas' preference: it allows for explicit variable
      names, which make the code easier to understand.
      fa84ae26
    • Aymeric Augustin's avatar
      Take advantage of the cache when running tests. · b670c266
      Aymeric Augustin authored
      Caching models across test cases and across runs of the test suite makes
      slow tests somewhat more bearable.
      
      Use gettempdir() instead of /tmp in tests. This makes it easier to
      change the location of the cache with semi-standard TMPDIR/TEMP/TMP
      environment variables.
      
      Fix #2222.
      b670c266
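Placing the test cache under `gettempdir()` rather than a literal `/tmp` is plain stdlib usage; a short sketch (the `model_test_cache` subdirectory name is illustrative):

```python
import os
import tempfile

# tempfile.gettempdir() honours the TMPDIR/TEMP/TMP environment
# variables, so the cache location can be moved without code changes;
# a hard-coded "/tmp" cannot, and does not exist on all platforms.
cache_root = os.path.join(tempfile.gettempdir(), "model_test_cache")
os.makedirs(cache_root, exist_ok=True)
```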
  17. 16 Dec, 2019 1 commit
  18. 10 Dec, 2019 1 commit
  19. 08 Nov, 2019 1 commit
  20. 06 Nov, 2019 1 commit
  21. 09 Oct, 2019 1 commit
  22. 08 Oct, 2019 1 commit
  23. 04 Oct, 2019 1 commit
    • keskarnitish's avatar
      Adding CTRL (squashed commit) · dbed1c5d
      keskarnitish authored
      adding conversion script
      
      adding first draft of modeling & tokenization
      
      adding placeholder for test files
      
      bunch of changes
      
      registering the tokenizer/model/etc
      
      tests
      
      change link; something is very VERY wrong here
      
      weird end-of-word thingy going on
      
      i think the tokenization works now; wrote the unit tests
      
      overall structure works; load w next
      
      the monster is alive!
      
      works after some cleanup as well
      
      adding emacs autosave to gitignore
      
      currently only supporting the 48 layer one; seems to infer fine on my macbook
      
      cleanup
      
      fixing some documentation
      
      fixing some documentation
      
      tests passing?
      
      now works on CUDA also
      
      adding greedy?
      
      adding greedy sampling
      
      works well
      dbed1c5d
  24. 26 Sep, 2019 1 commit
  25. 09 Sep, 2019 3 commits
  26. 08 Sep, 2019 2 commits
  27. 05 Sep, 2019 1 commit