1. 26 Aug, 2020 1 commit
  2. 19 Aug, 2020 1 commit
  3. 14 Aug, 2020 1 commit
  4. 12 Aug, 2020 1 commit
  5. 10 Aug, 2020 1 commit
  6. 04 Aug, 2020 1 commit
    • Encoder decoder config docs (#6195) · 7ea9b2db
      Andrés Felipe Cruz authored
      
      
      * Adding docs for how to load encoder_decoder pretrained model with individual config objects
      
      * Adding docs for loading encoder_decoder config from pretrained folder
      
      * Fixing W293: blank line contains whitespace
      
      * Update src/transformers/modeling_encoder_decoder.py
      
      * Update src/transformers/modeling_encoder_decoder.py
      
      * Update src/transformers/modeling_encoder_decoder.py
      
      * Apply suggestions from code review
      
      model file should only show examples for how to load/save a model
      
      * Update src/transformers/configuration_encoder_decoder.py
      
      * Update src/transformers/configuration_encoder_decoder.py
      
      * fix space
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
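The loading pattern these docs describe composes a joint encoder-decoder config from two individual config objects; in transformers the entry point is `EncoderDecoderConfig.from_encoder_decoder_configs`. Below is a minimal pure-Python sketch of that pattern — the `Config` class and field names here are illustrative stand-ins, not the real transformers API:

```python
class Config:
    """Minimal stand-in for a model config object (illustrative only)."""

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def to_dict(self):
        return dict(self.__dict__)


class EncoderDecoderConfig(Config):
    @classmethod
    def from_encoder_decoder_configs(cls, encoder_config, decoder_config):
        # The decoder half of an encoder-decoder stack must run in
        # decoder mode, so the joint config flips that flag before
        # storing both sub-configs.
        decoder_config.is_decoder = True
        return cls(encoder=encoder_config.to_dict(),
                   decoder=decoder_config.to_dict())


enc = Config(hidden_size=768)
dec = Config(hidden_size=768)
joint = EncoderDecoderConfig.from_encoder_decoder_configs(enc, dec)
assert joint.decoder["is_decoder"] is True
```

The real classmethod additionally serializes the joint config to and from a pretrained folder, which is what the second bullet above documents.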
  7. 31 Jul, 2020 1 commit
  8. 30 Jul, 2020 1 commit
    • Switch from return_tuple to return_dict (#6138) · 91cb9546
      Sylvain Gugger authored
      
      
      * Switch from return_tuple to return_dict
      
      * Fix test
      
      * [WIP] Test TF Flaubert + Add {XLM, Flaubert}{TokenClassification, MultipleC… (#5614)
      
      * Test TF Flaubert + Add {XLM, Flaubert}{TokenClassification, MultipleChoice} models and tests
      
      * AutoModels
      
      
      Tiny tweaks
      
      * Style
      
      * Final changes before merge
      
      * Re-order for simpler review
      
      * Final fixes
      
      * Addressing @sgugger's comments
      
      * Test MultipleChoice
      
      * Rework TF trainer (#6038)
      
      * Fully rework training/prediction loops
      
      * fix method name
      
      * Fix variable name
      
      * Fix property name
      
      * Fix scope
      
      * Fix method name
      
      * Fix tuple index
      
      * Fix tuple index
      
      * Fix indentation
      
      * Fix variable name
      
      * fix eval before log
      
      * Add drop remainder for test dataset
      
      * Fix step number + fix logging datetime
      
      * fix eval loss value
      
      * use global step instead of step + fix logging at step 0
      
      * Fix logging datetime
      
      * Fix global_step usage
      
      * Fix breaking loop + logging datetime
      
      * Fix step in prediction loop
      
      * Fix step breaking
      
      * Fix train/test loops
      
      * Force TF at least 2.2 for the trainer
      
      * Use assert_cardinality to facilitate the dataset size computation
      
      * Log steps per epoch
      
      * Make tfds compliant with TPU
      
      * Make tfds compliant with TPU
      
      * Use TF dataset enumerate instead of the Python one
      
      * revert previous commit
      
      * Fix data_dir
      
      * Apply style
      
      * rebase on master
      
      * Address Sylvain's comments
      
      * Address Sylvain's and Lysandre comments
      
      * Trigger CI
      
      * Remove unused import
      
      * Switch from return_tuple to return_dict
      
      * Fix test
      
      * Add recent model
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      Co-authored-by: Julien Plu <plu.julien@gmail.com>
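The return_tuple → return_dict switch changes how callers consume model outputs: instead of unpacking by position, they read named fields. A tiny self-contained sketch of the two calling conventions — the toy `forward` function below is illustrative, not the transformers API:

```python
from types import SimpleNamespace


def forward(inputs, return_dict=False):
    """Toy forward pass that supports both output conventions."""
    loss = sum(inputs) * 0.0          # placeholder loss
    logits = [x * 2 for x in inputs]  # placeholder logits
    if not return_dict:
        # Legacy behavior: a plain tuple, consumed by position.
        return (loss, logits)
    # New behavior: a named output object, consumed by attribute.
    return SimpleNamespace(loss=loss, logits=logits)


# Old style: positions are implicit and easy to get wrong.
loss, logits = forward([1, 2, 3])

# New style: fields are named at the call site.
out = forward([1, 2, 3], return_dict=True)
assert out.logits == logits == [2, 4, 6]
```

The named form is what made the auto-generated output docstrings in #5438 possible, since each field can be documented by name.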
  9. 28 Jul, 2020 1 commit
  10. 10 Jul, 2020 1 commit
    • Change model outputs types to self-document outputs (#5438) · edfd82f5
      Sylvain Gugger authored
      * [WIP] Proposal for model outputs
      
      * All Bert models
      
      * Make CI green maybe?
      
      * Fix ONNX test
      
      * Isolate ModelOutput from pt and tf
      
      * Formatting
      
      * Add Electra models
      
      * Auto-generate docstrings from outputs
      
      * Add TF outputs
      
      * Add some BERT models
      
      * Revert TF side
      
      * Remove last traces of TF changes
      
      * Fail with a clear error message
      
      * Add Albert and work through Bart
      
      * Add CTRL and DistilBert
      
      * Formatting
      
      * Progress on Bart
      
      * Renames and finish Bart
      
      * Formatting
      
      * Fix last test
      
      * Add DPR
      
      * Finish Electra and add FlauBERT
      
      * Add GPT2
      
      * Add Longformer
      
      * Add MMBT
      
      * Add MobileBert
      
      * Add GPT
      
      * Formatting
      
      * Add Reformer
      
      * Add Roberta
      
      * Add T5
      
      * Add Transformer XL
      
      * Fix test
      
      * Add XLM + fix XLMForTokenClassification
      
      * Style + XLMRoberta
      
      * Add XLNet
      
      * Formatting
      
      * Add doc of return_tuple arg
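The self-documenting outputs introduced here behave like both a tuple and a named record. A rough pure-Python sketch of the idea (not the actual transformers `ModelOutput` implementation):

```python
from dataclasses import dataclass, fields
from typing import Optional, Tuple


@dataclass
class ModelOutput(dict):
    """Sketch of a self-documenting output: fields are reachable by
    attribute, by key, or by integer index, and fields left as None
    are skipped so tuple-style unpacking keeps working."""

    def __post_init__(self):
        # Copy the set dataclass fields into the underlying dict.
        for f in fields(self):
            value = getattr(self, f.name)
            if value is not None:
                self[f.name] = value

    def __getitem__(self, key):
        if isinstance(key, int):
            return self.to_tuple()[key]
        return super().__getitem__(key)

    def to_tuple(self):
        return tuple(self.values())


@dataclass
class SequenceClassifierOutput(ModelOutput):
    loss: Optional[float] = None
    logits: Optional[Tuple[float, ...]] = None


out = SequenceClassifierOutput(logits=(0.1, 0.9))
# Attribute, key, and index access agree; the unset loss is simply absent.
assert out.logits == out["logits"] == out[0] == (0.1, 0.9)
```

Because each output type is a dataclass, its fields can be enumerated to auto-generate docstrings, which is what the "Auto-generate docstrings from outputs" step above refers to.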
  11. 25 Jun, 2020 1 commit
  12. 17 Jun, 2020 1 commit
  13. 12 Jun, 2020 1 commit
  14. 10 Jun, 2020 1 commit
    • Split LMBert model in two (#4874) · 1e2631d6
      Sylvain Gugger authored
      * Split LMBert model in two
      
      * Fix example
      
      * Remove lm_labels
      
      * Adapt tests, refactor prepare_for_generation
      
      * Fix merge
      
      * Hide BertLMHeadModel
  15. 03 Jun, 2020 1 commit
  16. 29 May, 2020 1 commit
  17. 25 May, 2020 1 commit
  18. 28 Apr, 2020 2 commits
    • add examples to doc (#4045) · 9a0a8c1c
      Patrick von Platen authored
    • Clean Encoder-Decoder models with Bart/T5-like API and add generate possibility (#3383) · fa49b9af
      Patrick von Platen authored
      * change encoder decoder style to bart & t5 style
      
      * make encoder decoder generation dummy work for bert
      
      * make style
      
      * clean init config in encoder decoder
      
      * add tests for encoder decoder models
      
      * refactor and add last tests
      
      * refactor and add last tests
      
      * fix attn masks for bert encoder decoder
      
      * make style
      
      * refactor prepare inputs for Bert
      
      * refactor
      
      * finish encoder decoder
      
      * correct typo
      
      * add docstring to config
      
      * finish
      
      * add tests
      
      * better naming
      
      * make style
      
      * fix flake8
      
      * clean docstring
      
      * make style
      
      * rename
  19. 19 Mar, 2020 1 commit
  20. 26 Feb, 2020 1 commit
  21. 23 Feb, 2020 1 commit
  22. 20 Feb, 2020 1 commit
    • New BartModel (#2745) · 53ce3854
      Sam Shleifer authored
      * Results same as fairseq
      * Wrote a ton of tests
      * Struggled with API signatures
      * Added some docs
      
  23. 04 Feb, 2020 1 commit
  24. 23 Jan, 2020 1 commit
  25. 15 Jan, 2020 1 commit
  26. 06 Jan, 2020 2 commits
  27. 05 Jan, 2020 1 commit
  28. 22 Dec, 2019 5 commits
    • Remove __future__ imports. · c824d15a
      Aymeric Augustin authored
    • Move source code inside a src subdirectory. · 6be7cdda
      Aymeric Augustin authored
      Previously, transformers was importable simply because the CWD was the
      root of the git repository, but not from other directories, which led to
      inconsistent behavior, especially in examples. Moving the source under
      src/ prevents this.
      
      Once you fetch this commit, in your dev environment, you must run:
      
          $ pip uninstall transformers
          $ pip install -e .
    • Fix F401 flake8 warning (x88 / 116). · 783a6169
      Aymeric Augustin authored
      This change is mostly autogenerated with:
      
          $ python -m autoflake --in-place --recursive --remove-all-unused-imports --ignore-init-module-imports examples templates transformers utils hubconf.py setup.py
      
      I made minor changes in the generated diff.
    • Fix F401 flake8 warning (x152 / 268). · 80327a13
      Aymeric Augustin authored
      This change is mostly autogenerated with:
      
          $ python -m autoflake --in-place --recursive examples templates transformers utils hubconf.py setup.py
      
      I made minor changes in the generated diff.
    • Sort imports with isort. · 158e82e0
      Aymeric Augustin authored
      This is the result of:
      
          $ isort --recursive examples templates transformers utils hubconf.py setup.py
  29. 21 Dec, 2019 1 commit
    • Reformat source code with black. · fa84ae26
      Aymeric Augustin authored
      This is the result of:
      
          $ black --line-length 119 examples templates transformers utils hubconf.py setup.py
      
      There are a lot of fairly long lines in the project. As a consequence, I'm
      picking the longest widely accepted line length, 119 characters.
      
      This is also Thomas' preference, because it allows for explicit variable
      names, to make the code easier to understand.
  30. 20 Dec, 2019 2 commits
  31. 18 Dec, 2019 1 commit
  32. 12 Dec, 2019 1 commit
  33. 10 Dec, 2019 1 commit