1. 10 Aug, 2020 4 commits
  2. 07 Aug, 2020 2 commits
  3. 06 Aug, 2020 1 commit
    • Add strip_accents to basic BertTokenizer. (#6280) · d5bc32ce
      Philip May authored
      * Add strip_accents to basic tokenizer
      
      * Add tests for strip_accents.
      
      * fix style with black
      
      * Fix strip_accents test
      
      * empty commit to trigger CI
      
      * Improved strip_accents check
      
      * Add code quality with is not False
      d5bc32ce
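The behaviour behind the new `strip_accents` flag can be sketched in a few lines; this mirrors the Unicode-NFD approach BERT-style basic tokenizers use, though the standalone function here is illustrative, not the library's API:

```python
import unicodedata

def strip_accents(text):
    # Decompose characters (NFD), then drop combining marks
    # (Unicode category "Mn"), leaving only the base letters.
    return "".join(
        ch for ch in unicodedata.normalize("NFD", text)
        if unicodedata.category(ch) != "Mn"
    )

print(strip_accents("héllo wörld"))  # hello world
```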
  4. 05 Aug, 2020 2 commits
    • Tf model outputs (#6247) · c67d1a02
      Sylvain Gugger authored
      * TF outputs and test on BERT
      
      * Albert to DistilBert
      
      * All remaining TF models except T5
      
      * Documentation
      
      * One file forgotten
      
      * Add new models and fix issues
      
      * Quality improvements
      
      * Add T5
      
      * A bit of cleanup
      
      * Fix for slow tests
      
      * Style
      c67d1a02
    • Add SequenceClassification and MultipleChoice TF models to Electra (#6227) · 33966811
      Julien Plu authored
      * Add SequenceClassification and MultipleChoice TF models to Electra
      
      * Apply style
      
      * Add summary_proj_to_labels to Electra config
      
      * Finally mirroring the PT version of these models
      
      * Apply style
      
      * Fix Electra test
      33966811
  5. 04 Aug, 2020 3 commits
  6. 03 Aug, 2020 1 commit
    • Fix saved model creation (#5468) · 9996f697
      Julien Plu authored
      * Fix TF Serving when output_hidden_states and output_attentions are True
      
      * Add tests for saved model creation + bug fix for multiple choices models
      
      * remove unused import
      
      * Fix the input for several layers
      
      * Fix test
      
      * Fix conflict printing
      
      * Apply style
      
      * Fix XLM and Flaubert for TensorFlow
      
      * Apply style
      
      * Fix TF check version
      
      * Apply style
      
      * Trigger CI
      9996f697
  7. 31 Jul, 2020 3 commits
  8. 30 Jul, 2020 3 commits
    • typos (#6162) · a2f6d521
      Stas Bekman authored
      * 2 small typos
      
      * more typos
      
      * correct path
      a2f6d521
    • Addition of a DialoguePipeline (#5516) · e642c789
      guillaume-be authored
      * initial commit for pipeline implementation
      
      Addition of input processing and history concatenation
      
      * Conversation pipeline tested and working for single & multiple conversation inputs
      
      * Added docstrings for dialogue pipeline
      
      * Addition of dialogue pipeline integration tests
      
      * Delete test_t5.py
      
      * Fixed max code length
      
      * Updated styling
      
      * Fixed test broken by formatting tools
      
      * Removed unused import
      
      * Added unit test for DialoguePipeline
      
      * Fixed Tensorflow compatibility
      
      * Fixed multi-framework support using framework flag
      
      * - Fixed docstring
      - Added `min_length_for_response` as an initialization parameter
      - Renamed `*args` to `conversations`, `conversations` being a `Conversation` or a `List[Conversation]`
      - Updated truncation to truncate entire segments of conversations, instead of cutting in the middle of a user/bot input
      
      * - renamed pipeline name from dialogue to conversational
      - removed hardcoded default value of 1000 and use config.max_length instead
      - added `append_response` and `set_history` method to the Conversation class to avoid direct fields mutation
      - fixed bug in history truncation method
      
      * - Updated ConversationalPipeline to accept only active conversations (otherwise a ValueError is raised)
      
      * - Simplified input tensor conversion
      
      * - Updated attention_mask value for Tensorflow compatibility
      
      * - Updated last dialogue reference to conversational & fixed integration tests
      
      * Fixed conflict with master
      
      * Updates following review comments
      
      * Updated formatting
      
      * Added Conversation and ConversationalPipeline to the library __init__, addition of docstrings for Conversation, added both to the docs
      
      * Update src/transformers/pipelines.py
      
      Updated docstring following review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      e642c789
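The segment-wise truncation described above (dropping whole user/bot inputs rather than cutting mid-utterance) can be sketched as follows; `truncate_history` is a hypothetical helper name, not the pipeline's actual method:

```python
def truncate_history(segments, max_length):
    """Drop the oldest whole segments until the total token count
    fits in max_length, never cutting inside a user/bot input."""
    total = sum(len(s) for s in segments)
    while segments and total > max_length:
        total -= len(segments[0])
        segments = segments[1:]
    return segments

# The oldest exchange is dropped as a unit, not split in the middle:
print(truncate_history([[1, 2, 3], [4, 5], [6]], 3))
```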
    • Switch from return_tuple to return_dict (#6138) · 91cb9546
      Sylvain Gugger authored
      * Switch from return_tuple to return_dict
      
      * Fix test
      
      * [WIP] Test TF Flaubert + Add {XLM, Flaubert}{TokenClassification, MultipleC… (#5614)
      
      * Test TF Flaubert + Add {XLM, Flaubert}{TokenClassification, MultipleChoice} models and tests
      
      * AutoModels
      
      
      Tiny tweaks
      
      * Style
      
      * Final changes before merge
      
      * Re-order for simpler review
      
      * Final fixes
      
      * Addressing @sgugger's comments
      
      * Test MultipleChoice
      
      * Rework TF trainer (#6038)
      
      * Fully rework training/prediction loops
      
      * fix method name
      
      * Fix variable name
      
      * Fix property name
      
      * Fix scope
      
      * Fix method name
      
      * Fix tuple index
      
      * Fix tuple index
      
      * Fix indentation
      
      * Fix variable name
      
      * fix eval before log
      
      * Add drop remainder for test dataset
      
      * Fix step number + fix logging datetime
      
      * fix eval loss value
      
      * use global step instead of step + fix logging at step 0
      
      * Fix logging datetime
      
      * Fix global_step usage
      
      * Fix breaking loop + logging datetime
      
      * Fix step in prediction loop
      
      * Fix step breaking
      
      * Fix train/test loops
      
      * Force TF at least 2.2 for the trainer
      
      * Use assert_cardinality to facilitate the dataset size computation
      
      * Log steps per epoch
      
      * Make tfds compliant with TPU
      
      * Make tfds compliant with TPU
      
      * Use TF dataset enumerate instead of the Python one
      
      * revert previous commit
      
      * Fix data_dir
      
      * Apply style
      
      * rebase on master
      
      * Address Sylvain's comments
      
      * Address Sylvain's and Lysandre comments
      
      * Trigger CI
      
      * Remove unused import
      
      * Switch from return_tuple to return_dict
      
      * Fix test
      
      * Add recent model
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      Co-authored-by: Julien Plu <plu.julien@gmail.com>
      91cb9546
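The `return_dict` switch makes models return output objects whose fields are reachable both by name and by position, so older tuple-based code keeps working. A minimal stand-in (not the library's actual class) behaves like this:

```python
from collections import OrderedDict

class ModelOutput(OrderedDict):
    # Fields are reachable by attribute name and by integer index,
    # so code written against tuples keeps working after the switch.
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __getitem__(self, key):
        if isinstance(key, int):
            return list(self.values())[key]
        return super().__getitem__(key)

out = ModelOutput(loss=0.1, logits=[1.0, 2.0])
print(out.logits, out[1])
```

Positional access (`out[0]` is the first field) coexists with named access, which avoids fragile tuple unpacking in downstream code.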
  9. 29 Jul, 2020 2 commits
  10. 28 Jul, 2020 3 commits
  11. 27 Jul, 2020 1 commit
    • Zero shot classification pipeline (#5760) · 3deffc1d
      Joe Davison authored
      * add initial zero-shot pipeline
      
      * change default args
      
      * update default template
      
      * add label string splitting
      
      * add str labels support, remove nli from name
      
      * style
      
      * add input validation and working tf defaults
      
      * tests
      
      * quality check
      
      * add docstring to __call__
      
      * add slow tests
      
      * Change truncation to only_first
      
      also lower precision on tests for readability
      
      * style
      3deffc1d
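The pipeline frames classification as NLI: the input sequence is the premise, and each candidate label is slotted into a hypothesis template, with a comma-separated label string split into a list ("label string splitting" above). A framework-free sketch, with an illustrative helper name and a template along the lines of the pipeline's default:

```python
def build_nli_pairs(sequence, labels, template="This example is {}."):
    # Accept a comma-separated string as well as a list of labels.
    if isinstance(labels, str):
        labels = [label.strip() for label in labels.split(",")]
    # Premise is the input; hypothesis is the filled-in template.
    return [(sequence, template.format(label)) for label in labels]

print(build_nli_pairs("I love this film", "positive, negative"))
```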
  12. 23 Jul, 2020 2 commits
  13. 20 Jul, 2020 2 commits
    • DataParallel fixes (#5733) · 35cb101e
      Stas Bekman authored
      * DataParallel fixes:
      
      1. switched to a more precise check
      -        if self.args.n_gpu > 1:
      +        if isinstance(model, nn.DataParallel):
      
      2. fix tests - require the same fixup under DataParallel as the training module
      
      * another fix
      35cb101e
    • Trainer support for iterabledataset (#5834) · 290b6e18
      Pradhy729 authored
      * Don't pass sampler for iterable dataset
      
      * Added check for test and eval dataloaders.
      
      * Formatting
      
      * Cleaner if nesting.
      
      * Added test for trainer and iterable dataset
      
      * Formatting for test
      
      * Fixed import when torch is available only.
      
      * Added require torch decorator to helper class
      
      * Moved dataset class inside unittest
      
      * Removed nested if and changed model in test
      
      * Checking torch availability for IterableDataset
      290b6e18
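The core check above, in framework-free form: map-style datasets, which expose a length, can be shuffled through a sampler, while iterable-style datasets must not be handed one. This stand-in uses the presence of `__len__` as a proxy for PyTorch's `isinstance(dataset, IterableDataset)` test, and a plain index list as a stand-in for a random sampler:

```python
def get_train_sampler(dataset):
    # Iterable-style datasets have no known length and define their
    # own iteration order, so no sampler may be passed for them.
    if not hasattr(dataset, "__len__"):
        return None
    # Stand-in for a RandomSampler over a map-style dataset.
    return list(range(len(dataset)))

def make_iterable():
    yield from [1, 2, 3]

print(get_train_sampler([10, 20, 30]), get_train_sampler(make_iterable()))
```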
  14. 18 Jul, 2020 3 commits
    • Xlnet outputs (#5883) · 4b506a37
      Teven authored
      Slightly breaking change: this changes the functionality of `use_cache` in XLNet. If `use_cache` is True and `mem_len` is 0 or None (which is the case in the base model config), the model behaves like GPT-2 and returns mems to be used as past in generation. At training time, `use_cache` is overridden and always True.
      4b506a37
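The behaviour described above condenses to a small decision rule; this is an illustrative reading of the commit message, not the model's actual code:

```python
def returns_mems(use_cache, mem_len, training):
    # At training time use_cache is overridden and always True.
    if training:
        return True
    # At inference, even with mem_len of 0/None (the base config's
    # default), use_cache=True makes the model return mems so they
    # can be fed back as `past` in GPT-2-style generation.
    return bool(use_cache)

print(returns_mems(use_cache=True, mem_len=None, training=False))  # True
```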
    • Revert "Xlnet outputs (#5881)" (#5882) · a5580924
      Teven authored
      This reverts commit 13be4872.
      a5580924
    • Xlnet outputs (#5881) · 13be4872
      Teven authored
      Slightly breaking change: this changes the functionality of `use_cache` in XLNet. If `use_cache` is True and `mem_len` is 0 or None (which is the case in the base model config), the model behaves like GPT-2 and returns mems to be used as past in generation. At training time, `use_cache` is overridden and always True.
      13be4872
  15. 17 Jul, 2020 3 commits
    • Revert "XLNet `use_cache` refactor (#5770)" (#5854) · 615be03f
      Teven authored
      This reverts commit 0b2da0e5.
      615be03f
    • XLNet `use_cache` refactor (#5770) · 0b2da0e5
      Teven authored
      Slightly breaking change: this changes the functionality of `use_cache` in XLNet. If `use_cache` is True and `mem_len` is 0 or None (which is the case in the base model config), the model behaves like GPT-2 and returns mems to be used as past in generation. At training time, `use_cache` is overridden and always True.
      0b2da0e5
    • [Reformer] - Cache hidden states and buckets to speed up inference (#5578) · 9d37c56b
      Patrick von Platen authored
      * fix merge rebase
      
      * add intermediate reformer code
      
      * save intermediate caching results
      
      * save intermediate
      
      * save intermediate results
      
      * save intermediate
      
      * upload next step
      
      * fix generate tests
      
      * make tests work
      
      * add named tuple output
      
      * Apply suggestions from code review
      
      * fix use_cache for False case
      
      * fix tensor to gpu
      
      * fix tensor to gpu
      
      * refactor
      
      * refactor and make style
      9d37c56b
  16. 16 Jul, 2020 1 commit
  17. 15 Jul, 2020 3 commits
  18. 14 Jul, 2020 1 commit