1. 16 Sep, 2020 1 commit
  2. 15 Sep, 2020 1 commit
  3. 14 Sep, 2020 3 commits
  4. 11 Sep, 2020 2 commits
    • Compute loss method (#7074) · 4cbd50e6
      Sylvain Gugger authored
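      The change makes the Trainer loss computation overridable through a compute_loss method. A minimal sketch of how such a hook can be used (the MultilabelTrainer name and the BCE loss below are illustrative assumptions, not part of the PR):

      from torch.nn import BCEWithLogitsLoss
      from transformers import Trainer

      class MultilabelTrainer(Trainer):  # hypothetical subclass, for illustration only
          def compute_loss(self, model, inputs):
              # Pop the labels so the model's forward pass does not compute its own loss
              labels = inputs.pop("labels")
              outputs = model(**inputs)
              logits = outputs[0]
              # Swap the default cross-entropy for a multi-label loss
              return BCEWithLogitsLoss()(logits, labels.float())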
    • Automate the lists in auto-xxx docs (#7061) · e841b75d
      Sylvain Gugger authored
      * More readable dict
      
      * More nlp -> datasets
      
      * Revert "More nlp -> datasets"
      
      This reverts commit 3cd1883d226c63c4a686fc1fed35f2cd586ebe45.
      
      * Automate the lists in auto-xxx docs
      
      * More readable dict
      
      * Revert "More nlp -> datasets"
      
      This reverts commit 3cd1883d226c63c4a686fc1fed35f2cd586ebe45.
      
      * Automate the lists in auto-xxx docs
      
      * nlp -> datasets
      
      * Fix new key
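      The idea behind the change is to generate the model lists in the auto-class docstrings from the mapping dicts instead of maintaining them by hand. A rough sketch of that pattern (the MODEL_MAPPING_NAMES dict and build_doc_list helper are illustrative, not the exact objects touched by the PR):

      # Illustrative mapping from model type to class names; the library keeps
      # several such dicts for its different Auto* classes.
      MODEL_MAPPING_NAMES = {
          "bert": ("BertConfig", "BertModel"),
          "gpt2": ("GPT2Config", "GPT2Model"),
          "t5": ("T5Config", "T5Model"),
      }

      def build_doc_list(mapping):
          # Turn the dict into the bulleted list inserted into a docstring.
          return "\n".join(
              f"    - **{model_type}** -- :class:`~transformers.{model_cls}` ({config_cls} configuration)"
              for model_type, (config_cls, model_cls) in sorted(mapping.items())
          )

      print(build_doc_list(MODEL_MAPPING_NAMES))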
  5. 10 Sep, 2020 5 commits
  6. 09 Sep, 2020 1 commit
  7. 08 Sep, 2020 2 commits
  8. 03 Sep, 2020 1 commit
  9. 02 Sep, 2020 2 commits
  10. 01 Sep, 2020 6 commits
  11. 27 Aug, 2020 1 commit
  12. 26 Aug, 2020 1 commit
  13. 25 Aug, 2020 1 commit
  14. 24 Aug, 2020 2 commits
  15. 21 Aug, 2020 4 commits
  16. 20 Aug, 2020 2 commits
    • add intro to nlp lib & dataset links to custom datasets tutorial (#6583) · 039d8d65
      Joe Davison authored
      * add intro to nlp lib + links
      
      * unique links...
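      For context, the nlp library (later renamed to datasets) that the tutorial now introduces loads ready-made datasets in a single call. A minimal example, using the public "imdb" dataset purely as an illustration:

      # The `nlp` package was later renamed to `datasets`; the call is the same there.
      from nlp import load_dataset

      # Download and cache the IMDB reviews dataset, then look at one training example.
      dataset = load_dataset("imdb")
      print(dataset["train"][0]["text"][:200])
      print(dataset["train"][0]["label"])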
    • Docs copy button misses ... prefixed code (#6518) · cabfdfaf
      Romain Rigaux authored
      Tested in a local build of the docs.
      
      For example, just above https://huggingface.co/transformers/task_summary.html#causal-language-modeling

      With the fix, the copy button copies the full code, e.g.
      
      for token in top_5_tokens:
          print(sequence.replace(tokenizer.mask_token, tokenizer.decode([token])))
      
      Instead of only the first line, as it does now:
      
      for token in top_5_tokens:

      The docs example in question, with its prompts and expected output:

      >>> for token in top_5_tokens:
      ...     print(sequence.replace(tokenizer.mask_token, tokenizer.decode([token])))
      Distilled models are smaller than the models they mimic. Using them instead of the large versions would help reduce our carbon footprint.
      Distilled models are smaller than the models they mimic. Using them instead of the large versions would help increase our carbon footprint.
      Distilled models are smaller than the models they mimic. Using them instead of the large versions would help decrease our carbon footprint.
      Distilled models are smaller than the models they mimic. Using them instead of the large versions would help offset our carbon footprint.
      Distilled models are smaller than the models they mimic. Using them instead of the large versions would help improve our carbon footprint.
      
      Docs for the option fix:
      https://sphinx-copybutton.readthedocs.io/en/latest/
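      For reference, the option documented there is set in the Sphinx conf.py; a hedged sketch of the kind of change involved (treating both the >>> and ... prompts as a regexp so continuation lines get copied too):

      # docs conf.py: strip interpreter prompts, including "..." continuation lines,
      # so the copy button grabs the whole example rather than only the first line.
      copybutton_prompt_text = r">>> |\.\.\. "
      copybutton_prompt_is_regexp = True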
  17. 19 Aug, 2020 1 commit
  18. 18 Aug, 2020 4 commits