30 Nov, 2020 · 2 commits
    • Remove num_tokens · e3031e84
      Leo Gao authored
    • Refactor to remove generate and fix some bad tokenization · 90e50b4c
      Leo Gao authored
      In particular, the following assumptions are FALSE in general:
      tokenize(context + continuation) = tokenize(context) + tokenize(continuation)
      len(tokenize(context + continuation)) = len(tokenize(context)) + len(tokenize(continuation))
      tokenize(context + continuation)[:len(tokenize(context))] = tokenize(context)
      
      So we need to tip-toe around the problem: tokenize carefully, and never assume token sequences can be split or concatenated at the context/continuation boundary.
      
      In particular, using Fast is not just for performance; the behaviour of GPT2Tokenizer differs between Transformers 2 and 3, while GPT2TokenizerFast's does not.
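      To make the failure concrete, here is a minimal sketch (the mid-word split is an illustrative choice, not from the commit) showing how all three assumptions can break with GPT2TokenizerFast:

          # Minimal sketch: BPE merges can cross the context/continuation
          # boundary, so tokenizing the pieces separately need not match
          # tokenizing the whole string.
          from transformers import GPT2TokenizerFast

          tok = GPT2TokenizerFast.from_pretrained("gpt2")

          context, continuation = "hel", "lo"  # split chosen to fall mid-word

          whole = tok.encode(context + continuation)              # "hello" typically encodes as a single token
          parts = tok.encode(context) + tok.encode(continuation)  # separate pieces merge differently

          print(whole == parts)            # assumption 1: often False
          print(len(whole) == len(parts))  # assumption 2: often False
          print(whole[:len(tok.encode(context))] == tok.encode(context))  # assumption 3: often False

      With the GPT-2 vocabulary this split should produce different token sequences, which is exactly why per-piece tokenizations cannot be naively concatenated.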