- 27 Mar, 2021 1 commit
  Leo Gao authored
- 26 Mar, 2021 3 commits
- 06 Mar, 2021 1 commit
  Leo Gao authored
- 19 Feb, 2021 1 commit
  Leo Gao authored
- 14 Feb, 2021 1 commit
  Leo Gao authored
- 11 Feb, 2021 2 commits
- 10 Feb, 2021 1 commit
  Leo Gao authored
- 08 Feb, 2021 1 commit
  Leo Gao authored
- 05 Feb, 2021 1 commit
  Leo Gao authored
- 01 Feb, 2021 1 commit
  Leo Gao authored
- 28 Jan, 2021 1 commit
  Leo Gao authored
- 22 Jan, 2021 2 commits
- 09 Jan, 2021 1 commit
  Leo Gao authored
- 03 Jan, 2021 1 commit
  Leo Gao authored
- 28 Dec, 2020 2 commits
- 30 Nov, 2020 2 commits
  Leo Gao authored
  Leo Gao authored
    In particular, the following assumptions are FALSE in general:
    - tokenize(context + continuation) = tokenize(context) + tokenize(continuation)
    - len(tokenize(context + continuation)) = len(tokenize(context)) + len(tokenize(continuation))
    - tokenize(context + continuation)[:len(tokenize(context))] = tokenize(context)
    So we need to tiptoe around the problem and be careful about how we tokenize. In particular, using the Fast tokenizer is not just a performance choice: the behaviour of GPT2Tokenizer differs across Transformers 2 and 3, while GPT2TokenizerFast's does not.
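    A minimal sketch (not part of the commit; assumes the `transformers` library and the `gpt2` vocabulary are available) showing all three assumptions fail on a single input:

    ```python
    from transformers import GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")

    context, continuation = "Hello wor", "ld"

    whole = tok.encode(context + continuation)             # encodes "Hello world"
    parts = tok.encode(context) + tok.encode(continuation)

    # BPE merges across the concatenation point ("wor" + "ld" fuses
    # into the single token " world"), so:
    assert whole != parts                                  # not additive
    assert len(whole) != len(parts)                        # lengths differ
    assert whole[:len(tok.encode(context))] != tok.encode(context)  # not a prefix
    ```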
- 31 Oct, 2020 1 commit
  Leo Gao authored
- 04 Oct, 2020 1 commit
  Leo Gao authored
    TODO: still need to add `until` everywhere
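    A hypothetical sketch of what honouring `until` could look like, assuming (this is an inference, not stated in the commit) that `until` holds the stop sequence(s) at which generated text should be cut off; the helper name is invented for illustration:

    ```python
    def truncate_at_stops(generated: str, until: list[str]) -> str:
        """Cut the generation at the earliest occurrence of any stop sequence."""
        cut = len(generated)
        for stop in until:
            idx = generated.find(stop)
            if idx != -1:
                cut = min(cut, idx)
        return generated[:cut]

    # e.g. keep only the text before the first blank line:
    print(truncate_at_stops("42\n\nQ: next question", until=["\n\n"]))  # -> "42"
    ```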
- 17 Sep, 2020 1 commit
  Jason Phang authored
- 14 Sep, 2020 2 commits
  Jason Phang authored
  Jason Phang authored
- 07 Sep, 2020 5 commits
  Jason Phang authored
  Jason Phang authored
  Jason Phang authored
  Jason Phang authored
  Jason Phang authored