Unverified commit 6dc41d9f authored by Suraj Patil, committed by GitHub

add a note about tokenizer (#13696)

parent 7c7d2ec9
@@ -34,6 +34,11 @@ Tips:
>>> model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16)
- Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer. The extra
  tokens are added for the sake of efficiency on TPUs. To avoid a mismatch between the embedding matrix size and the
  vocab size, the tokenizer for [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B) contains 143 extra tokens
  ``<|extratoken_1|>... <|extratoken_143|>``, so the ``vocab_size`` of the tokenizer also becomes 50400.
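
  A minimal sketch, not part of the original patch, of how one might verify this match with the public
  ``transformers`` API; it assumes the ``EleutherAI/gpt-j-6B`` checkpoint from the Hub:

  >>> from transformers import AutoTokenizer, GPTJForCausalLM

  >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
  >>> model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16)

  >>> # 50257 GPT-2 tokens + 143 <|extratoken_*|> tokens = 50400
  >>> len(tokenizer)
  50400
  >>> # the embedding matrix has one row per vocab entry, so no resizing is needed
  >>> model.get_input_embeddings().weight.shape[0]
  50400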
Generation
_______________________________________________________________________________________________________________________