chenpangpang / transformers

Commit 6dc41d9f (unverified), authored Sep 23, 2021 by Suraj Patil; committed via GitHub Sep 22, 2021
add a note about tokenizer (#13696)
Parent: 7c7d2ec9

Changes: 1 changed file, with 5 additions and 0 deletions (+5 −0)
docs/source/model_doc/gptj.rst (+5 −0)
...

@@ -34,6 +34,11 @@ Tips:
>>> model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16)
- Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer. These extra
  tokens are added for the sake of efficiency on TPUs. To avoid a mismatch between the embedding matrix size and the
  vocab size, the tokenizer for [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B) contains 143 extra tokens
  ``<|extratoken_1|>... <|extratoken_143|>``, so the ``vocab_size`` of the tokenizer also becomes 50400.
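A minimal sketch (not part of the commit) of the size bookkeeping the note describes: the GPT-2 BPE vocabulary has 50257 real tokens, GPT-J's embedding matrix is padded to 50400 rows for TPU efficiency, and the gap is filled with 143 placeholder tokens. The constant names below are illustrative, not identifiers from the library.

```python
GPT2_VOCAB_SIZE = 50257      # tokens actually produced by the GPT-2 tokenizer
GPTJ_EMBEDDING_ROWS = 50400  # rows in GPT-J's (TPU-padded) embedding matrix

# The GPT-J tokenizer fills the gap with placeholder tokens
# <|extratoken_1|> ... <|extratoken_143|>, so its vocab_size matches the matrix.
n_extra = GPTJ_EMBEDDING_ROWS - GPT2_VOCAB_SIZE
extra_tokens = [f"<|extratoken_{i}|>" for i in range(1, n_extra + 1)]

print(n_extra)                    # 143
print(extra_tokens[0])            # <|extratoken_1|>
print(extra_tokens[-1])           # <|extratoken_143|>
print(GPT2_VOCAB_SIZE + n_extra)  # 50400
```

Because the placeholders occupy real embedding rows, a tokenizer reporting ``vocab_size == 50400`` lines up exactly with the model's embedding matrix, even though only the first 50257 ids ever appear in ordinary text.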
Generation
_______________________________________________________________________________________________________________________

...