chenpangpang/transformers · Commit f1e2e423 (unverified)
Authored by Sylvain Gugger, committed by GitHub, Jul 06, 2020
Parent: 5787e4c1

    Fix fast tokenizers too (#5562)

Showing 2 changed files with 37 additions and 19 deletions:

    src/transformers/tokenization_gpt2.py      +19  -10
    src/transformers/tokenization_roberta.py   +18  -9
src/transformers/tokenization_gpt2.py

@@ -297,21 +297,30 @@ class GPT2Tokenizer(PreTrainedTokenizer):
 class GPT2TokenizerFast(PreTrainedTokenizerFast):
     """
-    Constructs a "Fast" GPT-2 BPE tokenizer (backed by HuggingFace's `tokenizers` library).
-
-    Peculiarities:
-
-    - Byte-level Byte-Pair-Encoding
-    - Requires a space to start the input string => the encoding methods should be called with the
-      ``add_prefix_space`` flag set to ``True``.
-      Otherwise, this tokenizer ``encode`` and ``decode`` method will not conserve
-      the absence of a space at the beginning of a string:
+    Constructs a "Fast" GPT-2 BPE tokenizer (backed by HuggingFace's `tokenizers` library), using byte-level
+    Byte-Pair-Encoding.
+
+    This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
+    be encoded differently whether it is at the beginning of the sentence (without space) or not:

     ::

-        tokenizer.decode(tokenizer.encode("Hello")) = " Hello"
+        >>> from transformers import GPT2TokenizerFast
+        >>> tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
+        >>> tokenizer("Hello world")['input_ids']
+        [15496, 995]
+        >>> tokenizer(" Hello world")['input_ids']
+        [18435, 995]
+
+    You can get around that behavior by passing ``add_prefix_space=True`` when instantiating this tokenizer or when you
+    call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
+
+    .. note::
+
+        When used with ``is_pretokenized=True``, this tokenizer needs to be instantiated with
+        ``add_prefix_space=True``.

-    This tokenizer inherits from :class:`~transformers.PreTrainedTokenizerFast` which contains most of the methods. Users
+    This tokenizer inherits from :class:`~transformers.PreTrainedTokenizer` which contains most of the methods. Users
     should refer to the superclass for more information regarding methods.

     Args:
     ...
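The doctest added above can be checked outside the docstring. The following is a minimal sketch, assuming a transformers install from this era (around v3.0.x) and network access to fetch the pretrained "gpt2" vocabulary; the call-time ``add_prefix_space=True`` usage follows the new docstring's claim that the flag can be passed either at instantiation or per call:

    from transformers import GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

    # Without a leading space, "Hello" encodes as a sentence-initial token.
    print(tokenizer("Hello world")["input_ids"])    # [15496, 995]

    # With a leading space, " Hello" maps to a different BPE token.
    print(tokenizer(" Hello world")["input_ids"])   # [18435, 995]

    # Per the docstring, the flag can also be passed at call time; this is
    # expected to match the leading-space encoding above.
    print(tokenizer("Hello world", add_prefix_space=True)["input_ids"])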
src/transformers/tokenization_roberta.py

@@ -260,19 +260,28 @@ class RobertaTokenizer(GPT2Tokenizer):
 class RobertaTokenizerFast(GPT2TokenizerFast):
     """
-    Constructs a "Fast" RoBERTa BPE tokenizer (backed by HuggingFace's `tokenizers` library).
-
-    Peculiarities:
-
-    - Byte-level Byte-Pair-Encoding
-    - Requires a space to start the input string => the encoding methods should be called with the
-      ``add_prefix_space`` flag set to ``True``.
-      Otherwise, this tokenizer ``encode`` and ``decode`` method will not conserve
-      the absence of a space at the beginning of a string:
+    Constructs a "Fast" RoBERTa BPE tokenizer (backed by HuggingFace's `tokenizers` library), derived from the GPT-2
+    tokenizer, using byte-level Byte-Pair-Encoding.
+
+    This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
+    be encoded differently whether it is at the beginning of the sentence (without space) or not:

     ::

-        tokenizer.decode(tokenizer.encode("Hello")) = " Hello"
+        >>> from transformers import RobertaTokenizerFast
+        >>> tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
+        >>> tokenizer("Hello world")['input_ids']
+        [0, 31414, 232, 328, 2]
+        >>> tokenizer(" Hello world")['input_ids']
+        [0, 20920, 232, 2]
+
+    You can get around that behavior by passing ``add_prefix_space=True`` when instantiating this tokenizer or when you
+    call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
+
+    .. note::
+
+        When used with ``is_pretokenized=True``, this tokenizer needs to be instantiated with
+        ``add_prefix_space=True``.

     This tokenizer inherits from :class:`~transformers.PreTrainedTokenizerFast` which contains most of the methods. Users
     should refer to the superclass for more information regarding methods.
     ...
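The ``.. note::`` block is the same in both files. A short sketch of what it requires, again assuming a v3.0.x-era install (``is_pretokenized`` was renamed ``is_split_into_words`` in later releases, and the expected output is inferred from the docstring's " Hello world" example):

    from transformers import RobertaTokenizerFast

    # Pre-tokenized input requires instantiating with add_prefix_space=True,
    # per the note above.
    tokenizer = RobertaTokenizerFast.from_pretrained(
        "roberta-base", add_prefix_space=True
    )

    # A flat list of strings is treated as one pre-tokenized sequence; the
    # tokenizer prepends the space and applies byte-level BPE to each word.
    encoded = tokenizer(["Hello", "world"], is_pretokenized=True)
    print(encoded["input_ids"])  # expected [0, 20920, 232, 2], as for " Hello world"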