Unverified Commit f3065abd authored by Sylvain Gugger, committed by GitHub

Doc tokenizer (#6110)



* Start doc tokenizers

* Tokenizer documentation

* Start doc tokenizers

* Tokenizer documentation

* Formatting after rebase

* Formatting after merge

* Update docs/source/main_classes/tokenizer.rst
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

* Address comment

* Update src/transformers/tokenization_utils_base.py
Co-authored-by: Thomas Wolf <thomwolf@users.noreply.github.com>

* Address Thom's comments
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Thomas Wolf <thomwolf@users.noreply.github.com>
parent e642c789
@@ -206,3 +206,4 @@ conversion utilities for the following models:
    model_doc/mobilebert
    model_doc/dpr
    internal/modeling_utils
internal/tokenization_utils
Utilities for Tokenizers
------------------------
This page lists all the utility functions used by the tokenizers, mainly the class
:class:`~transformers.tokenization_utils_base.PreTrainedTokenizerBase` that implements the common methods between
:class:`~transformers.PreTrainedTokenizer` and :class:`~transformers.PreTrainedTokenizerFast` and the mixin
:class:`~transformers.tokenization_utils_base.SpecialTokensMixin`.
Most of those are only useful if you are studying the code of the tokenizers in the library.
``PreTrainedTokenizerBase``
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.tokenization_utils_base.PreTrainedTokenizerBase
:special-members: __call__
:members:
``SpecialTokensMixin``
~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.tokenization_utils_base.SpecialTokensMixin
:members:
Enums and namedtuples
~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.tokenization_utils_base.ExplicitEnum
.. autoclass:: transformers.tokenization_utils_base.PaddingStrategy
.. autoclass:: transformers.tokenization_utils_base.TensorType
.. autoclass:: transformers.tokenization_utils_base.TruncationStrategy
.. autoclass:: transformers.tokenization_utils_base.CharSpan
.. autoclass:: transformers.tokenization_utils_base.TokenSpan
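The enums listed above are plain string-valued members and the spans are regular namedtuples, so they can be used
directly. The snippet below is only an illustrative sketch of that behavior, not additional documented API::

    from transformers.tokenization_utils_base import (
        CharSpan,
        PaddingStrategy,
        TensorType,
        TokenSpan,
        TruncationStrategy,
    )

    # The explicit enums simply wrap the string values accepted by the tokenizer's ``__call__``.
    assert PaddingStrategy.MAX_LENGTH.value == "max_length"
    assert TruncationStrategy.LONGEST_FIRST.value == "longest_first"
    assert TensorType.PYTORCH.value == "pt"

    # CharSpan and TokenSpan are namedtuples with ``start``/``end`` fields.
    char_span = CharSpan(start=0, end=5)
    token_span = TokenSpan(start=1, end=2)
    print(char_span.start, char_span.end, token_span.start, token_span.end)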
Tokenizer
----------------------------------------------------
A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most
of the tokenizers are available in two flavors: a full python implementation and a "Fast" implementation based on the
Rust library `tokenizers <https://github.com/huggingface/tokenizers>`__. The "Fast" implementations allow:

1. a significant speed-up, in particular when doing batched tokenization, and
2. additional methods to map between the original string (characters and words) and the token space (e.g., getting the
   index of the token comprising a given character or the span of characters corresponding to a given token).

Currently no "Fast" implementation is available for the SentencePiece-based tokenizers (for T5, ALBERT, CamemBERT,
XLM-RoBERTa and XLNet models).
The base classes :class:`~transformers.PreTrainedTokenizer` and :class:`~transformers.PreTrainedTokenizerFast`
implement the common methods for encoding string inputs into model inputs (see below) and for instantiating/saving
python and "Fast" tokenizers, either from a local file or directory or from a pretrained tokenizer provided by the
library (downloaded from HuggingFace's AWS S3 repository). They both rely on
:class:`~transformers.tokenization_utils_base.PreTrainedTokenizerBase`, which contains the common methods, and on
:class:`~transformers.tokenization_utils_base.SpecialTokensMixin`.
:class:`~transformers.PreTrainedTokenizer` and :class:`~transformers.PreTrainedTokenizerFast` thus implement the main
methods for using all the tokenizers:
- Tokenizing (splitting strings in sub-word token strings), converting tokens strings to ids and back, and
encoding/decoding (i.e., tokenizing and converting to integers).
- Adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece...).
- Managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the
tokenizer for easy access and making sure they are not split during tokenization.
:class:`~transformers.BatchEncoding` holds the output of the tokenizer's encoding methods (``__call__``,
``encode_plus`` and ``batch_encode_plus``) and is derived from a Python dictionary. When the tokenizer is a pure python
tokenizer, this class behaves just like a standard python dictionary and holds the various model inputs computed by these
methods (``input_ids``, ``attention_mask``...). When the tokenizer is a "Fast" tokenizer (i.e., backed by HuggingFace
`tokenizers library <https://github.com/huggingface/tokenizers>`__), this class provides in addition several advanced
alignment methods which can be used to map between the original string (characters and words) and the token space (e.g.,
getting the index of the token comprising a given character or the span of characters corresponding to a given token).
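The short sketch below illustrates this difference; ``'bert-base-cased'`` is used purely as an example checkpoint and
the snippet is an illustration rather than a reference implementation::

    from transformers import BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
    encoding = tokenizer("Hello world!")

    # Standard dictionary access to the model inputs.
    print(encoding["input_ids"])

    # Alignment helpers, only available when the tokenizer is a "Fast" tokenizer.
    print(encoding.tokens())           # the tokens as strings, including special tokens
    print(encoding.token_to_chars(1))  # CharSpan of a token in the original string
    print(encoding.char_to_token(0))   # index of the token covering the first character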
``PreTrainedTokenizer``
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -20,6 +43,7 @@ The base classes ``PreTrainedTokenizer`` and ``PreTrainedTokenizerFast`` impleme
:special-members: __call__
:members:

``PreTrainedTokenizerFast``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -27,14 +51,9 @@ The base classes ``PreTrainedTokenizer`` and ``PreTrainedTokenizerFast`` impleme
:special-members: __call__
:members:

``BatchEncoding``
~~~~~~~~~~~~~~~~~~~~~~~~
.. autoclass:: transformers.BatchEncoding
:members:
@@ -646,7 +646,7 @@ class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin):
            resume_download (:obj:`bool`, `optional`, defaults to :obj:`False`):
                Whether or not to delete incompletely received files. Will attempt to resume the download if such a
                file exists.
            proxies (:obj:`Dict[str, str]`, `optional`):
                A dictionary of proxy servers to use by protocol or endpoint, e.g.,
                :obj:`{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each
                request.
@@ -20,12 +20,13 @@ import itertools
import logging
import re
import unicodedata
from typing import Any, Dict, List, Optional, Tuple, Union

from .file_utils import add_end_docstrings
from .tokenization_utils_base import (
    ENCODE_KWARGS_DOCSTRING,
    ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING,
    INIT_TOKENIZER_DOCSTRING,
    AddedToken,
    BatchEncoding,
    EncodedInput,
@@ -45,7 +46,7 @@ logger = logging.getLogger(__name__)
def _is_whitespace(char):
    """Checks whether `char` is a whitespace character."""
    # \t, \n, and \r are technically control characters but we treat them
    # as whitespace since they are generally considered as such.
    if char == " " or char == "\t" or char == "\n" or char == "\r":
@@ -57,7 +58,7 @@ def _is_whitespace(char):
def _is_control(char):
    """Checks whether `char` is a control character."""
    # These are technically control characters but we count them as whitespace
    # characters.
    if char == "\t" or char == "\n" or char == "\r":
@@ -69,7 +70,7 @@ def _is_control(char):
def _is_punctuation(char):
    """Checks whether `char` is a punctuation character."""
    cp = ord(char)
    # We treat all non-letter/number ASCII as punctuation.
    # Characters such as "^", "$", and "`" are not in the Unicode
@@ -95,8 +96,12 @@ def _is_start_of_word(text):
    return bool(_is_control(first_char) | _is_punctuation(first_char) | _is_whitespace(first_char))
@add_end_docstrings(INIT_TOKENIZER_DOCSTRING, """ .. automethod:: __call__""")
class PreTrainedTokenizer(PreTrainedTokenizerBase):
    """
    Base class for all slow tokenizers.

    Inherits from :class:`~transformers.tokenization_utils_base.PreTrainedTokenizerBase`.

    Handles all the shared methods for tokenization and special tokens, as well as methods for
    downloading/caching/loading pretrained tokenizers and for adding tokens to the vocabulary.
@@ -104,53 +109,6 @@ class PreTrainedTokenizer(PreTrainedTokenizerBase):
    This class also contains the added tokens in a unified way on top of all tokenizers so we don't
    have to handle the specific vocabulary augmentation methods of the various underlying
    dictionary structures (BPE, sentencepiece...).
Class attributes (overridden by derived classes):
- ``vocab_files_names``: a python ``dict`` with, as keys, the ``__init__`` keyword name of each vocabulary file
required by the model, and as associated values, the filename for saving the associated file (string).
- ``pretrained_vocab_files_map``: a python ``dict of dict`` the high-level keys
being the ``__init__`` keyword name of each vocabulary file required by the model, the low-level being the
`short-cut-names` (string) of the pretrained models with, as associated values, the `url` (string) to the
associated pretrained vocabulary file.
- ``max_model_input_sizes``: a python ``dict`` with, as keys, the `short-cut-names` (string) of the pretrained
models, and as associated values, the maximum length of the sequence inputs of this model, or None if the
model has no maximum input size.
- ``pretrained_init_configuration``: a python ``dict`` with, as keys, the `short-cut-names` (string) of the
pretrained models, and as associated values, a dictionary of specific arguments to pass to the
``__init__`` method of the tokenizer class for this pretrained model when loading the tokenizer with the
``from_pretrained()`` method.
Args:
- ``model_max_length``: (`Optional`) int: the maximum length in number of tokens for the inputs to the transformer model.
When the tokenizer is loaded with `from_pretrained`, this will be set to the value stored for the associated
model in ``max_model_input_sizes`` (see above). If no value is provided and no associated max_length can be
found in ``max_model_input_sizes``, this will default to VERY_LARGE_INTEGER (`int(1e30)`).
- ``padding_side``: (`Optional`) string: the side on which the model should have padding applied.
Should be selected between ['right', 'left']
- ``model_input_names``: (`Optional`) List[string]: the list of the forward pass inputs accepted by the
model ("token_type_ids", "attention_mask"...).
- ``bos_token``: (`Optional`) string: a beginning of sentence token.
Will be associated to ``self.bos_token`` and ``self.bos_token_id``
- ``eos_token``: (`Optional`) string: an end of sentence token.
Will be associated to ``self.eos_token`` and ``self.eos_token_id``
- ``unk_token``: (`Optional`) string: an unknown token.
Will be associated to ``self.unk_token`` and ``self.unk_token_id``
- ``sep_token``: (`Optional`) string: a separation token (e.g. to separate context and query in an input sequence).
Will be associated to ``self.sep_token`` and ``self.sep_token_id``
- ``pad_token``: (`Optional`) string: a padding token.
Will be associated to ``self.pad_token`` and ``self.pad_token_id``
- ``cls_token``: (`Optional`) string: a classification token (e.g. to extract a summary of an input sequence
leveraging self-attention along the full depth of the model).
Will be associated to ``self.cls_token`` and ``self.cls_token_id``
- ``mask_token``: (`Optional`) string: a masking token (e.g. when training a model with masked-language
modeling). Will be associated to ``self.mask_token`` and ``self.mask_token_id``
- ``additional_special_tokens``: (`Optional`) list: a list of additional special tokens.
Adding all special tokens here ensures they won't be split by the tokenization process.
Will be associated to ``self.additional_special_tokens`` and ``self.additional_special_tokens_ids``
.. automethod:: __call__
""" """
    def __init__(self, **kwargs):
@@ -168,31 +126,52 @@ class PreTrainedTokenizer(PreTrainedTokenizerBase):
    @property
    def vocab_size(self) -> int:
        """
        :obj:`int`: Size of the base vocabulary (without the added tokens).
        """
        raise NotImplementedError

    def get_vocab(self) -> Dict[str, int]:
        """
        Returns the vocabulary as a dictionary of token to index.

        :obj:`tokenizer.get_vocab()[token]` is equivalent to :obj:`tokenizer.convert_tokens_to_ids(token)` when
        :obj:`token` is in the vocab.

        Returns:
            :obj:`Dict[str, int]`: The vocabulary.
        """
        raise NotImplementedError()

    def get_added_vocab(self) -> Dict[str, int]:
        """
        Returns the added tokens in the vocabulary as a dictionary of token to index.

        Returns:
            :obj:`Dict[str, int]`: The added tokens.
        """
        return self.added_tokens_encoder
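# Illustrative usage sketch, not part of the diff above; the checkpoint name is only an example.
# It shows how ``get_vocab`` and ``get_added_vocab`` relate to ``convert_tokens_to_ids`` and ``__len__``.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
vocab = tokenizer.get_vocab()
token = tokenizer.cls_token  # '[CLS]' is guaranteed to be in the vocabulary for this model
assert vocab[token] == tokenizer.convert_tokens_to_ids(token)

# Newly added tokens show up in ``get_added_vocab`` and in ``len(tokenizer)``.
tokenizer.add_tokens(["new_tok1"])
assert "new_tok1" in tokenizer.get_added_vocab()
assert len(tokenizer) == tokenizer.vocab_size + len(tokenizer.get_added_vocab())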
    def __len__(self):
        """
        Size of the full vocabulary with the added tokens.
        """
        return self.vocab_size + len(self.added_tokens_encoder)

    def _add_tokens(self, new_tokens: Union[List[str], List[AddedToken]], special_tokens: bool = False) -> int:
        """
        Add a list of new tokens to the tokenizer class. If the new tokens are not in the
        vocabulary, they are added to it with indices starting from the length of the current vocabulary.

        Args:
            new_tokens (:obj:`List[str]` or :obj:`List[tokenizers.AddedToken]`):
                Token(s) to add in vocabulary. A token is only added if it's not already in the vocabulary (tested by
                checking if the tokenizer assigns the index of the ``unk_token`` to them).
            special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
                Whether or not the tokens should be added as special tokens.

        Returns:
            :obj:`int`: The number of tokens actually added to the vocabulary.

        Examples::
@@ -202,7 +181,8 @@ class PreTrainedTokenizer(PreTrainedTokenizerBase):
            num_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2'])
            print('We have added', num_added_toks, 'tokens')
            # Notice: resize_token_embeddings expects to receive the full size of the new vocabulary, i.e., the length of the tokenizer.
            model.resize_token_embeddings(len(tokenizer))
        """
        new_tokens = [str(tok) for tok in new_tokens]
@@ -234,35 +214,41 @@ class PreTrainedTokenizer(PreTrainedTokenizerBase):
        return len(tokens_to_add)
    def num_special_tokens_to_add(self, pair: bool = False) -> int:
        """
        Returns the number of added tokens when encoding a sequence with special tokens.

        .. note::
            This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not
            put this inside your training loop.

        Args:
            pair (:obj:`bool`, `optional`, defaults to :obj:`False`):
                Whether the number of added tokens should be computed in the case of a sequence pair or a single
                sequence.

        Returns:
            :obj:`int`: Number of special tokens added to sequences.
        """
        token_ids_0 = []
        token_ids_1 = []
        return len(self.build_inputs_with_special_tokens(token_ids_0, token_ids_1 if pair else None))
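# Illustrative usage sketch, not part of the diff above; the checkpoint name is only an example.
# For a BERT-like tokenizer, a single sequence gets [CLS] and [SEP], a pair gets one extra [SEP].
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.num_special_tokens_to_add())           # 2 for a single sequence
print(tokenizer.num_special_tokens_to_add(pair=True))  # 3 for a pair of sequences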
    def tokenize(self, text: TextInput, **kwargs) -> List[str]:
        """
        Converts a string into a sequence of tokens, using the tokenizer.

        Splits into words for word-based vocabularies or sub-words for sub-word-based vocabularies
        (BPE/SentencePieces/WordPieces). Takes care of added tokens.

        Args:
            text (:obj:`str`):
                The sequence to be encoded.
            **kwargs (additional keyword arguments):
                Passed along to the model-specific ``prepare_for_tokenization`` preprocessing method.

        Returns:
            :obj:`List[str]`: The list of tokens.
        """
        # Simple mapping string => AddedToken for special tokens with specific tokenization behaviors
        all_special_tokens_extended = dict(
@@ -365,17 +351,25 @@ class PreTrainedTokenizer(PreTrainedTokenizerBase):
        return tokenized_text
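# Illustrative usage sketch, not part of the diff above; the checkpoint name is only an example.
# ``tokenize`` only returns token strings, while ``__call__`` builds the full model inputs.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokens = tokenizer.tokenize("Tokenizers are great!")
print(tokens)                                           # sub-word tokens, e.g. ['token', '##izer', '##s', ...]
print(tokenizer.convert_tokens_to_ids(tokens))          # the matching ids, without special tokens
print(tokenizer("Tokenizers are great!")["input_ids"])  # ids with [CLS]/[SEP] added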
    def _tokenize(self, text, **kwargs):
        """
        Converts a string into a sequence of tokens (string), using the tokenizer.
        Splits into words for word-based vocabularies or sub-words for sub-word-based vocabularies
        (BPE/SentencePieces/WordPieces).

        Does NOT take care of added tokens.
        """
        raise NotImplementedError

    def convert_tokens_to_ids(self, tokens: Union[str, List[str]]) -> Union[int, List[int]]:
        """
        Converts a token string (or a sequence of tokens) into a single integer id (or a sequence of ids), using the
        vocabulary.

        Args:
            tokens (:obj:`str` or :obj:`List[str]`): One or several token(s) to convert to token id(s).

        Returns:
            :obj:`int` or :obj:`List[int]`: The token id or list of token ids.
        """
        if tokens is None:
            return None
@@ -574,7 +568,8 @@ class PreTrainedTokenizer(PreTrainedTokenizerBase):
        return_length: bool = False,
        verbose: bool = True,
    ) -> BatchEncoding:
        """
        Prepares a sequence of input ids, or a pair of sequences of input ids, so that it can be used by the model.
        It adds special tokens, truncates sequences if overflowing while taking into account the special tokens, and
        manages a moving window (with user-defined stride) for overflowing tokens.
@@ -620,11 +615,25 @@ class PreTrainedTokenizer(PreTrainedTokenizerBase):
        return batch_outputs
    def prepare_for_tokenization(
        self, text: str, is_pretokenized: bool = False, **kwargs
    ) -> Tuple[str, Dict[str, Any]]:
        """
        Performs any necessary transformations before tokenization.

        This method should pop the arguments from kwargs and return the remaining :obj:`kwargs` as well.
        We test the :obj:`kwargs` at the end of the encoding process to be sure all the arguments have been used.

        Args:
            text (:obj:`str`):
                The text to prepare.
            is_pretokenized (:obj:`bool`, `optional`, defaults to :obj:`False`):
                Whether or not the text has been pretokenized.
            kwargs:
                Keyword arguments to use for the tokenization.

        Returns:
            :obj:`Tuple[str, Dict[str, Any]]`: The prepared text and the unused kwargs.
        """
        return (text, kwargs)
@@ -633,14 +642,15 @@ class PreTrainedTokenizer(PreTrainedTokenizerBase):
    ) -> List[int]:
        """
        Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
        special tokens using the tokenizer ``prepare_for_model`` or ``encode_plus`` methods.

        Args:
            token_ids_0 (:obj:`List[int]`):
                List of ids of the first sequence.
            token_ids_1 (:obj:`List[int]`, `optional`):
                List of ids of the second sequence.
            already_has_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
                Whether or not the token list is already formatted with special tokens for the model.

        Returns:
            A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
@@ -650,11 +660,18 @@ class PreTrainedTokenizer(PreTrainedTokenizerBase):
    def convert_ids_to_tokens(
        self, ids: Union[int, List[int]], skip_special_tokens: bool = False
    ) -> Union[str, List[str]]:
        """
        Converts a single index or a sequence of indices into a token or a sequence of tokens, using the vocabulary
        and added tokens.

        Args:
            ids (:obj:`int` or :obj:`List[int]`):
                The token id (or token ids) to convert to tokens.
            skip_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
                Whether or not to remove special tokens in the decoding.

        Returns:
            :obj:`str` or :obj:`List[str]`: The decoded token(s).
        """
        if isinstance(ids, int):
            if ids in self.added_tokens_decoder:
@@ -676,15 +693,39 @@ class PreTrainedTokenizer(PreTrainedTokenizerBase):
        raise NotImplementedError
    def convert_tokens_to_string(self, tokens: List[str]) -> str:
        """
        Converts a sequence of tokens into a single string.

        The simplest way to do it is ``" ".join(tokens)`` but we often want to remove
        sub-word tokenization artifacts at the same time.

        Args:
            tokens (:obj:`List[str]`): The tokens to join into a string.

        Returns:
            :obj:`str`: The joined tokens.
        """
        return " ".join(tokens)
    def decode(
        self, token_ids: List[int], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True
    ) -> str:
        """
        Converts a sequence of ids into a string, using the tokenizer and vocabulary,
        with options to remove special tokens and clean up tokenization spaces.

        Similar to doing ``self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))``.

        Args:
            token_ids (:obj:`List[int]`):
                List of tokenized input ids. Can be obtained using the ``__call__`` method.
            skip_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
                Whether or not to remove special tokens in the decoding.
            clean_up_tokenization_spaces (:obj:`bool`, `optional`, defaults to :obj:`True`):
                Whether or not to clean up the tokenization spaces.

        Returns:
            :obj:`str`: The decoded sentence.
        """
        filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)

        # To avoid mixing byte-level and unicode for byte-level BPE
@@ -713,11 +754,18 @@ class PreTrainedTokenizer(PreTrainedTokenizerBase):
        return text

    def save_vocabulary(self, save_directory) -> Tuple[str]:
        """
        Save the tokenizer vocabulary to a directory. This method does *NOT* save added tokens
        and special token mappings.

        .. warning::
            Please use :meth:`~transformers.PreTrainedTokenizer.save_pretrained` to save the full tokenizer state if
            you want to reload it using the :meth:`~transformers.PreTrainedTokenizer.from_pretrained` class method.

        Args:
            save_directory (:obj:`str`): The path to a directory where the tokenizer will be saved.

        Returns:
            A tuple of :obj:`str`: The files saved.
        """
        raise NotImplementedError
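# Illustrative sketch, not part of the diff above; the checkpoint name and path are only examples.
# ``save_pretrained`` stores the full tokenizer state (vocabulary, added tokens, special tokens,
# configuration), which is what ``from_pretrained`` expects when reloading.
import os

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_tokens(["new_tok1"])

os.makedirs("./my_tokenizer", exist_ok=True)
tokenizer.save_pretrained("./my_tokenizer")
reloaded = BertTokenizer.from_pretrained("./my_tokenizer")
assert "new_tok1" in reloaded.get_added_vocab()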
@@ -72,7 +72,8 @@ FULL_TOKENIZER_FILE = "tokenizer.json"
class ExplicitEnum(Enum):
    """
    Enum with more explicit error message for missing values.
    """

    @classmethod
@@ -84,6 +85,11 @@ class ExplicitEnum(Enum):
class TruncationStrategy(ExplicitEnum):
    """
    Possible values for the ``truncation`` argument in :meth:`PreTrainedTokenizerBase.__call__`.
    Useful for tab-completion in an IDE.
    """

    ONLY_FIRST = "only_first"
    ONLY_SECOND = "only_second"
    LONGEST_FIRST = "longest_first"
@@ -91,23 +97,34 @@ class TruncationStrategy(ExplicitEnum):
class PaddingStrategy(ExplicitEnum):
    """
    Possible values for the ``padding`` argument in :meth:`PreTrainedTokenizerBase.__call__`.
    Useful for tab-completion in an IDE.
    """

    LONGEST = "longest"
    MAX_LENGTH = "max_length"
    DO_NOT_PAD = "do_not_pad"


class TensorType(ExplicitEnum):
    """
    Possible values for the ``return_tensors`` argument in :meth:`PreTrainedTokenizerBase.__call__`.
    Useful for tab-completion in an IDE.
    """

    PYTORCH = "pt"
    TENSORFLOW = "tf"
    NUMPY = "np"
class CharSpan(NamedTuple):
    """
    Character span in the original string.

    Args:
        start (:obj:`int`): Index of the first character in the original string.
        end (:obj:`int`): Index of the character following the last character in the original string.
    """

    start: int
@@ -115,11 +132,12 @@ class CharSpan(NamedTuple):
class TokenSpan(NamedTuple):
    """
    Token span in an encoded string (list of tokens).

    Args:
        start (:obj:`int`): Index of the first token in the span.
        end (:obj:`int`): Index of the token following the last token in the span.
    """

    start: int
@@ -127,19 +145,27 @@ class TokenSpan(NamedTuple):
class BatchEncoding(UserDict):
    """
    Holds the output of the :meth:`~transformers.tokenization_utils_base.PreTrainedTokenizerBase.encode_plus`
    and :meth:`~transformers.tokenization_utils_base.PreTrainedTokenizerBase.batch_encode` methods (tokens,
    attention_masks, etc).

    This class is derived from a python dictionary and can be used as a dictionary. In addition, this class exposes
    utility methods to map from word/character space to token space.

    Args:
        data (:obj:`dict`):
            Dictionary of lists/arrays/tensors returned by the encode/batch_encode methods ('input_ids',
            'attention_mask', etc.).
        encoding (:obj:`tokenizers.Encoding` or :obj:`Sequence[tokenizers.Encoding]`, `optional`):
            If the tokenizer is a fast tokenizer which outputs additional information like mapping from word/character
            space to token space, the :obj:`tokenizers.Encoding` instance or list of instances (for batches) holds
            this information.
        tensor_type (:obj:`Union[None, str, TensorType]`, `optional`):
            You can give a tensor_type here to convert the lists of integers to PyTorch/TensorFlow/Numpy tensors at
            initialization.
        prepend_batch_axis (:obj:`bool`, `optional`, defaults to :obj:`False`):
            Whether or not to add a batch axis when converting to tensors (see :obj:`tensor_type` above).
    """

    def __init__(
@@ -159,16 +185,19 @@ class BatchEncoding(UserDict):
            self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
    @property
    def is_fast(self) -> bool:
        """
        :obj:`bool`: Indicates whether this :class:`~transformers.BatchEncoding` was generated from the result of a
        :class:`~transformers.PreTrainedTokenizerFast` or not.
        """
        return self._encodings is not None

    def __getitem__(self, item: Union[int, str]) -> Union[Any, EncodingFast]:
        """
        If the key is a string, returns the value of the dict associated to :obj:`key` ('input_ids',
        'attention_mask', etc.).

        If the key is an integer, gets the :obj:`tokenizers.Encoding` for batch item with index :obj:`key`.
        """
        if isinstance(item, str):
            return self.data[item]
@@ -212,20 +241,40 @@ class BatchEncoding(UserDict):
    @property
    def encodings(self) -> Optional[List[EncodingFast]]:
        """
        :obj:`Optional[List[tokenizers.Encoding]]`: The list of all encodings from the tokenization process.
        Returns :obj:`None` if the input was tokenized through Python (i.e., not a fast) tokenizer.
        """
        return self._encodings

    def tokens(self, batch_index: int = 0) -> List[str]:
        """
        Return the list of tokens (sub-parts of the input strings after word/subword splitting and before conversion
        to integer indices) at a given batch index (only works for the output of a fast tokenizer).

        Args:
            batch_index (:obj:`int`, `optional`, defaults to 0): The index to access in the batch.

        Returns:
            :obj:`List[str]`: The list of tokens at that index.
        """
        if not self._encodings:
            raise ValueError("tokens() is not available when using Python-based tokenizers")
        return self._encodings[batch_index].tokens

    def words(self, batch_index: int = 0) -> List[Optional[int]]:
        """
        Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.

        Args:
            batch_index (:obj:`int`, `optional`, defaults to 0): The index to access in the batch.

        Returns:
            :obj:`List[Optional[int]]`: A list indicating the word corresponding to each token. Special tokens added
            by the tokenizer are mapped to :obj:`None` and other tokens are mapped to the index of their corresponding
            word (several tokens will be mapped to the same word index if they are parts of that word).
        """
        if not self._encodings:
            raise ValueError("words() is not available when using Python-based tokenizers")
        return self._encodings[batch_index].words
    def token_to_word(self, batch_or_token_index: int, token_index: Optional[int] = None) -> int:
@@ -239,21 +288,19 @@ class BatchEncoding(UserDict):
        - ``self.token_to_word(batch_index, token_index)`` if batch size is greater than 1

        This method is particularly suited when the input sequences are provided as
        pre-tokenized sequences (i.e., words are defined by the user). In this case it allows
        to easily associate encoded tokens with provided tokenized words.

        Args:
            batch_or_token_index (:obj:`int`):
                Index of the sequence in the batch. If the batch only comprises one sequence,
                this can be the index of the token in the sequence.
            token_index (:obj:`int`, `optional`):
                If a batch index is provided in `batch_or_token_index`, this can be the index
                of the token in the sequence.

        Returns:
            :obj:`int`: Index of the word in the input sequence.
        """
        if not self._encodings:
@@ -273,10 +320,10 @@ class BatchEncoding(UserDict):
""" """
Get the encoded token span corresponding to a word in the sequence of the batch. Get the encoded token span corresponding to a word in the sequence of the batch.
Token spans are returned as a TokenSpan NamedTuple with: Token spans are returned as a :class:`~transformers.tokenization_utils_base.TokenSpan` with:
- start: index of the first token - **start** -- Index of the first token.
- end: index of the token following the last token - **end** -- Index of the token following the last token.
Can be called as: Can be called as:
...@@ -290,19 +337,14 @@ class BatchEncoding(UserDict): ...@@ -290,19 +337,14 @@ class BatchEncoding(UserDict):
Args: Args:
batch_or_word_index (:obj:`int`): batch_or_word_index (:obj:`int`):
Index of the sequence in the batch. If the batch only comprises one sequence, Index of the sequence in the batch. If the batch only comprises one sequence,
this can be the index of the word in the sequence this can be the index of the word in the sequence.
word_index (:obj:`int`, `optional`): word_index (:obj:`int`, `optional`):
If a batch index is provided in `batch_or_token_index`, this can be the index If a batch index is provided in `batch_or_token_index`, this can be the index
of the word in the sequence. of the word in the sequence.
Returns: Returns:
:obj:`TokenSpan`: :class:`~transformers.tokenization_utils_base.TokenSpan`
Span of tokens in the encoded sequence. Span of tokens in the encoded sequence.
:obj:`TokenSpan` are NamedTuple with:
- start: index of the first token
- end: index of the token following the last token
""" """
if not self._encodings: if not self._encodings:
...@@ -322,10 +364,11 @@ class BatchEncoding(UserDict): ...@@ -322,10 +364,11 @@ class BatchEncoding(UserDict):
""" """
Get the character span corresponding to an encoded token in a sequence of the batch. Get the character span corresponding to an encoded token in a sequence of the batch.
Character spans are returned as a CharSpan NamedTuple with: Character spans are returned as a :class:`~transformers.tokenization_utils_base.CharSpan` with:
- start: index of the first character in the original string associated to the token - **start** -- Index of the first character in the original string associated to the token.
- end: index of the character following the last character in the original string associated to the token - **end** -- Index of the character following the last character in the original string associated to the
token.
Can be called as: Can be called as:
...@@ -335,19 +378,14 @@ class BatchEncoding(UserDict): ...@@ -335,19 +378,14 @@ class BatchEncoding(UserDict):
Args: Args:
batch_or_token_index (:obj:`int`): batch_or_token_index (:obj:`int`):
Index of the sequence in the batch. If the batch only comprise one sequence, Index of the sequence in the batch. If the batch only comprise one sequence,
this can be the index of the token in the sequence this can be the index of the token in the sequence.
token_index (:obj:`int`, `optional`): token_index (:obj:`int`, `optional`):
If a batch index is provided in `batch_or_token_index`, this can be the index If a batch index is provided in `batch_or_token_index`, this can be the index
of the token or tokens in the sequence. of the token or tokens in the sequence.
Returns: Returns:
:obj:`CharSpan`: :class:`~transformers.tokenization_utils_base.CharSpan`:
Span of characters in the original string. Span of characters in the original string.
:obj:`CharSpan` are NamedTuple with:
- start: index of the first character in the original string
- end: index of the character following the last character in the original string
""" """
if not self._encodings: if not self._encodings:
...@@ -473,7 +511,19 @@ class BatchEncoding(UserDict): ...@@ -473,7 +511,19 @@ class BatchEncoding(UserDict):
char_index = batch_or_char_index char_index = batch_or_char_index
return self._encodings[batch_index].char_to_word(char_index) return self._encodings[batch_index].char_to_word(char_index)
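# Illustrative sketch, not part of the diff above; requires a fast tokenizer, and the checkpoint
# name is only an example.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
encoding = tokenizer("Hello world")

token_span = encoding.word_to_tokens(0)                 # TokenSpan covering the first word
char_span = encoding.token_to_chars(token_span.start)   # CharSpan of that token in the original string
print("Hello world"[char_span.start:char_span.end])     # the text covered by the first word's first token
print(encoding.char_to_word(0))                         # word index of the first character -> 0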
    def convert_to_tensors(
        self, tensor_type: Optional[Union[str, TensorType]] = None, prepend_batch_axis: bool = False
    ):
        """
        Convert the inner content to tensors.

        Args:
            tensor_type (:obj:`str` or :class:`~transformers.tokenization_utils_base.TensorType`, `optional`):
                The type of tensors to use. If :obj:`str`, should be one of the values of the enum
                :class:`~transformers.tokenization_utils_base.TensorType`. If :obj:`None`, no modification is done.
            prepend_batch_axis (:obj:`bool`, `optional`, defaults to :obj:`False`):
                Whether or not to add the batch dimension during the conversion.
        """
        if tensor_type is None:
            return self
@@ -524,8 +574,17 @@ class BatchEncoding(UserDict):
        return self
    @torch_required
    def to(self, device: str) -> "BatchEncoding":
        """
        Send all values to device by calling :obj:`v.to(device)` (PyTorch only).

        Args:
            device (:obj:`str` or :obj:`torch.device`): The device to put the tensors on.

        Returns:
            :class:`~transformers.BatchEncoding`: The same instance of :class:`~transformers.BatchEncoding` after
            modification.
        """
        self.data = {k: v.to(device) for k, v in self.data.items()}
        return self
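# Illustrative sketch, not part of the diff above; the checkpoint name is only an example and
# PyTorch (plus an available GPU) is assumed for the ``.to`` call.
import torch

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["first sentence", "a second, longer sentence"], padding=True, return_tensors="pt")
print(batch["input_ids"].shape)  # (2, longest_sequence_length_in_batch)

if torch.cuda.is_available():
    batch = batch.to("cuda")  # moves every tensor held by the BatchEncoding to the GPU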
@@ -568,10 +627,31 @@ class BatchEncoding(UserDict):
class SpecialTokensMixin:
    """
    A mixin derived by :class:`~transformers.PreTrainedTokenizer` and :class:`~transformers.PreTrainedTokenizerFast`
    to handle specific behaviors related to special tokens. In particular, this class holds the attributes which can
    be used to directly access these special tokens in a model-independent manner and allows setting and updating the
    special tokens.

    Args:
        bos_token (:obj:`str` or :obj:`tokenizers.AddedToken`, `optional`):
            A special token representing the beginning of a sentence.
        eos_token (:obj:`str` or :obj:`tokenizers.AddedToken`, `optional`):
            A special token representing the end of a sentence.
        unk_token (:obj:`str` or :obj:`tokenizers.AddedToken`, `optional`):
            A special token representing an out-of-vocabulary token.
        sep_token (:obj:`str` or :obj:`tokenizers.AddedToken`, `optional`):
            A special token separating two different sentences in the same input (used by BERT for instance).
        pad_token (:obj:`str` or :obj:`tokenizers.AddedToken`, `optional`):
            A special token used to make arrays of tokens the same size for batching purpose. Will then be ignored by
            attention mechanisms or loss computation.
        cls_token (:obj:`str` or :obj:`tokenizers.AddedToken`, `optional`):
            A special token representing the class of the input (used by BERT for instance).
        mask_token (:obj:`str` or :obj:`tokenizers.AddedToken`, `optional`):
            A special token representing a masked token (used by masked-language modeling pretraining objectives, like
            BERT).
        additional_special_tokens (tuple or list of :obj:`str` or :obj:`tokenizers.AddedToken`, `optional`):
            A tuple or a list of additional special tokens.
    """
    SPECIAL_TOKENS_ATTRIBUTES = [
@@ -613,36 +693,44 @@ class SpecialTokensMixin:
    )

    def sanitize_special_tokens(self) -> int:
        """
        Make sure that all the special tokens attributes of the tokenizer (:obj:`tokenizer.mask_token`,
        :obj:`tokenizer.cls_token`, etc.) are in the vocabulary.

        Add the missing ones to the vocabulary if needed.

        Return:
            :obj:`int`: The number of tokens added in the vocabulary during the operation.
        """
        return self.add_tokens(self.all_special_tokens_extended, special_tokens=True)
    def add_special_tokens(self, special_tokens_dict: Dict[str, Union[str, AddedToken]]) -> int:
        """
        Add a dictionary of special tokens (eos, pad, cls, etc.) to the encoder and link them to class attributes. If
        special tokens are NOT in the vocabulary, they are added to it (indexed starting from the last index of the
        current vocabulary).

        Using :obj:`add_special_tokens` will ensure your special tokens can be used in several ways:

        - Special tokens are carefully handled by the tokenizer (they are never split).
        - You can easily refer to special tokens using tokenizer class attributes like :obj:`tokenizer.cls_token`. This
          makes it easy to develop model-agnostic training and fine-tuning scripts.

        When possible, special tokens are already registered for provided pretrained models (for instance
        :class:`~transformers.BertTokenizer` :obj:`cls_token` is already registered to be :obj:`'[CLS]'` and XLM's one
        is also registered to be :obj:`'</s>'`).

        Args:
            special_tokens_dict (dictionary `str` to `str` or :obj:`tokenizers.AddedToken`):
                Keys should be in the list of predefined special attributes: [``bos_token``, ``eos_token``,
                ``unk_token``, ``sep_token``, ``pad_token``, ``cls_token``, ``mask_token``,
                ``additional_special_tokens``].

                Tokens are only added if they are not already in the vocabulary (tested by checking if the tokenizer
                assigns the index of the ``unk_token`` to them).

        Returns:
            :obj:`int`: Number of tokens added to the vocabulary.

        Examples::
@@ -654,7 +742,8 @@ class SpecialTokensMixin:
            num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
            print('We have added', num_added_toks, 'tokens')
            # Notice: resize_token_embeddings expects to receive the full size of the new vocabulary, i.e., the length of the tokenizer.
            model.resize_token_embeddings(len(tokenizer))

            assert tokenizer.cls_token == '<CLS>'
        """
@@ -682,24 +771,27 @@ class SpecialTokensMixin:
        return added_tokens
def add_tokens(
self, new_tokens: Union[str, AddedToken, List[Union[str, AddedToken]]], special_tokens: bool = False
) -> int:
"""
Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to
it with indices starting from the length of the current vocabulary.
Args:
new_tokens (:obj:`str`, :obj:`tokenizers.AddedToken` or a list of `str` or :obj:`tokenizers.AddedToken`):
Tokens are only added if they are not already in the vocabulary. :obj:`tokenizers.AddedToken` wraps a
string token to let you personalize its behavior: whether this token should only match against a single
word, whether this token should strip all potential whitespaces on the left side, whether this token
should strip all potential whitespaces on the right side, etc.
special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
Can be used to specify if the token is a special token. This mostly changes the normalization behavior
(special tokens like CLS or [MASK] are usually not lower-cased for instance).
See details for :obj:`tokenizers.AddedToken` in the HuggingFace tokenizers library.
Returns:
:obj:`int`: Number of tokens added to the vocabulary.
Examples::
@@ -709,7 +801,8 @@ class SpecialTokensMixin:
num_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2'])
print('We have added', num_added_toks, 'tokens')
# Notice: resize_token_embeddings expects to receive the full size of the new vocabulary, i.e., the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))
"""
if not new_tokens:
return 0
@@ -720,64 +813,84 @@ class SpecialTokensMixin:
return self._add_tokens(new_tokens, special_tokens=special_tokens)
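# A short usage sketch of ``add_tokens`` (assumes PyTorch and the 'bert-base-uncased' checkpoint are
# available; ``AddedToken`` comes from the ``tokenizers`` library and controls matching/stripping behavior).
from tokenizers import AddedToken
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

num_added_toks = tokenizer.add_tokens(['new_tok1', AddedToken('new_tok2', lstrip=True)])
print('We have added', num_added_toks, 'tokens')
model.resize_token_embeddings(len(tokenizer))  # the model must learn embeddings for the new tokens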
@property
def bos_token(self) -> str:
"""
:obj:`str`: Beginning of sentence token. Log an error if used while not having been set.
"""
if self._bos_token is None and self.verbose:
logger.error("Using bos_token, but it is not set yet.")
return None
return str(self._bos_token)
@property
def eos_token(self) -> str:
"""
:obj:`str`: End of sentence token. Log an error if used while not having been set.
"""
if self._eos_token is None and self.verbose:
logger.error("Using eos_token, but it is not set yet.")
return None
return str(self._eos_token)
@property
def unk_token(self) -> str:
"""
:obj:`str`: Unknown token. Log an error if used while not having been set.
"""
if self._unk_token is None and self.verbose:
logger.error("Using unk_token, but it is not set yet.")
return None
return str(self._unk_token)
@property
def sep_token(self) -> str:
"""
:obj:`str`: Separation token, to separate context and query in an input sequence.
Log an error if used while not having been set.
"""
if self._sep_token is None and self.verbose:
logger.error("Using sep_token, but it is not set yet.")
return None
return str(self._sep_token)
@property
def pad_token(self) -> str:
"""
:obj:`str`: Padding token. Log an error if used while not having been set.
"""
if self._pad_token is None and self.verbose:
logger.error("Using pad_token, but it is not set yet.")
return None
return str(self._pad_token)
@property
def cls_token(self) -> str:
"""
:obj:`str`: Classification token, to extract a summary of an input sequence leveraging self-attention along
the full depth of the model. Log an error if used while not having been set.
"""
if self._cls_token is None and self.verbose:
logger.error("Using cls_token, but it is not set yet.")
return None
return str(self._cls_token)
@property
def mask_token(self) -> str:
"""
:obj:`str`: Mask token, to use when training a model with masked-language modeling. Log an error if used while
not having been set.
"""
if self._mask_token is None and self.verbose:
logger.error("Using mask_token, but it is not set yet.")
return None
return str(self._mask_token)
@property
def additional_special_tokens(self) -> List[str]:
"""
:obj:`List[str]`: All the additional special tokens you may want to use. Log an error if used while not having
been set.
"""
if self._additional_special_tokens is None and self.verbose:
logger.error("Using additional_special_tokens, but it is not set yet.")
return None
@@ -816,70 +929,99 @@ class SpecialTokensMixin:
self._additional_special_tokens = value
@property
def bos_token_id(self) -> Optional[int]:
"""
:obj:`Optional[int]`: Id of the beginning of sentence token in the vocabulary. Returns :obj:`None` if the token
has not been set.
"""
if self._bos_token is None:
return None
return self.convert_tokens_to_ids(self.bos_token)
@property
def eos_token_id(self) -> Optional[int]:
"""
:obj:`Optional[int]`: Id of the end of sentence token in the vocabulary. Returns :obj:`None` if the token has
not been set.
"""
if self._eos_token is None:
return None
return self.convert_tokens_to_ids(self.eos_token)
@property
def unk_token_id(self) -> Optional[int]:
"""
:obj:`Optional[int]`: Id of the unknown token in the vocabulary. Returns :obj:`None` if the token has not been
set.
"""
if self._unk_token is None:
return None
return self.convert_tokens_to_ids(self.unk_token)
@property
def sep_token_id(self) -> Optional[int]:
"""
:obj:`Optional[int]`: Id of the separation token in the vocabulary, to separate context and query in an input
sequence. Returns :obj:`None` if the token has not been set.
"""
if self._sep_token is None:
return None
return self.convert_tokens_to_ids(self.sep_token)
@property
def pad_token_id(self) -> Optional[int]:
"""
:obj:`Optional[int]`: Id of the padding token in the vocabulary. Returns :obj:`None` if the token has not been
set.
"""
if self._pad_token is None:
return None
return self.convert_tokens_to_ids(self.pad_token)
@property
def pad_token_type_id(self) -> int:
"""
:obj:`int`: Id of the padding token type in the vocabulary.
"""
return self._pad_token_type_id
@property
def cls_token_id(self) -> Optional[int]:
"""
:obj:`Optional[int]`: Id of the classification token in the vocabulary, to extract a summary of an input
sequence leveraging self-attention along the full depth of the model.
Returns :obj:`None` if the token has not been set.
"""
if self._cls_token is None:
return None
return self.convert_tokens_to_ids(self.cls_token)
@property
def mask_token_id(self) -> Optional[int]:
"""
:obj:`Optional[int]`: Id of the mask token in the vocabulary, used when training a model with masked-language
modeling. Returns :obj:`None` if the token has not been set.
"""
if self._mask_token is None:
return None
return self.convert_tokens_to_ids(self.mask_token)
@property
def additional_special_tokens_ids(self) -> List[int]:
"""
:obj:`List[int]`: Ids of all the additional special tokens in the vocabulary.
Log an error if used while not having been set.
"""
return self.convert_tokens_to_ids(self.additional_special_tokens)
@property
def special_tokens_map(self) -> Dict[str, Union[str, List[str]]]:
"""
:obj:`Dict[str, Union[str, List[str]]]`: A dictionary mapping special token class attributes
(:obj:`cls_token`, :obj:`unk_token`, etc.) to their values (:obj:`'<unk>'`, :obj:`'<cls>'`, etc.).
Convert potential tokens of :obj:`tokenizers.AddedToken` type to string.
"""
set_attr = {}
for attr in self.SPECIAL_TOKENS_ATTRIBUTES:
@@ -889,12 +1031,14 @@ class SpecialTokensMixin:
return set_attr
@property
def special_tokens_map_extended(self) -> Dict[str, Union[str, AddedToken, List[Union[str, AddedToken]]]]:
"""
:obj:`Dict[str, Union[str, tokenizers.AddedToken, List[Union[str, tokenizers.AddedToken]]]]`: A dictionary
mapping special token class attributes (:obj:`cls_token`, :obj:`unk_token`, etc.) to their values
(:obj:`'<unk>'`, :obj:`'<cls>'`, etc.).
Don't convert tokens of :obj:`tokenizers.AddedToken` type to string so they can be used to control more finely
how special tokens are tokenized.
"""
set_attr = {}
for attr in self.SPECIAL_TOKENS_ATTRIBUTES:
@@ -904,21 +1048,23 @@ class SpecialTokensMixin:
return set_attr
@property
def all_special_tokens(self) -> List[str]:
"""
:obj:`List[str]`: All the special tokens (:obj:`'<unk>'`, :obj:`'<cls>'`, etc.) mapped to class attributes.
Convert tokens of :obj:`tokenizers.AddedToken` type to string.
"""
all_toks = [str(s) for s in self.all_special_tokens_extended]
return all_toks
@property
def all_special_tokens_extended(self) -> List[Union[str, AddedToken]]:
"""
:obj:`List[Union[str, tokenizers.AddedToken]]`: All the special tokens (:obj:`'<unk>'`, :obj:`'<cls>'`, etc.)
mapped to class attributes.
Don't convert tokens of :obj:`tokenizers.AddedToken` type to string so they can be used to control more finely
how special tokens are tokenized.
"""
all_toks = []
set_attr = self.special_tokens_map_extended
@@ -928,9 +1074,10 @@ class SpecialTokensMixin:
return all_toks
@property
def all_special_ids(self) -> List[int]:
"""
:obj:`List[int]`: List the ids of the special tokens (:obj:`'<unk>'`, :obj:`'<cls>'`, etc.) mapped to class
attributes.
"""
all_toks = self.all_special_tokens
all_ids = self.convert_tokens_to_ids(all_toks)
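# A quick sketch of the special-token accessors defined above (values shown are those of the standard
# 'bert-base-uncased' vocabulary and may differ for other checkpoints).
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(tokenizer.cls_token, tokenizer.sep_token, tokenizer.mask_token)           # [CLS] [SEP] [MASK]
print(tokenizer.cls_token_id, tokenizer.sep_token_id, tokenizer.pad_token_id)   # 101 102 0
print(tokenizer.special_tokens_map)   # {'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', ...}
print(tokenizer.all_special_tokens)   # ['[UNK]', '[SEP]', '[PAD]', '[CLS]', '[MASK]']
print(tokenizer.all_special_ids)      # [100, 102, 0, 101, 103]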
@@ -939,96 +1086,181 @@ class SpecialTokensMixin:
ENCODE_KWARGS_DOCSTRING = r"""
add_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`True`):
Whether or not to encode the sequences with the special tokens relative to their model.
padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`False`):
Activates and controls padding. Accepts the following values:
* :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a
single sequence is provided).
* :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
maximum acceptable input length for the model if that argument is not provided.
* :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of
different lengths).
truncation (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.TruncationStrategy`, `optional`, defaults to :obj:`False`):
Activates and controls truncation. Accepts the following values:
* :obj:`True` or :obj:`'longest_first'`: Truncate to a maximum length specified with the argument
:obj:`max_length` or to the maximum acceptable input length for the model if that argument is not
provided. This will truncate token by token, removing a token from the longest sequence in the pair
if a pair of sequences (or a batch of pairs) is provided.
* :obj:`'only_first'`: Truncate to a maximum length specified with the argument :obj:`max_length` or to
the maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
* :obj:`'only_second'`: Truncate to a maximum length specified with the argument :obj:`max_length` or
to the maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
* :obj:`False` or :obj:`'do_not_truncate'` (default): No truncation (i.e., can output a batch with
sequence lengths greater than the model maximum admissible input size).
max_length (:obj:`int`, `optional`):
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to :obj:`None`, this will use the predefined model maximum length if a maximum
length is required by one of the truncation/padding parameters. If the model has no specific maximum
input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (:obj:`int`, `optional`, defaults to 0):
If set to a number along with :obj:`max_length`, the overflowing tokens returned when
:obj:`return_overflowing_tokens=True` will contain some tokens from the end of the truncated sequence
returned to provide some overlap between truncated and overflowing sequences. The value of this
argument defines the number of overlapping tokens.
is_pretokenized (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not the input is already tokenized.
pad_to_multiple_of (:obj:`int`, `optional`):
If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable
the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
return_tensors (:obj:`str` or :class:`~transformers.tokenization_utils_base.TensorType`, `optional`):
If set, will return tensors instead of lists of python integers. Acceptable values are:
* :obj:`'tf'`: Return TensorFlow :obj:`tf.constant` objects.
* :obj:`'pt'`: Return PyTorch :obj:`torch.Tensor` objects.
* :obj:`'np'`: Return Numpy :obj:`np.ndarray` objects.
"""
ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING = r"""
return_token_type_ids (:obj:`bool`, `optional`):
Whether to return token type IDs. If left to the default, will return the token type IDs according
to the specific tokenizer's default, defined by the :obj:`return_outputs` attribute.
`What are token type IDs? <../glossary.html#token-type-ids>`__
return_attention_mask (:obj:`bool`, `optional`):
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer's default, defined by the :obj:`return_outputs` attribute.
`What are attention masks? <../glossary.html#attention-mask>`__
return_overflowing_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not to return overflowing token sequences.
return_special_tokens_mask (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not to return special tokens mask information.
return_offsets_mapping (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not to return :obj:`(char_start, char_end)` for each token.
This is only available on fast tokenizers inheriting from
:class:`~transformers.PreTrainedTokenizerFast`; if using Python's tokenizer, this method will raise
:obj:`NotImplementedError`.
return_length (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not to return the lengths of the encoded inputs.
verbose (:obj:`bool`, `optional`, defaults to :obj:`True`):
Whether or not to print information and warnings.
**kwargs: passed to the :obj:`self.tokenize()` method
Return:
:class:`~transformers.BatchEncoding`: A :class:`~transformers.BatchEncoding` with the following fields:
- **input_ids** -- List of token ids to be fed to a model.
`What are input IDs? <../glossary.html#input-ids>`__
- **token_type_ids** -- List of token type ids to be fed to a model (when :obj:`return_token_type_ids=True`
or if `"token_type_ids"` is in :obj:`self.model_input_names`).
`What are token type IDs? <../glossary.html#token-type-ids>`__
- **attention_mask** -- List of indices specifying which tokens should be attended to by the model (when
:obj:`return_attention_mask=True` or if `"attention_mask"` is in :obj:`self.model_input_names`).
`What are attention masks? <../glossary.html#attention-mask>`__
- **overflowing_tokens** -- List of overflowing token sequences (when a :obj:`max_length` is specified and
:obj:`return_overflowing_tokens=True`).
- **num_truncated_tokens** -- Number of tokens truncated (when a :obj:`max_length` is specified and
:obj:`return_overflowing_tokens=True`).
- **special_tokens_mask** -- List of 0s and 1s, with 1 specifying added special tokens and 0 specifying
regular sequence tokens (when :obj:`add_special_tokens=True` and :obj:`return_special_tokens_mask=True`).
- **length** -- The length of the inputs (when :obj:`return_length=True`).
"""
INIT_TOKENIZER_DOCSTRING = r"""
Class attributes (overridden by derived classes)
- **vocab_files_names** (:obj:`Dict[str, str]`) -- A dictionary with, as keys, the ``__init__`` keyword name of
each vocabulary file required by the model, and as associated values, the filename for saving the associated
file (string).
- **pretrained_vocab_files_map** (:obj:`Dict[str, Dict[str, str]]`) -- A dictionary of dictionaries, with the
high-level keys being the ``__init__`` keyword name of each vocabulary file required by the model, the
low-level being the :obj:`short-cut-names` of the pretrained models with, as associated values, the
:obj:`url` to the associated pretrained vocabulary file.
- **max_model_input_sizes** (:obj:`Dict[str, Optional[int]]`) -- A dictionary with, as keys, the
:obj:`short-cut-names` of the pretrained models, and as associated values, the maximum length of the sequence
inputs of this model, or :obj:`None` if the model has no maximum input size.
- **pretrained_init_configuration** (:obj:`Dict[str, Dict[str, Any]]`) -- A dictionary with, as keys, the
:obj:`short-cut-names` of the pretrained models, and as associated values, a dictionary of specific
arguments to pass to the ``__init__`` method of the tokenizer class for this pretrained model when loading the
tokenizer with the :meth:`~transformers.tokenization_utils_base.PreTrainedTokenizerBase.from_pretrained`
method.
- **model_input_names** (:obj:`List[str]`) -- A list of inputs expected in the forward pass of the model.
- **padding_side** (:obj:`str`) -- The default value for the side on which the model should have padding
applied. Should be :obj:`'right'` or :obj:`'left'`.
Args:
model_max_length (:obj:`int`, `optional`):
The maximum length (in number of tokens) for the inputs to the transformer model.
When the tokenizer is loaded with
:meth:`~transformers.tokenization_utils_base.PreTrainedTokenizerBase.from_pretrained`, this will be set to
the value stored for the associated model in ``max_model_input_sizes`` (see above). If no value is
provided, will default to VERY_LARGE_INTEGER (:obj:`int(1e30)`).
padding_side (:obj:`str`, `optional`):
The side on which the model should have padding applied. Should be selected between ['right', 'left'].
Default value is picked from the class attribute of the same name.
model_input_names (:obj:`List[str]`, `optional`):
The list of inputs accepted by the forward pass of the model (like :obj:`"token_type_ids"` or
:obj:`"attention_mask"`). Default value is picked from the class attribute of the same name.
bos_token (:obj:`str` or :obj:`tokenizers.AddedToken`, `optional`):
A special token representing the beginning of a sentence. Will be associated to ``self.bos_token`` and
``self.bos_token_id``.
eos_token (:obj:`str` or :obj:`tokenizers.AddedToken`, `optional`):
A special token representing the end of a sentence. Will be associated to ``self.eos_token`` and
``self.eos_token_id``.
unk_token (:obj:`str` or :obj:`tokenizers.AddedToken`, `optional`):
A special token representing an out-of-vocabulary token. Will be associated to ``self.unk_token`` and
``self.unk_token_id``.
sep_token (:obj:`str` or :obj:`tokenizers.AddedToken`, `optional`):
A special token separating two different sentences in the same input (used by BERT for instance). Will be
associated to ``self.sep_token`` and ``self.sep_token_id``.
pad_token (:obj:`str` or :obj:`tokenizers.AddedToken`, `optional`):
A special token used to make arrays of tokens the same size for batching purposes. Will then be ignored by
attention mechanisms or loss computation. Will be associated to ``self.pad_token`` and
``self.pad_token_id``.
cls_token (:obj:`str` or :obj:`tokenizers.AddedToken`, `optional`):
A special token representing the class of the input (used by BERT for instance). Will be associated to
``self.cls_token`` and ``self.cls_token_id``.
mask_token (:obj:`str` or :obj:`tokenizers.AddedToken`, `optional`):
A special token representing a masked token (used by masked-language modeling pretraining objectives, like
BERT). Will be associated to ``self.mask_token`` and ``self.mask_token_id``.
additional_special_tokens (tuple or list of :obj:`str` or :obj:`tokenizers.AddedToken`, `optional`):
A tuple or a list of additional special tokens. Add them here to ensure they won't be split by the
tokenization process. Will be associated to ``self.additional_special_tokens`` and
``self.additional_special_tokens_ids``.
""" """
@add_end_docstrings(INIT_TOKENIZER_DOCSTRING)
class PreTrainedTokenizerBase(SpecialTokensMixin):
"""
Base class for :class:`~transformers.PreTrainedTokenizer` and :class:`~transformers.PreTrainedTokenizerFast`.
Handles shared (mostly boilerplate) methods for those two classes.
"""
vocab_files_names: Dict[str, str] = {}
pretrained_vocab_files_map: Dict[str, Dict[str, str]] = {}
pretrained_init_configuration: Dict[str, Dict[str, Any]] = {}
max_model_input_sizes: Dict[str, Optional[int]] = {}
model_input_names: List[str] = ["token_type_ids", "attention_mask"]
padding_side: str = "right"
def __init__(self, **kwargs):
@@ -1052,22 +1284,33 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
@property
def max_len(self) -> int:
"""
:obj:`int`: **Deprecated** Kept here for backward compatibility. Now renamed to :obj:`model_max_length` to
avoid ambiguity.
"""
warnings.warn(
"The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead.",
FutureWarning,
)
return self.model_max_length
@property
def max_len_single_sentence(self) -> int:
"""
:obj:`int`: The maximum length of a sentence that can be fed to the model.
"""
return self.model_max_length - self.num_special_tokens_to_add(pair=False)
@property
def max_len_sentences_pair(self) -> int:
"""
:obj:`int`: The maximum combined length of a pair of sentences that can be fed to the model.
"""
return self.model_max_length - self.num_special_tokens_to_add(pair=True)
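# The arithmetic behind the two properties above, as a sketch ('bert-base-uncased' reserves two special
# tokens, [CLS] and [SEP], for a single sentence and three for a pair).
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(tokenizer.model_max_length)                       # 512
print(tokenizer.num_special_tokens_to_add(pair=False))  # 2
print(tokenizer.max_len_single_sentence)                # 512 - 2 = 510
print(tokenizer.max_len_sentences_pair)                 # 512 - 3 = 509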
@max_len_single_sentence.setter
def max_len_single_sentence(self, value) -> int:
# For backward compatibility, allow to try to setup 'max_len_single_sentence'.
if value == self.model_max_length - self.num_special_tokens_to_add(pair=False) and self.verbose:
logger.warning(
"Setting 'max_len_single_sentence' is now deprecated. " "This value is automatically set up."
@@ -1079,7 +1322,7 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
@max_len_sentences_pair.setter
def max_len_sentences_pair(self, value) -> int:
# For backward compatibility, allow to try to setup 'max_len_sentences_pair'.
if value == self.model_max_length - self.num_special_tokens_to_add(pair=True) and self.verbose:
logger.warning(
"Setting 'max_len_sentences_pair' is now deprecated. " "This value is automatically set up."
@@ -1092,37 +1335,46 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
@classmethod
def from_pretrained(cls, *inputs, **kwargs):
r"""
Instantiate a :class:`~transformers.tokenization_utils_base.PreTrainedTokenizerBase` (or a derived class) from
a predefined tokenizer.
Args:
pretrained_model_name_or_path (:obj:`str`):
Can be either:
- A string with the `shortcut name` of a predefined tokenizer to load from cache or download, e.g.,
``bert-base-uncased``.
- A string with the `identifier name` of a predefined tokenizer that was user-uploaded to our S3, e.g.,
``dbmdz/bert-base-german-cased``.
- A path to a `directory` containing vocabulary files required by the tokenizer, for instance saved
using the :meth:`~transformers.tokenization_utils_base.PreTrainedTokenizerBase.save_pretrained`
method, e.g., ``./my_model_directory/``.
- (**Deprecated**, not applicable to all derived classes) A path or url to a single saved vocabulary
file (if and only if the tokenizer only requires a single vocabulary file like Bert or XLNet), e.g.,
``./my_model_directory/vocab.txt``.
cache_dir (:obj:`str`, `optional`):
Path to a directory in which the downloaded predefined tokenizer vocabulary files should be cached if
the standard cache should not be used.
force_download (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not to force the (re-)download of the vocabulary files and override the cached versions if
they exist.
resume_download (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not to delete incompletely received files. Attempt to resume the download if such a file
exists.
proxies (:obj:`Dict[str, str]`, `optional`):
A dictionary of proxy servers to use by protocol or endpoint, e.g.,
:obj:`{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each
request.
inputs (additional positional arguments, `optional`):
Will be passed along to the Tokenizer ``__init__`` method.
kwargs (additional keyword arguments, `optional`):
Will be passed to the Tokenizer ``__init__`` method. Can be used to set special tokens like
``bos_token``, ``eos_token``, ``unk_token``, ``sep_token``, ``pad_token``, ``cls_token``,
``mask_token``, ``additional_special_tokens``. See parameters in the ``__init__`` for more details.
Examples::
# We can't instantiate directly the base class `PreTrainedTokenizerBase` so let's show our examples on a derived class: BertTokenizer
# Download vocabulary from S3 and cache.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
@@ -1336,17 +1588,26 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
return tokenizer
def save_pretrained(self, save_directory: str) -> Tuple[str]:
"""
Save the tokenizer vocabulary files together with:
- added tokens,
- special tokens to class attributes mapping,
- tokenizer instantiation positional and keyword inputs (e.g. do_lower_case for Bert).
This method makes sure the full tokenizer can then be re-loaded using the
:meth:`~transformers.tokenization_utils_base.PreTrainedTokenizerBase.from_pretrained` class method.
.. Warning::
This won't save modifications you may have applied to the tokenizer after the instantiation (for instance,
modifying :obj:`tokenizer.do_lower_case` after creation).
Args:
save_directory (:obj:`str`): The path to a directory where the tokenizer will be saved.
Returns:
A tuple of :obj:`str`: The files saved.
"""
if os.path.isfile(save_directory):
logger.error("Provided path ({}) should be a directory, not a file".format(save_directory))
@@ -1388,7 +1649,12 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
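# A sketch of the save/re-load round trip described above (writes to a local './my_tokenizer' directory;
# any writable path would do).
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokenizer.add_tokens(['new_tok1'])

saved_files = tokenizer.save_pretrained('./my_tokenizer')   # returns the tuple of file paths written
print(saved_files)

reloaded = BertTokenizer.from_pretrained('./my_tokenizer')  # added tokens and init kwargs are restored
assert 'new_tok1' in reloaded.get_vocab()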
@add_end_docstrings(
ENCODE_KWARGS_DOCSTRING,
"""
**kwargs: Passed along to the `.tokenize()` method.
""",
"""
Returns:
:obj:`List[int]`, :obj:`torch.Tensor`, :obj:`tf.Tensor` or :obj:`np.ndarray`:
The tokenized ids of the text.
""",
)
def encode(
@@ -1396,27 +1662,27 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
text: Union[TextInput, PreTokenizedInput, EncodedInput],
text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]] = None,
add_special_tokens: bool = True,
padding: Union[bool, str, PaddingStrategy] = False,
truncation: Union[bool, str, TruncationStrategy] = False,
max_length: Optional[int] = None,
stride: int = 0,
return_tensors: Optional[Union[str, TensorType]] = None,
**kwargs
) -> List[int]:
"""
Converts a string to a sequence of ids (integer), using the tokenizer and vocabulary.
Same as doing ``self.convert_tokens_to_ids(self.tokenize(text))``.
Args:
text (:obj:`str`, :obj:`List[str]` or :obj:`List[int]`):
The first sequence to be encoded. This can be a string, a list of strings (tokenized string using
the ``tokenize`` method) or a list of integers (tokenized string ids using the
``convert_tokens_to_ids`` method).
text_pair (:obj:`str`, :obj:`List[str]` or :obj:`List[int]`, `optional`):
Optional second sequence to be encoded. This can be a string, a list of strings (tokenized
string using the ``tokenize`` method) or a list of integers (tokenized string ids using the
``convert_tokens_to_ids`` method).
"""
encoded_inputs = self.encode_plus(
text,
@@ -1438,7 +1704,8 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
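# A sketch of ``encode`` (ids shown are from the 'bert-base-uncased' vocabulary).
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(tokenizer.encode("Hello world"))                            # [101, 7592, 2088, 102]
print(tokenizer.encode("Hello world", add_special_tokens=False))  # [7592, 2088]
# equivalent to converting the tokens produced by ``tokenize`` yourself:
print(tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world")))  # [7592, 2088]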
def _get_padding_truncation_strategies(
self, padding=False, truncation=False, max_length=None, pad_to_multiple_of=None, verbose=True, **kwargs
):
"""
Find the correct padding/truncation strategy with backward compatibility
for old arguments (truncation_strategy and pad_to_max_length) and behaviors.
"""
old_truncation_strategy = kwargs.pop("truncation_strategy", "do_not_truncate")
@@ -1558,8 +1825,8 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]],
text_pair: Optional[Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]]] = None,
add_special_tokens: bool = True,
padding: Union[bool, str, PaddingStrategy] = False,
truncation: Union[bool, str, TruncationStrategy] = False,
max_length: Optional[int] = None,
stride: int = 0,
is_pretokenized: bool = False,
@@ -1575,20 +1842,20 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
**kwargs
) -> BatchEncoding:
"""
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences.
Args:
text (:obj:`str`, :obj:`List[str]`, :obj:`List[List[str]]`):
The sequence or batch of sequences to be encoded.
Each sequence can be a string or a list of strings (pretokenized string).
If the sequences are provided as lists of strings (pretokenized), you must set
:obj:`is_pretokenized=True` (to lift the ambiguity with a batch of sequences).
text_pair (:obj:`str`, :obj:`List[str]`, :obj:`List[List[str]]`, `optional`):
The sequence or batch of sequences to be encoded.
Each sequence can be a string or a list of strings (pretokenized string).
If the sequences are provided as lists of strings (pretokenized), you must set
:obj:`is_pretokenized=True` (to lift the ambiguity with a batch of sequences).
"""
# Input type checking for clearer error
assert isinstance(text, str) or (
@@ -1680,8 +1947,8 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
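# A sketch of the main ``__call__`` entry point described above, showing the supported input shapes
# (a fast tokenizer is used, but a slow one accepts the same calls; exact ids depend on the checkpoint).
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')

enc = tokenizer("A single sequence")                                    # one sequence
enc = tokenizer("A first sequence", "and its pair")                     # a pair of sequences
enc = tokenizer(["sequence 1", "sequence 2"], padding=True)             # a batch of sequences
enc = tokenizer([["pre", "tokenized", "words"]], is_pretokenized=True)  # pretokenized input (note the flag)
print(enc.keys())  # dict_keys(['input_ids', 'token_type_ids', 'attention_mask'])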
text: Union[TextInput, PreTokenizedInput, EncodedInput],
text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]] = None,
add_special_tokens: bool = True,
padding: Union[bool, str, PaddingStrategy] = False,
truncation: Union[bool, str, TruncationStrategy] = False,
max_length: Optional[int] = None,
stride: int = 0,
is_pretokenized: bool = False,
@@ -1697,18 +1964,20 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
**kwargs
) -> BatchEncoding:
"""
Tokenize and prepare for the model a sequence or a pair of sequences.
.. warning::
This method is deprecated, ``__call__`` should be used instead.
Args:
text (:obj:`str`, :obj:`List[str]` or :obj:`List[int]` (the latter only for not-fast tokenizers)):
The first sequence to be encoded. This can be a string, a list of strings (tokenized string using
the ``tokenize`` method) or a list of integers (tokenized string ids using the
``convert_tokens_to_ids`` method).
text_pair (:obj:`str`, :obj:`List[str]` or :obj:`List[int]`, `optional`):
Optional second sequence to be encoded. This can be a string, a list of strings (tokenized
string using the ``tokenize`` method) or a list of integers (tokenized string ids using the
``convert_tokens_to_ids`` method).
"""
# Backward compatibility for 'truncation_strategy', 'pad_to_max_length'
@@ -1777,8 +2046,8 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
List[EncodedInputPair],
],
add_special_tokens: bool = True,
padding: Union[bool, str, PaddingStrategy] = False,
truncation: Union[bool, str, TruncationStrategy] = False,
max_length: Optional[int] = None,
stride: int = 0,
is_pretokenized: bool = False,
@@ -1794,17 +2063,16 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
**kwargs
) -> BatchEncoding:
"""
Tokenize and prepare for the model a list of sequences or a list of pairs of sequences.
.. warning::
This method is deprecated, ``__call__`` should be used instead.
Args:
batch_text_or_text_pairs (:obj:`List[str]`, :obj:`List[Tuple[str, str]]`, :obj:`List[List[str]]`, :obj:`List[Tuple[List[str], List[str]]]`, and for not-fast tokenizers, also :obj:`List[List[int]]`, :obj:`List[Tuple[List[int], List[int]]]`):
Batch of sequences or pair of sequences to be encoded.
This can be a list of strings/string-sequences/int-sequences or a list of pairs of
strings/string-sequences/int-sequences (see details in ``encode_plus``).
"""
# Backward compatibility for 'truncation_strategy', 'pad_to_max_length'
@@ -1875,39 +2143,56 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
Dict[str, List[EncodedInput]],
List[Dict[str, EncodedInput]],
],
padding: Union[bool, str, PaddingStrategy] = True,
max_length: Optional[int] = None,
pad_to_multiple_of: Optional[int] = None,
return_attention_mask: Optional[bool] = None,
return_tensors: Optional[Union[str, TensorType]] = None,
verbose: bool = True,
) -> BatchEncoding:
"""
Pad a single encoded input or a batch of encoded inputs up to predefined length or to the max sequence length
in the batch.
Padding side (left/right) and padding token ids are defined at the tokenizer level
(with ``self.padding_side``, ``self.pad_token_id`` and ``self.pad_token_type_id``).
Args:
encoded_inputs (:class:`~transformers.BatchEncoding`, list of :class:`~transformers.BatchEncoding`, :obj:`Dict[str, List[int]]`, :obj:`Dict[str, List[List[int]]]` or :obj:`List[Dict[str, List[int]]]`):
Tokenized inputs. Can represent one input (:class:`~transformers.BatchEncoding` or
:obj:`Dict[str, List[int]]`) or a batch of tokenized inputs (list of
:class:`~transformers.BatchEncoding`, `Dict[str, List[List[int]]]` or `List[Dict[str, List[int]]]`) so
you can use this method during preprocessing as well as in a PyTorch Dataloader collate function.
padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):
Select a strategy to pad the returned sequences (according to the model's padding side and padding
index) among:
* :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a
single sequence is provided).
* :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
maximum acceptable input length for the model if that argument is not provided.
* :obj:`False` or :obj:`'do_not_pad'`: No padding (i.e., can output a batch with sequences of
different lengths).
max_length (:obj:`int`, `optional`):
Maximum length of the returned list and optionally padding length (see above).
pad_to_multiple_of (:obj:`int`, `optional`):
If set, will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
>= 7.5 (Volta).
return_attention_mask (:obj:`bool`, `optional`):
Whether to return the attention mask. If left to the default, will return the attention mask according
to the specific tokenizer's default, defined by the :obj:`return_outputs` attribute.
`What are attention masks? <../glossary.html#attention-mask>`__
return_tensors (:obj:`str` or :class:`~transformers.tokenization_utils_base.TensorType`, `optional`):
If set, will return tensors instead of lists of python integers. Acceptable values are:
* :obj:`'tf'`: Return TensorFlow :obj:`tf.constant` objects.
* :obj:`'pt'`: Return PyTorch :obj:`torch.Tensor` objects.
* :obj:`'np'`: Return Numpy :obj:`np.ndarray` objects.
verbose (:obj:`bool`, `optional`, defaults to :obj:`True`):
Whether or not to print information and warnings.
""" """
# If we have a list of dicts, let's convert it into a dict of lists
if isinstance(encoded_inputs, (list, tuple)) and isinstance(encoded_inputs[0], (dict, BatchEncoding)):
...@@ -1966,15 +2251,41 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
return BatchEncoding(batch_outputs, tensor_type=return_tensors)
def create_token_type_ids_from_sequences(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Create the token type IDs corresponding to the sequences passed.
`What are token type IDs? <../glossary.html#token-type-ids>`__
Should be overridden in a subclass if the model has a special way of building those.
Args:
token_ids_0 (:obj:`List[int]`): The first tokenized sequence.
token_ids_1 (:obj:`List[int]`, `optional`): The second tokenized sequence.
Returns:
:obj:`List[int]`: The token type ids.
"""
if token_ids_1 is None:
return len(token_ids_0) * [0]
return [0] * len(token_ids_0) + [1] * len(token_ids_1)
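# Illustrative sketch (not from this diff): the base implementation above simply
# assigns segment id 0 to the first sequence and 1 to the second; model-specific
# subclasses also account for the special tokens they add.
token_ids_0 = [5, 6, 7]
token_ids_1 = [8, 9]
token_type_ids = [0] * len(token_ids_0) + [1] * len(token_ids_1)
assert token_type_ids == [0, 0, 0, 1, 1]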
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
"""
Build model inputs from a sequence or a pair of sequences for sequence classification tasks
by concatenating and adding special tokens.
This implementation does not add special tokens and this method should be overridden in a subclass.
Args:
token_ids_0 (:obj:`List[int]`): The first tokenized sequence.
token_ids_1 (:obj:`List[int]`, `optional`): The second tokenized sequence.
Returns:
:obj:`List[int]`: The model input with special tokens.
"""
if token_ids_1 is None:
return token_ids_0
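# Illustrative sketch (not from this diff): subclasses override this method to add
# their model's special tokens. With a BERT tokenizer (assuming the public
# ``bert-base-uncased`` checkpoint) the result is ``[CLS] seq_0 [SEP] seq_1 [SEP]``.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
ids_0 = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
ids_1 = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("How are you?"))
with_special = tokenizer.build_inputs_with_special_tokens(ids_0, ids_1)
print(tokenizer.convert_ids_to_tokens(with_special))  # starts with [CLS], two [SEP]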
...@@ -1986,8 +2297,8 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
ids: List[int],
pair_ids: Optional[List[int]] = None,
add_special_tokens: bool = True,
padding: Union[bool, str, PaddingStrategy] = False,
truncation: Union[bool, str, TruncationStrategy] = False,
max_length: Optional[int] = None,
stride: int = 0,
pad_to_multiple_of: Optional[int] = None,
...@@ -2002,15 +2313,18 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
prepend_batch_axis: bool = False,
**kwargs
) -> BatchEncoding:
""" Prepares a sequence of input id, or a pair of sequences of inputs ids so that it can be used by the model. """
Prepares a sequence of input id, or a pair of sequences of inputs ids so that it can be used by the model.
It adds special tokens, truncates sequences if overflowing while taking into account the special tokens and It adds special tokens, truncates sequences if overflowing while taking into account the special tokens and
manages a moving window (with user defined stride) for overflowing tokens manages a moving window (with user defined stride) for overflowing tokens
Args: Args:
ids: list of tokenized input ids. Can be obtained from a string by chaining the ids (:obj:`List[int]`):
`tokenize` and `convert_tokens_to_ids` methods. Tokenized input ids of the first sequence. Can be obtained from a string by chaining the
pair_ids: Optional second list of input ids. Can be obtained from a string by chaining the ``tokenize`` and ``convert_tokens_to_ids`` methods.
`tokenize` and `convert_tokens_to_ids` methods. pair_ids (:obj:`List[int]`, `optional`):
Tokenized input ids of the second sequence. Can be obtained from a string by chaining the
``tokenize`` and ``convert_tokens_to_ids`` methods.
""" """
if "return_lengths" in kwargs: if "return_lengths" in kwargs:
...@@ -2113,27 +2427,46 @@ class PreTrainedTokenizerBase(SpecialTokensMixin): ...@@ -2113,27 +2427,46 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
truncation_strategy: Union[str, TruncationStrategy] = "longest_first", truncation_strategy: Union[str, TruncationStrategy] = "longest_first",
stride: int = 0, stride: int = 0,
) -> Tuple[List[int], List[int], List[int]]: ) -> Tuple[List[int], List[int], List[int]]:
""" Truncates a sequence pair in place to the maximum length. """
Truncates a sequence pair in-place following the strategy.
Args: Args:
ids: list of tokenized input ids. Can be obtained from a string by chaining the ids (:obj:`List[int]`):
`tokenize` and `convert_tokens_to_ids` methods. Tokenized input ids of the first sequence. Can be obtained from a string by chaining the
pair_ids: Optional second list of input ids. Can be obtained from a string by chaining the ``tokenize`` and ``convert_tokens_to_ids`` methods.
`tokenize` and `convert_tokens_to_ids` methods. pair_ids (:obj:`List[int]`, `optional`):
num_tokens_to_remove (:obj:`int`, `optional`, defaults to ``0``): Tokenized input ids of the second sequence. Can be obtained from a string by chaining the
number of tokens to remove using the truncation strategy ``tokenize`` and ``convert_tokens_to_ids`` methods.
truncation_strategy (:obj:`string`, `optional`, defaults to "longest_first"): num_tokens_to_remove (:obj:`int`, `optional`, defaults to 0):
String selected in the following options: Number of tokens to remove using the truncation strategy.
truncation (:obj:`str` or :class:`~transformers.tokenization_utils_base.TruncationStrategy`, `optional`, defaults to :obj:`False`):
- 'longest_first' (default): Iteratively reduce the inputs sequence until the input is under max_length The strategy to follow for truncation. Can be:
starting from the longest one at each token (when there is a pair of input sequences).
Overflowing tokens only contains overflow from the first sequence. * :obj:`'longest_first'`: Truncate to a maximum length specified with the argument
- 'only_first': Only truncate the first sequence. raise an error if the first sequence is shorter or equal to than num_tokens_to_remove. :obj:`max_length` or to the maximum acceptable input length for the model if that argument is not
- 'only_second': Only truncate the second sequence provided. This will truncate token by token, removing a token from the longest sequence in the pair
- 'do_not_truncate' if a pair of sequences (or a batch of pairs) is provided.
stride (:obj:`int`, `optional`, defaults to ``0``): * :obj:`'only_first'`: Truncate to a maximum length specified with the argument :obj:`max_length` or to
If set to a number along with max_length, the overflowing tokens returned will contain some tokens the maximum acceptable input length for the model if that argument is not provided. This will only
truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
* :obj:`'only_second'`: Truncate to a maximum length specified with the argument :obj:`max_length` or
to the maximum acceptable input length for the model if that argument is not provided. This will only
truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided.
* :obj:`'do_not_truncate'` (default): No truncation (i.e., can output batch with
sequence lengths greater than the model maximum admissible input size).
max_length (:obj:`int`, `optional`):
Controls the maximum length to use by one of the truncation/padding parameters.
If left unset or set to :obj:`None`, this will use the predefined model maximum length if a maximum
length is required by one of the truncation/padding parameters. If the model has no specific maximum
input length (like XLNet) truncation/padding to a maximum length will be deactivated.
stride (:obj:`int`, `optional`, defaults to 0):
If set to a positive number, the overflowing tokens returned will contain some tokens
from the main sequence returned. The value of this argument defines the number of additional tokens. from the main sequence returned. The value of this argument defines the number of additional tokens.
Returns:
:obj:`Tuple[List[int], List[int], List[int]]`:
The truncated ``ids``, the truncated ``pair_ids`` and the list of overflowing tokens.
""" """
if num_tokens_to_remove <= 0:
return ids, pair_ids, []
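# Illustrative sketch (not from this diff): in practice the truncation strategy is
# usually selected through ``__call__`` rather than by calling ``truncate_sequences``
# directly (assuming ``bert-base-uncased``).
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
question = "What is a tokenizer?"
context = "A tokenizer prepares the inputs for a model. " * 50  # deliberately too long
encoded = tokenizer(question, context, truncation="only_second", max_length=64)
assert len(encoded["input_ids"]) <= 64  # only the context was truncated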
...@@ -2193,7 +2526,8 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
pad_to_multiple_of: Optional[int] = None,
return_attention_mask: Optional[bool] = None,
) -> dict:
""" Pad encoded inputs (on left/right and up to predefined legnth or max length in the batch) """
Pad encoded inputs (on left/right and up to predefined legnth or max length in the batch)
Args: Args:
encoded_inputs: Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`). encoded_inputs: Dictionary of tokenized inputs (`List[int]`) or batch of tokenized inputs (`List[List[int]]`).
...@@ -2262,9 +2596,15 @@ class PreTrainedTokenizerBase(SpecialTokensMixin): ...@@ -2262,9 +2596,15 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
Convert a list of lists of token ids into a list of strings by calling decode.
Args:
sequences (:obj:`List[List[int]]`):
List of tokenized input ids. Can be obtained using the ``__call__`` method.
skip_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (:obj:`bool`, `optional`, defaults to :obj:`True`):
Whether or not to clean up the tokenization spaces.
Returns:
:obj:`List[str]`: The list of decoded sentences.
"""
return [
self.decode(
...@@ -2277,30 +2617,38 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
self, token_ids: List[int], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True
) -> str:
""" """
Converts a sequence of ids (integer) in a string, using the tokenizer and vocabulary Converts a sequence of ids in a string, using the tokenizer and vocabulary
with options to remove special tokens and clean up tokenization spaces. with options to remove special tokens and clean up tokenization spaces.
Similar to doing ``self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))``. Similar to doing ``self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))``.
Args: Args:
token_ids: list of tokenized input ids. Can be obtained using the `encode` or `encode_plus` methods. token_ids (:obj:`List[int]`):
skip_special_tokens: if set to True, will replace special tokens. List of tokenized input ids. Can be obtained using the ``__call__`` method.
clean_up_tokenization_spaces: if set to True, will clean up the tokenization spaces. skip_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (:obj:`bool`, `optional`, defaults to :obj:`True`):
Whether or not to clean up the tokenization spaces.
Returns:
:obj:`str`: The decoded sentence.
""" """
raise NotImplementedError raise NotImplementedError
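# Illustrative sketch (not from this diff): round-tripping text through ``__call__``
# and ``decode``/``batch_decode`` (assuming ``bert-base-uncased``).
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["Hello world", "Tokenizers are fun"])
print(tokenizer.batch_decode(batch["input_ids"], skip_special_tokens=True))
print(tokenizer.decode(batch["input_ids"][0]))  # keeps [CLS]/[SEP] by default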
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer ``prepare_for_model`` or ``encode_plus`` methods.
Args:
token_ids_0 (:obj:`List[int]`):
List of ids of the first sequence.
token_ids_1 (:obj:`List[int]`, `optional`):
List of ids of the second sequence.
already_has_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not the token list is already formatted with special tokens for the model.
Returns:
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
...@@ -2320,7 +2668,14 @@ class PreTrainedTokenizerBase(SpecialTokensMixin):
@staticmethod
def clean_up_tokenization(out_string: str) -> str:
"""
Clean up a list of simple English tokenization artifacts like spaces before punctuation and abbreviated forms.
Args:
out_string (:obj:`str`): The text to clean up.
Returns:
:obj:`str`: The cleaned-up string.
"""
out_string = (
out_string.replace(" .", ".")
......
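# Illustrative sketch (not from this diff): ``clean_up_tokenization`` removes the
# spaces decoding re-introduces before punctuation and in common English contractions;
# the exact replacement list is in the (partially elided) body above.
from transformers.tokenization_utils_base import PreTrainedTokenizerBase

cleaned = PreTrainedTokenizerBase.clean_up_tokenization("Hello , world . Do n't worry !")
print(cleaned)  # spaces before ',', '.', '!' and inside "n't" are expected to be removed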
...@@ -25,7 +25,9 @@ from tokenizers import Encoding as EncodingFast
from tokenizers.decoders import Decoder as DecoderFast
from tokenizers.implementations import BaseTokenizer as BaseTokenizerFast
from .file_utils import add_end_docstrings
from .tokenization_utils_base import (
INIT_TOKENIZER_DOCSTRING,
AddedToken,
BatchEncoding,
PaddingStrategy,
...@@ -41,10 +43,17 @@ from .tokenization_utils_base import (
logger = logging.getLogger(__name__)
@add_end_docstrings(
INIT_TOKENIZER_DOCSTRING,
"""
.. automethod:: __call__
""",
)
class PreTrainedTokenizerFast(PreTrainedTokenizerBase):
"""
Base class for all fast tokenizers (wrapping HuggingFace tokenizers library).
Inherits from :class:`~transformers.tokenization_utils_base.PreTrainedTokenizerBase`.
Handles all the shared methods for tokenization and special tokens, as well as methods for
downloading/caching/loading pretrained tokenizers and for adding tokens to the vocabulary.
...@@ -52,54 +61,6 @@ class PreTrainedTokenizerFast(PreTrainedTokenizerBase):
This class also contains the added tokens in a unified way on top of all tokenizers so we don't
have to handle the specific vocabulary augmentation methods of the various underlying
dictionary structures (BPE, sentencepiece...).
Class attributes (overridden by derived classes):
- ``vocab_files_names``: a python ``dict`` with, as keys, the ``__init__`` keyword name of each vocabulary file
required by the model, and as associated values, the filename for saving the associated file (string).
- ``pretrained_vocab_files_map``: a python ``dict of dict`` the high-level keys
being the ``__init__`` keyword name of each vocabulary file required by the model, the low-level being the
`short-cut-names` (string) of the pretrained models with, as associated values, the `url` (string) to the
associated pretrained vocabulary file.
- ``max_model_input_sizes``: a python ``dict`` with, as keys, the `short-cut-names` (string) of the pretrained
models, and as associated values, the maximum length of the sequence inputs of this model, or None if the
model has no maximum input size.
- ``pretrained_init_configuration``: a python ``dict`` with, as keys, the `short-cut-names` (string) of the
pretrained models, and as associated values, a dictionnary of specific arguments to pass to the
``__init__``method of the tokenizer class for this pretrained model when loading the tokenizer with the
``from_pretrained()`` method.
Args:
- ``tokenizer`` (`BaseTokenizerFast`): A Fast tokenizer from the HuggingFace tokenizer library (in low level Rust language)
- ``model_max_length``: (`Optional`) int: the maximum length in number of tokens for the inputs to the transformer model.
When the tokenizer is loaded with `from_pretrained`, this will be set to the value stored for the associated
model in ``max_model_input_sizes`` (see above). If no value is provided, will default to VERY_LARGE_INTEGER (`int(1e30)`).
no associated max_length can be found in ``max_model_input_sizes``.
- ``padding_side``: (`Optional`) string: the side on which the model should have padding applied.
Should be selected between ['right', 'left']
- ``model_input_names``: (`Optional`) List[string]: the list of the forward pass inputs accepted by the
model ("token_type_ids", "attention_mask"...).
- ``bos_token``: (`Optional`) string: a beginning of sentence token.
Will be associated to ``self.bos_token`` and ``self.bos_token_id``
- ``eos_token``: (`Optional`) string: an end of sentence token.
Will be associated to ``self.eos_token`` and ``self.eos_token_id``
- ``unk_token``: (`Optional`) string: an unknown token.
Will be associated to ``self.unk_token`` and ``self.unk_token_id``
- ``sep_token``: (`Optional`) string: a separation token (e.g. to separate context and query in an input sequence).
Will be associated to ``self.sep_token`` and ``self.sep_token_id``
- ``pad_token``: (`Optional`) string: a padding token.
Will be associated to ``self.pad_token`` and ``self.pad_token_id``
- ``cls_token``: (`Optional`) string: a classification token (e.g. to extract a summary of an input sequence
leveraging self-attention along the full depth of the model).
Will be associated to ``self.cls_token`` and ``self.cls_token_id``
- ``mask_token``: (`Optional`) string: a masking token (e.g. when training a model with masked-language
modeling). Will be associated to ``self.mask_token`` and ``self.mask_token_id``
- ``additional_special_tokens``: (`Optional`) list: a list of additional special tokens.
Adding all special tokens here to ensure they won't be split by the tokenization process.
Will be associated to ``self.additional_special_tokens`` and ``self.additional_special_tokens_ids``
.. automethod:: __call__
""" """
def __init__(self, tokenizer: BaseTokenizerFast, **kwargs): def __init__(self, tokenizer: BaseTokenizerFast, **kwargs):
...@@ -118,26 +79,53 @@ class PreTrainedTokenizerFast(PreTrainedTokenizerBase): ...@@ -118,26 +79,53 @@ class PreTrainedTokenizerFast(PreTrainedTokenizerBase):
@property @property
def vocab_size(self) -> int: def vocab_size(self) -> int:
"""
:obj:`int`: Size of the base vocabulary (without the added tokens).
"""
return self._tokenizer.get_vocab_size(with_added_tokens=False)
def get_vocab(self) -> Dict[str, int]:
"""
Returns the vocabulary as a dictionary of token to index.
:obj:`tokenizer.get_vocab()[token]` is equivalent to :obj:`tokenizer.convert_tokens_to_ids(token)` when
:obj:`token` is in the vocab.
Returns:
:obj:`Dict[str, int]`: The vocabulary.
"""
return self._tokenizer.get_vocab(with_added_tokens=True)
def get_added_vocab(self) -> Dict[str, int]:
"""
Returns the added tokens in the vocabulary as a dictionary of token to index.
Returns:
:obj:`Dict[str, int]`: The added tokens.
"""
base_vocab = self._tokenizer.get_vocab(with_added_tokens=False)
full_vocab = self._tokenizer.get_vocab(with_added_tokens=True)
added_vocab = dict((tok, index) for tok, index in full_vocab.items() if tok not in base_vocab)
return added_vocab
def __len__(self) -> int:
"""
Size of the full vocabulary with the added tokens.
"""
return self._tokenizer.get_vocab_size(with_added_tokens=True)
@property
def backend_tokenizer(self) -> BaseTokenizerFast:
"""
:obj:`tokenizers.implementations.BaseTokenizer`: The Rust tokenizer used as a backend.
"""
return self._tokenizer
@property
def decoder(self) -> DecoderFast:
"""
:obj:`tokenizers.decoders.Decoder`: The Rust decoder for this tokenizer.
"""
return self._tokenizer._tokenizer.decoder
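# Illustrative sketch (not from this diff): ``vocab_size`` excludes added tokens while
# ``len(tokenizer)`` includes them (assuming ``bert-base-uncased``).
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
tokenizer.add_tokens(["[NEW_TOKEN]"])
print(tokenizer.vocab_size, len(tokenizer))  # len(tokenizer) is one larger
print(tokenizer.get_added_vocab())           # {'[NEW_TOKEN]': <its id>}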
def _convert_encoding(
...@@ -186,8 +174,15 @@ class PreTrainedTokenizerFast(PreTrainedTokenizerBase):
return encoding_dict
def convert_tokens_to_ids(self, tokens: Union[str, List[str]]) -> Union[int, List[int]]:
"""
Converts a token string (or a sequence of tokens) into a single integer id (or a sequence of ids), using the
vocabulary.
Args:
tokens (:obj:`str` or :obj:`List[str]`): One or several token(s) to convert to token id(s).
Returns:
:obj:`int` or :obj:`List[int]`: The token id or list of token ids.
"""
if tokens is None:
return None
...@@ -216,16 +211,38 @@ class PreTrainedTokenizerFast(PreTrainedTokenizerBase):
return self._tokenizer.add_tokens(new_tokens)
def num_special_tokens_to_add(self, pair: bool = False) -> int:
"""
Returns the number of added tokens when encoding a sequence with special tokens.
.. note::
This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not
put this inside your training loop.
Args:
pair (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether the number of added tokens should be computed in the case of a sequence pair or a single
sequence.
Returns:
:obj:`int`: Number of special tokens added to sequences.
"""
return self._tokenizer.num_special_tokens_to_add(pair)
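# Illustrative sketch (not from this diff): for a BERT-like tokenizer this is 2 for a
# single sequence ([CLS] ... [SEP]) and 3 for a pair ([CLS] ... [SEP] ... [SEP]),
# assuming ``bert-base-uncased``.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
print(tokenizer.num_special_tokens_to_add(pair=False))  # expected: 2
print(tokenizer.num_special_tokens_to_add(pair=True))   # expected: 3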
def convert_ids_to_tokens(
self, ids: Union[int, List[int]], skip_special_tokens: bool = False
) -> Union[str, List[str]]:
"""
Converts a single index or a sequence of indices into a token or a sequence of tokens, using the vocabulary
and added tokens.
Args:
ids (:obj:`int` or :obj:`List[int]`):
The token id (or token ids) to convert to tokens.
skip_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not to remove special tokens in the decoding.
Returns:
:obj:`str` or :obj:`List[str]`: The decoded token(s).
"""
if isinstance(ids, int):
return self._tokenizer.id_to_token(ids)
...@@ -238,6 +255,20 @@ class PreTrainedTokenizerFast(PreTrainedTokenizerBase):
return tokens
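# Illustrative sketch (not from this diff): ``convert_tokens_to_ids`` and
# ``convert_ids_to_tokens`` are inverse vocabulary lookups (assuming ``bert-base-uncased``).
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
tokens = tokenizer.tokenize("Hello world")
ids = tokenizer.convert_tokens_to_ids(tokens)
assert tokenizer.convert_ids_to_tokens(ids) == tokens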
def tokenize(self, text: str, pair: Optional[str] = None, add_special_tokens: bool = False) -> List[str]:
"""
Converts a string into a sequence of tokens, using the backend Rust tokenizer.
Args:
text (:obj:`str`):
The sequence to be encoded.
pair (:obj:`str`, `optional`):
A second sequence to be encoded with the first.
add_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not to add the special tokens associated with the corresponding model.
Returns:
:obj:`List[str]`: The list of tokens.
"""
return self._tokenizer.encode(text, pair, add_special_tokens=add_special_tokens).tokens
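# Illustrative sketch (not from this diff): tokenizing with and without the model's
# special tokens (assuming ``bert-base-uncased``).
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize("Hello world"))
print(tokenizer.tokenize("Hello world", add_special_tokens=True))  # adds [CLS]/[SEP]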
def set_truncation_and_padding(
...@@ -248,20 +279,26 @@ class PreTrainedTokenizerFast(PreTrainedTokenizerBase):
stride: int,
pad_to_multiple_of: Optional[int],
):
"""
Define the truncation and the padding strategies for fast tokenizers (provided by HuggingFace tokenizers
library) and restore the tokenizer settings afterwards.
The provided tokenizer has no padding / truncation strategy before the managed section. If your tokenizer sets
a padding / truncation strategy before, then it will be reset to no padding / truncation when exiting the
managed section.
Args:
padding_strategy (:class:`~transformers.tokenization_utils_base.PaddingStrategy`):
The kind of padding that will be applied to the input.
truncation_strategy (:class:`~transformers.tokenization_utils_base.TruncationStrategy`):
The kind of truncation that will be applied to the input.
max_length (:obj:`int`):
The maximum size of a sequence.
stride (:obj:`int`):
The stride to use when handling overflow.
pad_to_multiple_of (:obj:`int`, `optional`):
If set, will pad the sequence to a multiple of the provided value. This is especially useful to enable
the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
"""
# Set truncation and padding on the backend tokenizer
if truncation_strategy != TruncationStrategy.DO_NOT_TRUNCATE:
...@@ -436,6 +473,23 @@ class PreTrainedTokenizerFast(PreTrainedTokenizerBase):
def decode(
self, token_ids: List[int], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True
) -> str:
"""
Converts a sequence of ids into a string, using the tokenizer and vocabulary
with options to remove special tokens and clean up tokenization spaces.
Similar to doing ``self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))``.
Args:
token_ids (:obj:`List[int]`):
List of tokenized input ids. Can be obtained using the ``__call__`` method.
skip_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not to remove special tokens in the decoding.
clean_up_tokenization_spaces (:obj:`bool`, `optional`, defaults to :obj:`True`):
Whether or not to clean up the tokenization spaces.
Returns:
:obj:`str`: The decoded sentence.
"""
text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
if clean_up_tokenization_spaces:
...@@ -445,6 +499,20 @@ class PreTrainedTokenizerFast(PreTrainedTokenizerBase):
return text
def save_vocabulary(self, save_directory: str) -> Tuple[str]:
"""
Save the tokenizer vocabulary to a directory. This method does *NOT* save added tokens
and special token mappings.
.. warning::
Please use :meth:`~transformers.PreTrainedTokenizer.save_pretrained` to save the full tokenizer state if
you want to reload it using the :meth:`~transformers.PreTrainedTokenizer.from_pretrained` class method.
Args:
save_directory (:obj:`str`): The path to a directory where the tokenizer will be saved.
Returns:
A tuple of :obj:`str`: The files saved.
"""
if os.path.isdir(save_directory):
files = self._tokenizer.save_model(save_directory)
else:
......
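# Illustrative sketch (not from this diff): ``save_vocabulary`` writes only the
# vocabulary files; ``save_pretrained`` saves the full, reloadable tokenizer state
# (assuming ``bert-base-uncased`` and that the target directories exist).
import os
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
os.makedirs("./my_tokenizer_vocab", exist_ok=True)
os.makedirs("./my_tokenizer", exist_ok=True)
vocab_files = tokenizer.save_vocabulary("./my_tokenizer_vocab")  # vocabulary only
all_files = tokenizer.save_pretrained("./my_tokenizer")          # full state
reloaded = BertTokenizerFast.from_pretrained("./my_tokenizer")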