Unverified Commit ce111fee authored by Stas Bekman, committed by GitHub

[doc] fix broken ref (#12597)

parent f0dde601
@@ -56,7 +56,7 @@ def load_vocab(vocab_file):
 class XLMProphetNetTokenizer(PreTrainedTokenizer):
     """
-    Adapted from :class:`~transformers.RobertaTokenizer` and class:`~transformers.XLNetTokenizer`. Based on
+    Adapted from :class:`~transformers.RobertaTokenizer` and :class:`~transformers.XLNetTokenizer`. Based on
     `SentencePiece <https://github.com/google/sentencepiece>`__.

     This tokenizer inherits from :class:`~transformers.PreTrainedTokenizer` which contains most of the main methods.
...
@@ -54,7 +54,7 @@ PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
 class XLMRobertaTokenizer(PreTrainedTokenizer):
     """
-    Adapted from :class:`~transformers.RobertaTokenizer` and class:`~transformers.XLNetTokenizer`. Based on
+    Adapted from :class:`~transformers.RobertaTokenizer` and :class:`~transformers.XLNetTokenizer`. Based on
     `SentencePiece <https://github.com/google/sentencepiece>`__.

     This tokenizer inherits from :class:`~transformers.PreTrainedTokenizer` which contains most of the main methods.
...
@@ -67,7 +67,7 @@ PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
 class XLMRobertaTokenizerFast(PreTrainedTokenizerFast):
     """
     Construct a "fast" XLM-RoBERTa tokenizer (backed by HuggingFace's `tokenizers` library). Adapted from
-    :class:`~transformers.RobertaTokenizer` and class:`~transformers.XLNetTokenizer`. Based on `BPE
+    :class:`~transformers.RobertaTokenizer` and :class:`~transformers.XLNetTokenizer`. Based on `BPE
     <https://huggingface.co/docs/tokenizers/python/latest/components.html?highlight=BPE#models>`__.

     This tokenizer inherits from :class:`~transformers.PreTrainedTokenizerFast` which contains most of the main
...
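
For readers unfamiliar with the reST syntax being corrected: the leading colon is what makes :class: a Sphinx cross-reference role, so the old class:`...` spelling rendered as literal text instead of a link, which is the "broken ref" in the commit title. Below is a minimal, hypothetical illustration of the corrected docstring pattern; the function name is invented, and only the :class: references come from the diff above.

# Hypothetical example, not part of this commit: shows the fixed ":class:" role
# inside a docstring and explains why the pre-fix "class:" spelling did not link.

def docstring_reference_example() -> str:
    """
    Adapted from :class:`~transformers.RobertaTokenizer` and :class:`~transformers.XLNetTokenizer`.

    Sphinx turns the two ``:class:`` roles above into links to the referenced API pages;
    the pre-fix spelling ``class:`` (no leading colon) is not a recognised role, so the
    reference stayed plain text in the rendered docs.
    """
    return docstring_reference_example.__doc__


if __name__ == "__main__":
    print(docstring_reference_example())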