Unverified commit fdf893c4 authored by Santiago Castro, committed by GitHub

Fix typo: indinces -> indices (#8159)

* Fix typo: indinces -> indices

* Fix some more

* Fix some more

* Fix some more

* Fix CI
parent c83cec44
@@ -122,7 +122,7 @@ def get_default_model(targeted_task: Dict, framework: Optional[str], task_option
     Args:
         targeted_task (:obj:`Dict` ):
-           Dictionnary representing the given task, that should contain default models
+           Dictionary representing the given task, that should contain default models
         framework (:obj:`str`, None)
            "pt", "tf" or None, representing a specific framework if it was specified, or None if we don't know yet.
@@ -150,9 +150,7 @@ def get_default_model(targeted_task: Dict, framework: Optional[str], task_option
     else:
         # XXX This error message needs to be updated to be more generic if more tasks are going to become
         # parametrized
-        raise ValueError(
-            'The task defaults can\'t be correctly selectionned. You probably meant "translation_XX_to_YY"'
-        )
+        raise ValueError('The task defaults can\'t be correctly selected. You probably meant "translation_XX_to_YY"')
     if framework is None:
         framework = "pt"
@@ -695,7 +693,7 @@ class Pipeline(_ScikitCompat):
        Internal framework specific forward dispatching
        Args:
-           inputs: dict holding all the keyworded arguments for required by the model forward method.
+           inputs: dict holding all the keyword arguments for required by the model forward method.
            return_tensors: Whether to return native framework (pt/tf) tensors rather than numpy array
        Returns:
@@ -874,7 +872,7 @@ class TextGenerationPipeline(Pipeline):
            args (:obj:`str` or :obj:`List[str]`):
                One or several prompts (or one list of prompts) to complete.
            return_tensors (:obj:`bool`, `optional`, defaults to :obj:`False`):
-               Whether or not to include the tensors of predictions (as token indinces) in the outputs.
+               Whether or not to include the tensors of predictions (as token indices) in the outputs.
            return_text (:obj:`bool`, `optional`, defaults to :obj:`True`):
                Whether or not to include the decoded texts in the outputs.
            clean_up_tokenization_spaces (:obj:`bool`, `optional`, defaults to :obj:`False`):
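The return_tensors / return_text flags documented above control which keys appear in each output record. A simplified sketch of that selection logic (not the pipeline's actual code; the key names follow the real pipeline's outputs, the helper name and example values are made up):

```python
# Simplified sketch (not the actual pipeline code) of how return_text and
# return_tensors decide which fields appear in a generation output record.
def build_generation_record(token_ids, decoded_text, return_tensors=False, return_text=True):
    record = {}
    if return_tensors:
        record["generated_token_ids"] = token_ids  # token indices of the prediction
    if return_text:
        record["generated_text"] = decoded_text   # decoded string
    return record

print(build_generation_record([0, 42, 7], "hello world"))
# {'generated_text': 'hello world'}
```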
@@ -1710,7 +1708,7 @@ class QuestionAnsweringPipeline(Pipeline):
            question (:obj:`str` or :obj:`List[str]`):
                One or several question(s) (must be used in conjunction with the :obj:`context` argument).
            context (:obj:`str` or :obj:`List[str]`):
-               One or several context(s) associated with the qustion(s) (must be used in conjunction with the
+               One or several context(s) associated with the question(s) (must be used in conjunction with the
                :obj:`question` argument).
            topk (:obj:`int`, `optional`, defaults to 1):
                The number of answers to return (will be chosen by order of likelihood).
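Since question and context must be supplied together, a batched call has to pair them one-to-one. A toy sketch of that normalization (a hypothetical helper, not the pipeline's internals, which also handle broadcasting a single context):

```python
def pair_questions_with_contexts(question, context):
    # Accept a single string or a list for each argument, mirroring the
    # :obj:`str` or :obj:`List[str]` signatures above (toy version only).
    questions = [question] if isinstance(question, str) else list(question)
    contexts = [context] if isinstance(context, str) else list(context)
    if len(questions) != len(contexts):
        raise ValueError("question and context must pair up one-to-one")
    return list(zip(questions, contexts))

print(pair_questions_with_contexts("Who introduced T5?", "T5 was introduced by Google Research."))
# [('Who introduced T5?', 'T5 was introduced by Google Research.')]
```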
@@ -1959,7 +1957,7 @@ class SummarizationPipeline(Pipeline):
            return_text (:obj:`bool`, `optional`, defaults to :obj:`True`):
                Whether or not to include the decoded texts in the outputs
            return_tensors (:obj:`bool`, `optional`, defaults to :obj:`False`):
-               Whether or not to include the tensors of predictions (as token indinces) in the outputs.
+               Whether or not to include the tensors of predictions (as token indices) in the outputs.
            clean_up_tokenization_spaces (:obj:`bool`, `optional`, defaults to :obj:`False`):
                Whether or not to clean up the potential extra spaces in the text output.
            generate_kwargs:
@@ -2077,7 +2075,7 @@ class TranslationPipeline(Pipeline):
            args (:obj:`str` or :obj:`List[str]`):
                Texts to be translated.
            return_tensors (:obj:`bool`, `optional`, defaults to :obj:`False`):
-               Whether or not to include the tensors of predictions (as token indinces) in the outputs.
+               Whether or not to include the tensors of predictions (as token indices) in the outputs.
            return_text (:obj:`bool`, `optional`, defaults to :obj:`True`):
                Whether or not to include the decoded texts in the outputs.
            clean_up_tokenization_spaces (:obj:`bool`, `optional`, defaults to :obj:`False`):
@@ -2188,7 +2186,7 @@ class Text2TextGenerationPipeline(Pipeline):
            args (:obj:`str` or :obj:`List[str]`):
                Input text for the encoder.
            return_tensors (:obj:`bool`, `optional`, defaults to :obj:`False`):
-               Whether or not to include the tensors of predictions (as token indinces) in the outputs.
+               Whether or not to include the tensors of predictions (as token indices) in the outputs.
            return_text (:obj:`bool`, `optional`, defaults to :obj:`True`):
                Whether or not to include the decoded texts in the outputs.
            clean_up_tokenization_spaces (:obj:`bool`, `optional`, defaults to :obj:`False`):
@@ -2253,8 +2251,8 @@ class Conversation:
    :class:`~transformers.ConversationalPipeline`. The conversation contains a number of utility function to manage the
    addition of new user input and generated model responses. A conversation needs to contain an unprocessed user input
    before being passed to the :class:`~transformers.ConversationalPipeline`. This user input is either created when
-   the class is instantiated, or by calling :obj:`conversional_pipeline.append_response("input")` after a conversation
-   turn.
+   the class is instantiated, or by calling :obj:`conversational_pipeline.append_response("input")` after a
+   conversation turn.
    Arguments:
        text (:obj:`str`, `optional`):
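The "unprocessed user input" bookkeeping the docstring describes can be sketched in a few lines. This is a toy stand-in, not the real transformers.Conversation class, though the method names mirror the real API:

```python
# Toy stand-in for the bookkeeping the Conversation docstring describes:
# at most one unprocessed user input may be pending at a time; the pipeline
# consumes it (mark_processed) before a generated response is appended.
class ToyConversation:
    def __init__(self, text=None):
        self.past_user_inputs = []
        self.generated_responses = []
        self.new_user_input = text  # the pending, unprocessed input

    def add_user_input(self, text):
        if self.new_user_input is not None:
            raise ValueError("Previous user input has not been processed yet")
        self.new_user_input = text

    def mark_processed(self):
        self.past_user_inputs.append(self.new_user_input)
        self.new_user_input = None

    def append_response(self, response):
        self.generated_responses.append(response)

conv = ToyConversation("Hi there")  # input created at instantiation
conv.mark_processed()
conv.append_response("Hello!")
print(conv.past_user_inputs, conv.generated_responses)
# ['Hi there'] ['Hello!']
```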
@@ -2671,8 +2669,8 @@ def check_task(task: str) -> Tuple[Dict, Any]:
        - :obj:`"conversational"`
    Returns:
-       (task_defaults:obj:`dict`, task_options: (:obj:`tuple`, None)) The actual dictionnary required to initialize
-       the pipeline and some extra task options for parametrized tasks like "translation_XX_to_YY"
+       (task_defaults:obj:`dict`, task_options: (:obj:`tuple`, None)) The actual dictionary required to initialize the
+       pipeline and some extra task options for parametrized tasks like "translation_XX_to_YY"
    """
...
@@ -89,7 +89,7 @@ class T5Tokenizer(PreTrainedTokenizer):
        extra_ids (:obj:`int`, `optional`, defaults to 100):
            Add a number of extra ids added to the end of the vocabulary for use as sentinels. These tokens are
            accessible as "<extra_id_{%d}>" where "{%d}" is a number between 0 and extra_ids-1. Extra tokens are
-           indexed from the end of the vocabulary up to beginnning ("<extra_id_0>" is the last token in the vocabulary
+           indexed from the end of the vocabulary up to beginning ("<extra_id_0>" is the last token in the vocabulary
            like in T5 preprocessing see `here
            <https://github.com/google-research/text-to-text-transfer-transformer/blob/9fd7b14a769417be33bc6c850f9598764913c833/t5/data/preprocessors.py#L2117>`__).
        additional_special_tokens (:obj:`List[str]`, `optional`):
...
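The sentinel indexing this docstring describes ("<extra_id_0>" is the last token in the vocabulary) comes down to simple arithmetic. A sketch, where the base vocabulary size of 32000 is an assumption matching the standard T5 SentencePiece model, not something stated in the diff:

```python
# Sketch of T5's sentinel indexing: the extra_ids tokens are appended to the
# end of the vocabulary and indexed from the end, so "<extra_id_0>" gets the
# HIGHEST id and "<extra_id_{extra_ids-1}>" the lowest id of the appended
# block. Base vocab size 32000 is an assumption (standard T5 SentencePiece).
def sentinel_id(k: int, base_vocab_size: int = 32000, extra_ids: int = 100) -> int:
    """Vocabulary id of "<extra_id_{k}>" for 0 <= k < extra_ids."""
    if not 0 <= k < extra_ids:
        raise ValueError(f"k must be in [0, {extra_ids})")
    return base_vocab_size + extra_ids - 1 - k

print(sentinel_id(0))   # 32099 -- last token in the vocabulary
print(sentinel_id(99))  # 32000 -- first of the appended extra ids
```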
@@ -100,7 +100,7 @@ class T5TokenizerFast(PreTrainedTokenizerFast):
        extra_ids (:obj:`int`, `optional`, defaults to 100):
            Add a number of extra ids added to the end of the vocabulary for use as sentinels. These tokens are
            accessible as "<extra_id_{%d}>" where "{%d}" is a number between 0 and extra_ids-1. Extra tokens are
-           indexed from the end of the vocabulary up to beginnning ("<extra_id_0>" is the last token in the vocabulary
+           indexed from the end of the vocabulary up to beginning ("<extra_id_0>" is the last token in the vocabulary
            like in T5 preprocessing see `here
            <https://github.com/google-research/text-to-text-transfer-transformer/blob/9fd7b14a769417be33bc6c850f9598764913c833/t5/data/preprocessors.py#L2117>`__).
        additional_special_tokens (:obj:`List[str]`, `optional`):
...