Unverified Commit 011cc0be authored by Sylvain Gugger, committed by GitHub

Fix all sphynx warnings (#5068)

parent af497b56
...@@ -530,6 +530,7 @@ class PreTrainedModel(nn.Module, ModuleUtilsMixin):
config: (`optional`) one of:
- an instance of a class derived from :class:`~transformers.PretrainedConfig`, or
- a string valid as input to :func:`~transformers.PretrainedConfig.from_pretrained()`
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- the model is a model provided by the library (loaded with the ``shortcut-name`` string of a pretrained model), or
- the model was saved using :func:`~transformers.PreTrainedModel.save_pretrained` and is reloaded by supplying the save directory.
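The config-resolution rule described in this docstring can be sketched as follows. This is a minimal illustration under stated assumptions, not the library's actual implementation; `DummyConfig` and `resolve_config` are hypothetical stand-in names:

```python
# Illustrative sketch: `config` may be a PretrainedConfig-like instance,
# a string to load from, or None (auto-load from the model's own
# shortcut name or save directory). DummyConfig and resolve_config are
# hypothetical stand-ins, not transformers APIs.
class DummyConfig:
    def __init__(self, model_type):
        self.model_type = model_type

    @classmethod
    def from_pretrained(cls, name_or_path):
        # The real method would read config.json from a hub name or
        # a save directory; here we just record the string.
        return cls(model_type=name_or_path)


def resolve_config(config, pretrained_model_name_or_path):
    if isinstance(config, DummyConfig):
        return config                              # already an instance
    if isinstance(config, str):
        return DummyConfig.from_pretrained(config)  # load from the string
    # config is None: fall back to the model's own name / save directory
    return DummyConfig.from_pretrained(pretrained_model_name_or_path)
```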
......
...@@ -323,6 +323,7 @@ class Pipeline(_ScikitCompat):
Base class implementing pipelined operations.
Pipeline workflow is defined as a sequence of the following operations:
Input -> Tokenization -> Model Inference -> Post-Processing (Task dependent) -> Output
Pipeline supports running on CPU or GPU through the device argument. Users can specify
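The four-stage workflow named above can be sketched as a toy class. The stages here are hypothetical stand-ins to show the flow of data, not the transformers `Pipeline` API:

```python
# Toy sketch of Input -> Tokenization -> Model Inference ->
# Post-Processing -> Output. Each stage is a stand-in, not the real
# Pipeline implementation.
class ToyPipeline:
    def tokenize(self, text):
        return text.lower().split()          # Tokenization

    def forward(self, tokens):
        return [len(t) for t in tokens]      # "Model Inference" stand-in

    def postprocess(self, outputs):
        return {"token_lengths": outputs}    # Task-dependent formatting

    def __call__(self, text):
        return self.postprocess(self.forward(self.tokenize(text)))
```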
......
...@@ -103,6 +103,7 @@ class AutoTokenizer:
The `from_pretrained()` method takes care of returning the correct tokenizer class instance
based on the `model_type` property of the config object, or when it's missing,
falling back to using pattern matching on the `pretrained_model_name_or_path` string:
- `t5`: T5Tokenizer (T5 model)
- `distilbert`: DistilBertTokenizer (DistilBert model)
- `albert`: AlbertTokenizer (ALBERT model)
...@@ -136,6 +137,7 @@ class AutoTokenizer:
The tokenizer class to instantiate is selected
based on the `model_type` property of the config object, or when it's missing,
falling back to using pattern matching on the `pretrained_model_name_or_path` string:
- `t5`: T5Tokenizer (T5 model)
- `distilbert`: DistilBertTokenizer (DistilBert model)
- `albert`: AlbertTokenizer (ALBERT model)
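The pattern-matching fallback described above can be sketched with a small substring lookup. The mapping is only an excerpt of the three entries listed in the docstring, and `guess_tokenizer_class` is a hypothetical name, not an AutoTokenizer API:

```python
# Sketch of the fallback rule: when the config has no model_type, pick
# a tokenizer class by substring match on pretrained_model_name_or_path.
# Excerpt only; the real AutoTokenizer table is much larger, and in the
# real table match order matters (e.g. "distilbert" before "bert").
TOKENIZER_BY_PATTERN = {
    "t5": "T5Tokenizer",
    "distilbert": "DistilBertTokenizer",
    "albert": "AlbertTokenizer",
}


def guess_tokenizer_class(pretrained_model_name_or_path):
    for pattern, cls_name in TOKENIZER_BY_PATTERN.items():
        if pattern in pretrained_model_name_or_path:
            return cls_name
    raise ValueError(
        f"No tokenizer pattern matched {pretrained_model_name_or_path!r}"
    )
```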
......