Commit 447afe9c authored by thomwolf's avatar thomwolf

updating docstring for AutoModel

parent 84a3a968
...@@ -31,7 +31,7 @@ from .modeling_xlnet import XLNetConfig, XLNetModel, XLNetLMHeadModel, XLNetForS
from .modeling_xlm import XLMConfig, XLMModel, XLMWithLMHeadModel, XLMForSequenceClassification, XLMForQuestionAnswering
from .modeling_roberta import RobertaConfig, RobertaModel, RobertaForMaskedLM, RobertaForSequenceClassification
from .modeling_utils import PreTrainedModel, SequenceSummary, add_start_docstrings
logger = logging.getLogger(__name__)
...@@ -76,26 +76,32 @@ class AutoConfig(object):
        - contains `roberta`: RobertaConfig (RoBERTa model)

    Params:
        pretrained_model_name_or_path: either:

            - a string with the `shortcut name` of a pre-trained model configuration to load from cache or download, e.g.: ``bert-base-uncased``.
            - a path to a `directory` containing a configuration file saved using the :func:`~pytorch_transformers.PretrainedConfig.save_pretrained` method, e.g.: ``./my_model_directory/``.
            - a path or url to a saved configuration JSON `file`, e.g.: ``./my_model_directory/configuration.json``.

        cache_dir: (`optional`) string:
            Path to a directory in which a downloaded pre-trained model
            configuration should be cached if the standard cache should not be used.

        kwargs: (`optional`) dict: key/value pairs with which to update the configuration object after loading.

            - The values in kwargs of any keys which are configuration attributes will be used to override the loaded values.
            - Behavior concerning key/value pairs whose keys are *not* configuration attributes is controlled by the `return_unused_kwargs` keyword parameter.

        force_download: (`optional`) boolean, default False:
            Force (re-)downloading the model weights and configuration files, overriding the cached versions if they exist.

        proxies: (`optional`) dict, default None:
            A dictionary of proxy servers to use by protocol or endpoint, e.g.: {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}.
            The proxies are used on each request.

        return_unused_kwargs: (`optional`) bool:

            - If False, this function returns just the final configuration object.
            - If True, this function returns a tuple `(config, unused_kwargs)` where `unused_kwargs` is a dictionary consisting of the key/value pairs whose keys are not configuration attributes, i.e. the part of kwargs which has not been used to update `config` and is otherwise ignored.
    Examples::
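The `return_unused_kwargs` behaviour described above can be sketched in plain Python. This is a simplified illustration, not the library's actual implementation; `ToyConfig`, its attributes, and `from_pretrained_sketch` are hypothetical names:

```python
# Simplified sketch of how `return_unused_kwargs` splits kwargs:
# keys matching existing configuration attributes override the loaded
# values, all other keys are collected into `unused_kwargs`.
class ToyConfig:
    def __init__(self):
        # stand-ins for values "loaded" from a pretrained configuration
        self.hidden_size = 768
        self.num_labels = 2

def from_pretrained_sketch(return_unused_kwargs=False, **kwargs):
    config = ToyConfig()
    unused = {}
    for key, value in kwargs.items():
        if hasattr(config, key):
            setattr(config, key, value)  # override a loaded value
        else:
            unused[key] = value          # not a configuration attribute
    if return_unused_kwargs:
        return config, unused
    return config

config, unused = from_pretrained_sketch(return_unused_kwargs=True,
                                        num_labels=3, foo="bar")
# config.num_labels is now 3; unused == {"foo": "bar"}
```

With `return_unused_kwargs=False` (the default), only the final configuration object is returned and the non-attribute keys are silently dropped.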
...@@ -161,7 +167,7 @@ class AutoModel(object):
    r""" Instantiates one of the base model classes of the library
    from a pre-trained model configuration.

    The model class to instantiate is selected as the first pattern matching
    in the `pretrained_model_name_or_path` string (in the following order):
        - contains `roberta`: RobertaModel (RoBERTa model)
        - contains `bert`: BertModel (Bert model)
...@@ -175,44 +181,46 @@ class AutoModel(object):
    To train the model, you should first set it back in training mode with `model.train()`

    Params:
        pretrained_model_name_or_path: either:

            - a string with the `shortcut name` of a pre-trained model to load from cache or download, e.g.: ``bert-base-uncased``.
            - a path to a `directory` containing model weights saved using :func:`~pytorch_transformers.PreTrainedModel.save_pretrained`, e.g.: ``./my_model_directory/``.
            - a path or url to a `tensorflow index checkpoint file` (e.g. `./tf_model/model.ckpt.index`). In this case, ``from_tf`` should be set to True and a configuration object should be provided as the ``config`` argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

        model_args: (`optional`) Sequence of positional arguments:
            All remaining positional arguments will be passed to the underlying model's ``__init__`` method.

        config: (`optional`) instance of a class derived from :class:`~pytorch_transformers.PretrainedConfig`:
            Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

            - the model is a model provided by the library (loaded with the `shortcut name` string of a pretrained model), or
            - the model was saved using :func:`~pytorch_transformers.PreTrainedModel.save_pretrained` and is reloaded by supplying the save directory, or
            - the model is loaded by supplying a local directory as ``pretrained_model_name_or_path`` and a configuration JSON file named `config.json` is found in the directory.

        state_dict: (`optional`) dict:
            An optional state dictionary for the model to use instead of a state dictionary loaded from the saved weights file.
            This option can be used if you want to create a model from a pretrained configuration but load your own weights.
            In this case though, you should check whether using :func:`~pytorch_transformers.PreTrainedModel.save_pretrained` and :func:`~pytorch_transformers.PreTrainedModel.from_pretrained` is not a simpler option.

        cache_dir: (`optional`) string:
            Path to a directory in which a downloaded pre-trained model
            configuration should be cached if the standard cache should not be used.

        force_download: (`optional`) boolean, default False:
            Force (re-)downloading the model weights and configuration files, overriding the cached versions if they exist.

        proxies: (`optional`) dict, default None:
            A dictionary of proxy servers to use by protocol or endpoint, e.g.: {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}.
            The proxies are used on each request.

        output_loading_info: (`optional`) boolean:
            Set to ``True`` to also return a dictionary containing missing keys, unexpected keys and error messages.

        kwargs: (`optional`) Remaining dictionary of keyword arguments:
            Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g. ``output_attention=True``). Behaves differently depending on whether a ``config`` is provided or automatically loaded:

            - If a configuration is provided with ``config``, ``**kwargs`` will be passed directly to the underlying model's ``__init__`` method (we assume all relevant updates to the configuration have already been done).
            - If a configuration is not provided, ``kwargs`` will first be passed to the configuration class loading function (:func:`~pytorch_transformers.PretrainedConfig.from_pretrained`). Each key of ``kwargs`` that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's ``__init__`` function.
    Examples::
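The "first pattern matching" order matters: the string ``"roberta"`` itself contains ``"bert"``, so the `roberta` check must come first. A rough sketch of the dispatch, with class names as strings standing in for the real model classes (some checks present in the library are elided here):

```python
# Illustrative sketch of AutoModel's substring-based dispatch.
# Checking 'roberta' before 'bert' is essential because
# 'bert' in 'roberta' is True.
def select_model_class(pretrained_model_name_or_path):
    name = pretrained_model_name_or_path
    if 'roberta' in name:
        return 'RobertaModel'
    if 'bert' in name:
        return 'BertModel'
    if 'gpt2' in name:
        return 'GPT2Model'
    if 'xlnet' in name:
        return 'XLNetModel'
    if 'xlm' in name:
        return 'XLMModel'
    raise ValueError("Unrecognized model identifier: {}".format(name))

print(select_model_class('roberta-base'))      # -> RobertaModel
print(select_model_class('bert-base-uncased')) # -> BertModel
```

The same ordering constraint applies to the other Auto classes (`AutoConfig`, `AutoTokenizer`, and the task-specific model classes below).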
...@@ -294,44 +302,46 @@ class AutoModelWithLMHead(object):
    To train the model, you should first set it back in training mode with `model.train()`

    Params:
        pretrained_model_name_or_path: either:

            - a string with the `shortcut name` of a pre-trained model to load from cache or download, e.g.: ``bert-base-uncased``.
            - a path to a `directory` containing model weights saved using :func:`~pytorch_transformers.PreTrainedModel.save_pretrained`, e.g.: ``./my_model_directory/``.
            - a path or url to a `tensorflow index checkpoint file` (e.g. `./tf_model/model.ckpt.index`). In this case, ``from_tf`` should be set to True and a configuration object should be provided as the ``config`` argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

        model_args: (`optional`) Sequence of positional arguments:
            All remaining positional arguments will be passed to the underlying model's ``__init__`` method.

        config: (`optional`) instance of a class derived from :class:`~pytorch_transformers.PretrainedConfig`:
            Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

            - the model is a model provided by the library (loaded with the `shortcut name` string of a pretrained model), or
            - the model was saved using :func:`~pytorch_transformers.PreTrainedModel.save_pretrained` and is reloaded by supplying the save directory, or
            - the model is loaded by supplying a local directory as ``pretrained_model_name_or_path`` and a configuration JSON file named `config.json` is found in the directory.

        state_dict: (`optional`) dict:
            An optional state dictionary for the model to use instead of a state dictionary loaded from the saved weights file.
            This option can be used if you want to create a model from a pretrained configuration but load your own weights.
            In this case though, you should check whether using :func:`~pytorch_transformers.PreTrainedModel.save_pretrained` and :func:`~pytorch_transformers.PreTrainedModel.from_pretrained` is not a simpler option.

        cache_dir: (`optional`) string:
            Path to a directory in which a downloaded pre-trained model
            configuration should be cached if the standard cache should not be used.

        force_download: (`optional`) boolean, default False:
            Force (re-)downloading the model weights and configuration files, overriding the cached versions if they exist.

        proxies: (`optional`) dict, default None:
            A dictionary of proxy servers to use by protocol or endpoint, e.g.: {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}.
            The proxies are used on each request.

        output_loading_info: (`optional`) boolean:
            Set to ``True`` to also return a dictionary containing missing keys, unexpected keys and error messages.

        kwargs: (`optional`) Remaining dictionary of keyword arguments:
            Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g. ``output_attention=True``). Behaves differently depending on whether a ``config`` is provided or automatically loaded:

            - If a configuration is provided with ``config``, ``**kwargs`` will be passed directly to the underlying model's ``__init__`` method (we assume all relevant updates to the configuration have already been done).
            - If a configuration is not provided, ``kwargs`` will first be passed to the configuration class loading function (:func:`~pytorch_transformers.PretrainedConfig.from_pretrained`). Each key of ``kwargs`` that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's ``__init__`` function.
    Examples::
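The two `kwargs` paths described above (config provided vs. automatically loaded) can be sketched with toy classes. `ToyConfig` and `ToyModel` are illustrative stand-ins, not library classes:

```python
# Sketch of the two kwargs paths of from_pretrained (illustrative only).
class ToyConfig:
    def __init__(self, output_attention=False):
        self.output_attention = output_attention

class ToyModel:
    def __init__(self, config, **model_kwargs):
        self.config = config
        self.model_kwargs = model_kwargs

def from_pretrained_sketch(config=None, **kwargs):
    if config is not None:
        # A config was provided: kwargs go straight to the model's
        # __init__ (the config is assumed to be fully up to date).
        return ToyModel(config, **kwargs)
    # No config provided: keys matching configuration attributes
    # update the config first; the remainder goes to the model.
    config = ToyConfig()
    model_kwargs = {}
    for key, value in kwargs.items():
        if hasattr(config, key):
            setattr(config, key, value)
        else:
            model_kwargs[key] = value
    return ToyModel(config, **model_kwargs)

model = from_pretrained_sketch(output_attention=True, extra="x")
# model.config.output_attention is True; "extra" reached the model
```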
...@@ -406,44 +416,46 @@ class AutoModelForSequenceClassification(object):
    To train the model, you should first set it back in training mode with `model.train()`

    Params:
        pretrained_model_name_or_path: either:

            - a string with the `shortcut name` of a pre-trained model to load from cache or download, e.g.: ``bert-base-uncased``.
            - a path to a `directory` containing model weights saved using :func:`~pytorch_transformers.PreTrainedModel.save_pretrained`, e.g.: ``./my_model_directory/``.
            - a path or url to a `tensorflow index checkpoint file` (e.g. `./tf_model/model.ckpt.index`). In this case, ``from_tf`` should be set to True and a configuration object should be provided as the ``config`` argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

        model_args: (`optional`) Sequence of positional arguments:
            All remaining positional arguments will be passed to the underlying model's ``__init__`` method.

        config: (`optional`) instance of a class derived from :class:`~pytorch_transformers.PretrainedConfig`:
            Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

            - the model is a model provided by the library (loaded with the `shortcut name` string of a pretrained model), or
            - the model was saved using :func:`~pytorch_transformers.PreTrainedModel.save_pretrained` and is reloaded by supplying the save directory, or
            - the model is loaded by supplying a local directory as ``pretrained_model_name_or_path`` and a configuration JSON file named `config.json` is found in the directory.

        state_dict: (`optional`) dict:
            An optional state dictionary for the model to use instead of a state dictionary loaded from the saved weights file.
            This option can be used if you want to create a model from a pretrained configuration but load your own weights.
            In this case though, you should check whether using :func:`~pytorch_transformers.PreTrainedModel.save_pretrained` and :func:`~pytorch_transformers.PreTrainedModel.from_pretrained` is not a simpler option.

        cache_dir: (`optional`) string:
            Path to a directory in which a downloaded pre-trained model
            configuration should be cached if the standard cache should not be used.

        force_download: (`optional`) boolean, default False:
            Force (re-)downloading the model weights and configuration files, overriding the cached versions if they exist.

        proxies: (`optional`) dict, default None:
            A dictionary of proxy servers to use by protocol or endpoint, e.g.: {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}.
            The proxies are used on each request.

        output_loading_info: (`optional`) boolean:
            Set to ``True`` to also return a dictionary containing missing keys, unexpected keys and error messages.

        kwargs: (`optional`) Remaining dictionary of keyword arguments:
            Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g. ``output_attention=True``). Behaves differently depending on whether a ``config`` is provided or automatically loaded:

            - If a configuration is provided with ``config``, ``**kwargs`` will be passed directly to the underlying model's ``__init__`` method (we assume all relevant updates to the configuration have already been done).
            - If a configuration is not provided, ``kwargs`` will first be passed to the configuration class loading function (:func:`~pytorch_transformers.PretrainedConfig.from_pretrained`). Each key of ``kwargs`` that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's ``__init__`` function.
    Examples::
...@@ -509,44 +521,46 @@ class AutoModelForQuestionAnswering(object):
    To train the model, you should first set it back in training mode with `model.train()`

    Params:
        pretrained_model_name_or_path: either:

            - a string with the `shortcut name` of a pre-trained model to load from cache or download, e.g.: ``bert-base-uncased``.
            - a path to a `directory` containing model weights saved using :func:`~pytorch_transformers.PreTrainedModel.save_pretrained`, e.g.: ``./my_model_directory/``.
            - a path or url to a `tensorflow index checkpoint file` (e.g. `./tf_model/model.ckpt.index`). In this case, ``from_tf`` should be set to True and a configuration object should be provided as the ``config`` argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

        model_args: (`optional`) Sequence of positional arguments:
            All remaining positional arguments will be passed to the underlying model's ``__init__`` method.

        config: (`optional`) instance of a class derived from :class:`~pytorch_transformers.PretrainedConfig`:
            Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

            - the model is a model provided by the library (loaded with the `shortcut name` string of a pretrained model), or
            - the model was saved using :func:`~pytorch_transformers.PreTrainedModel.save_pretrained` and is reloaded by supplying the save directory, or
            - the model is loaded by supplying a local directory as ``pretrained_model_name_or_path`` and a configuration JSON file named `config.json` is found in the directory.

        state_dict: (`optional`) dict:
            An optional state dictionary for the model to use instead of a state dictionary loaded from the saved weights file.
            This option can be used if you want to create a model from a pretrained configuration but load your own weights.
            In this case though, you should check whether using :func:`~pytorch_transformers.PreTrainedModel.save_pretrained` and :func:`~pytorch_transformers.PreTrainedModel.from_pretrained` is not a simpler option.

        cache_dir: (`optional`) string:
            Path to a directory in which a downloaded pre-trained model
            configuration should be cached if the standard cache should not be used.

        force_download: (`optional`) boolean, default False:
            Force (re-)downloading the model weights and configuration files, overriding the cached versions if they exist.

        proxies: (`optional`) dict, default None:
            A dictionary of proxy servers to use by protocol or endpoint, e.g.: {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}.
            The proxies are used on each request.

        output_loading_info: (`optional`) boolean:
            Set to ``True`` to also return a dictionary containing missing keys, unexpected keys and error messages.

        kwargs: (`optional`) Remaining dictionary of keyword arguments:
            Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g. ``output_attention=True``). Behaves differently depending on whether a ``config`` is provided or automatically loaded:

            - If a configuration is provided with ``config``, ``**kwargs`` will be passed directly to the underlying model's ``__init__`` method (we assume all relevant updates to the configuration have already been done).
            - If a configuration is not provided, ``kwargs`` will first be passed to the configuration class loading function (:func:`~pytorch_transformers.PretrainedConfig.from_pretrained`). Each key of ``kwargs`` that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's ``__init__`` function.
    Examples::
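The shape of the loading-info dictionary returned when ``output_loading_info=True`` can be illustrated with a toy key-comparison routine. This is a hypothetical sketch of the bookkeeping, not the library's weight-loading code:

```python
# Sketch: compare the keys a model expects against the keys present in
# a state dict, producing the missing/unexpected-keys report described
# above (illustrative names and toy data).
def load_state_dict_sketch(model_keys, state_dict):
    missing_keys = [k for k in model_keys if k not in state_dict]
    unexpected_keys = [k for k in state_dict if k not in model_keys]
    return {
        'missing_keys': missing_keys,        # expected but not supplied
        'unexpected_keys': unexpected_keys,  # supplied but not expected
        'error_msgs': [],                    # shape/type mismatches etc.
    }

info = load_state_dict_sketch(
    model_keys=['embed.weight', 'out.weight'],
    state_dict={'embed.weight': None, 'cls.bias': None})
# info == {'missing_keys': ['out.weight'],
#          'unexpected_keys': ['cls.bias'], 'error_msgs': []}
```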
...
...@@ -59,6 +59,12 @@ if not six.PY2:
            fn.__doc__ = ''.join(docstr) + fn.__doc__
            return fn
        return docstring_decorator

    def add_end_docstrings(*docstr):
        def docstring_decorator(fn):
            fn.__doc__ = fn.__doc__ + ''.join(docstr)
            return fn
        return docstring_decorator
else:
    # Not possible to update class docstrings on python2
    def add_start_docstrings(*docstr):
...@@ -66,6 +72,11 @@ else:
            return fn
        return docstring_decorator

    def add_end_docstrings(*docstr):
        def docstring_decorator(fn):
            return fn
        return docstring_decorator
class PretrainedConfig(object):
    r""" Base class for all configuration classes.
...
...@@ -69,15 +69,25 @@ class AutoTokenizer(object):
        - contains `roberta`: RobertaTokenizer (RoBERTa model)

    Params:
        pretrained_model_name_or_path: either:

            - a string with the `shortcut name` of a predefined tokenizer to load from cache or download, e.g.: ``bert-base-uncased``.
            - a path to a `directory` containing vocabulary files required by the tokenizer, for instance saved using the :func:`~pytorch_transformers.PreTrainedTokenizer.save_pretrained` method, e.g.: ``./my_model_directory/``.
            - (not applicable to all derived classes) a path or url to a single saved vocabulary file if and only if the tokenizer requires only a single vocabulary file (e.g. Bert, XLNet), e.g.: ``./my_model_directory/vocab.txt``.

        cache_dir: (`optional`) string:
            Path to a directory in which downloaded predefined tokenizer vocabulary files should be cached if the standard cache should not be used.

        force_download: (`optional`) boolean, default False:
            Force (re-)downloading the vocabulary files, overriding the cached versions if they exist.

        proxies: (`optional`) dict, default None:
            A dictionary of proxy servers to use by protocol or endpoint, e.g.: {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}.
            The proxies are used on each request.

        inputs: (`optional`) positional arguments:
            Will be passed to the Tokenizer ``__init__`` method.

        kwargs: (`optional`) keyword arguments:
            Will be passed to the Tokenizer ``__init__`` method. Can be used to set special tokens like ``bos_token``, ``eos_token``, ``unk_token``, ``sep_token``, ``pad_token``, ``cls_token``, ``mask_token``, ``additional_special_tokens``. See parameters in the docstring of :class:`~pytorch_transformers.PreTrainedTokenizer` for details.
    Examples::
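How the special-token keyword arguments flow into a tokenizer's ``__init__`` can be sketched with a toy class; `ToyTokenizer` is hypothetical and real tokenizers accept many more parameters:

```python
# Toy sketch: keyword arguments forwarded by from_pretrained override
# the tokenizer's default special tokens.
class ToyTokenizer:
    def __init__(self, unk_token="[UNK]", pad_token="[PAD]", **kwargs):
        self.unk_token = unk_token
        self.pad_token = pad_token

# Overriding one special token while keeping the other default:
tok = ToyTokenizer(unk_token="<unk>")
# tok.unk_token == "<unk>", tok.pad_token == "[PAD]"
```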
...