"docs/vscode:/vscode.git/clone" did not exist on "85817d98fb60977c97e3014196a462b732d2ed1a"
Unverified commit 2642d8d0 authored by Joao Gante, committed by GitHub

Docs: add `kwargs` type to fix formatting (#24733)

parent 5739726f
@@ -1608,7 +1608,7 @@ class WhisperForConditionalGeneration(WhisperPreTrainedModel):
Whether to return token-level timestamps with the text. This can be used with or without the
`return_timestamps` option. To get word-level timestamps, use the tokenizer to group the tokens into
words.
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Ad hoc parametrization of `generate_config` and/or additional model-specific kwargs that will be
forwarded to the `forward` function of the model. If the model is an encoder-decoder model, encoder
specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with *decoder_*.
......
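For reference, a minimal sketch of how these pass-through `kwargs` are used with Whisper generation; the checkpoint name, the silent dummy audio, and the specific generation overrides are illustrative choices, not part of this diff.

```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Placeholder checkpoint; any Whisper checkpoint behaves the same way here.
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# One second of silence at 16 kHz, just to have input features to generate from.
inputs = processor(torch.zeros(16000).numpy(), sampling_rate=16000, return_tensors="pt")

# `max_new_tokens` and `num_beams` are not named parameters of Whisper's generate();
# they travel through **kwargs and override the generation config ad hoc.
generated_ids = model.generate(
    inputs.input_features,
    return_timestamps=True,
    max_new_tokens=32,
    num_beams=2,
)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```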
@@ -201,7 +201,7 @@ class AdamWeightDecay(Adam):
`include_in_weight_decay` is passed, the names in it will supersede this list.
name (`str`, *optional*, defaults to 'AdamWeightDecay'):
Optional name for the operations created when applying gradients.
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Keyword arguments. Allowed to be {`clipnorm`, `clipvalue`, `lr`, `decay`}. `clipnorm` is clip gradients by
norm; `clipvalue` is clip gradients by value, `decay` is included for backward compatibility to allow time
inverse decay of learning rate. `lr` is included for backward compatibility, recommended to use
......
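For illustration, a small TensorFlow-side sketch of the allowed pass-through kwargs; the hyperparameter values are arbitrary.

```python
from transformers import AdamWeightDecay

# `clipnorm` is one of the allowed pass-through kwargs ({clipnorm, clipvalue, lr, decay});
# it is handed to the underlying Adam optimizer rather than being a named argument here.
optimizer = AdamWeightDecay(
    learning_rate=5e-5,
    weight_decay_rate=0.01,
    exclude_from_weight_decay=["LayerNorm", "bias"],
    clipnorm=1.0,
)
```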
@@ -634,10 +634,10 @@ def pipeline(
Whether or not to allow for custom code defined on the Hub in their own modeling, configuration,
tokenization or even pipeline files. This option should only be set to `True` for repositories you trust
and in which you have read the code, as it will execute code present on the Hub on your local machine.
- model_kwargs:
+ model_kwargs (`Dict[str, Any]`, *optional*):
Additional dictionary of keyword arguments passed along to the model's `from_pretrained(...,
**model_kwargs)` function.
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Additional keyword arguments passed along to the specific pipeline init (see the documentation for the
corresponding pipeline class for possible values).
......
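A quick sketch of the two kinds of keyword arguments documented above; the checkpoint, cache directory, and `top_k` setting are placeholder choices.

```python
from transformers import pipeline

# `model_kwargs` is forwarded to the model's from_pretrained(...); remaining keyword
# arguments (here `top_k`) go to the specific pipeline class's __init__.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # placeholder checkpoint
    model_kwargs={"cache_dir": "./hf_cache"},
    top_k=None,  # return scores for every label
)
print(classifier("This pull request fixes the docstring formatting."))
```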
@@ -111,7 +111,7 @@ class ProcessorMixin(PushToHubMixin):
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
namespace).
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
"""
os.makedirs(save_directory, exist_ok=True)
......
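For context, a minimal sketch of how the extra kwargs reach `push_to_hub`; the checkpoint and local directory names are placeholders, and pushing assumes you are logged in to the Hub.

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("openai/whisper-tiny")  # placeholder checkpoint

# `commit_message` is not a named parameter of save_pretrained; with push_to_hub=True
# it is forwarded to the push_to_hub call (requires a prior `huggingface-cli login`).
processor.save_pretrained(
    "whisper-tiny-copy",
    push_to_hub=True,
    commit_message="Upload processor",
)
```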
@@ -834,7 +834,7 @@ class PreTrainedTokenizer(PreTrainedTokenizerBase):
Whether or not the input is already pre-tokenized (e.g., split into words). If set to `True`, the
tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace)
which it will tokenize. This is useful for NER or token classification.
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Keyword arguments to use for the tokenization.
Returns:
......
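The exact method body is elided above, but here is a hedged sketch of the documented flags as they appear on the tokenizer call itself; the checkpoint name is a placeholder.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint

# With is_split_into_words=True the input is treated as already split into words;
# any extra kwargs are forwarded to the tokenization step.
enc = tok(["Hugging", "Face", "Transformers"], is_split_into_words=True)
print(tok.convert_ids_to_tokens(enc["input_ids"]))
```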
@@ -2133,7 +2133,7 @@ class PreTrainedTokenizerBase(SpecialTokensMixin, PushToHubMixin):
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
namespace).
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
Returns:
......
@@ -630,7 +630,7 @@ class PreTrainedTokenizerFast(PreTrainedTokenizerBase):
special_tokens_map (`Dict[str, str]`, *optional*):
If you want to rename some of the special tokens this tokenizer uses, pass along a mapping old special
token name to new special token name in this argument.
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Additional keyword arguments passed along to the trainer from the 🤗 Tokenizers library.
Returns:
......
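A small sketch of training a new tokenizer and passing trainer kwargs through; the base checkpoint, toy corpus, and `min_frequency` value are illustrative assumptions.

```python
from transformers import AutoTokenizer

old_tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder; must be a fast tokenizer

corpus = iter(["def add(a, b):", "    return a + b"])  # toy training corpus

# `vocab_size` is a named parameter; `min_frequency` is an extra kwarg forwarded to the
# underlying 🤗 Tokenizers trainer (a BPE trainer here, since GPT-2 uses BPE).
new_tok = old_tok.train_new_from_iterator(corpus, vocab_size=1000, min_frequency=2)
print(len(new_tok))
```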
@@ -704,7 +704,7 @@ class LocalAgent(Agent):
Args:
pretrained_model_name_or_path (`str` or `os.PathLike`):
The name of a repo on the Hub or a local path to a folder containing both model and tokenizer.
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Keyword arguments passed along to [`~PreTrainedModel.from_pretrained`].
Example:
......
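The docstring's own usage pattern, sketched out; `torch_dtype` is simply one example of a kwarg forwarded to `from_pretrained`, and the checkpoint is a multi-gigabyte download.

```python
import torch
from transformers import LocalAgent

# torch_dtype is forwarded to PreTrainedModel.from_pretrained via **kwargs.
# bigcode/starcoder is very large; substitute a smaller checkpoint to experiment.
agent = LocalAgent.from_pretrained("bigcode/starcoder", torch_dtype=torch.bfloat16)
agent.run("Draw me a picture of rivers and lakes.")
```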
@@ -1475,7 +1475,7 @@ class Trainer:
ignore_keys_for_eval (`List[str]`, *optional*)
A list of keys in the output of your model (if it is a dictionary) that should be ignored when
gathering predictions for evaluation during the training.
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Additional keyword arguments used to hide deprecated arguments
"""
if resume_from_checkpoint is False:
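A hedged sketch of the call site, assuming `trainer` is an already-configured `Trainer` whose `output_dir` contains a checkpoint; the ignored key is only an example.

```python
# `resume_from_checkpoint=True` only works if output_dir already holds a checkpoint;
# `ignore_keys_for_eval` drops that output key when gathering predictions for evaluation.
trainer.train(
    resume_from_checkpoint=True,
    ignore_keys_for_eval=["past_key_values"],
)
```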
@@ -3567,7 +3567,7 @@ class Trainer:
Message to commit while pushing.
blocking (`bool`, *optional*, defaults to `True`):
Whether the function should return only when the `git push` has finished.
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Additional keyword arguments passed along to [`~Trainer.create_model_card`].
Returns:
......
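For context, a sketch assuming `trainer` was built with `TrainingArguments(push_to_hub=True, ...)`; `language` and `finetuned_from` are examples of kwargs that end up in `create_model_card`.

```python
# `blocking=False` returns immediately while the upload continues in the background.
# `language` and `finetuned_from` are not named parameters of push_to_hub; they travel
# through **kwargs to Trainer.create_model_card.
trainer.push_to_hub(
    commit_message="Training complete",
    blocking=False,
    language="en",
    finetuned_from="bert-base-uncased",
)
```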
@@ -257,7 +257,7 @@ class DistributedSamplerWithLoop(DistributedSampler):
Dataset used for sampling.
batch_size (`int`):
The batch size used with this sampler
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
All other keyword arguments passed to `DistributedSampler`.
"""
......
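A minimal sketch of the pass-through kwargs on this internal sampler; passing `num_replicas`/`rank` explicitly avoids needing an initialized process group, and the toy dataset is only for illustration.

```python
import torch
from torch.utils.data import TensorDataset
from transformers.trainer_pt_utils import DistributedSamplerWithLoop

dataset = TensorDataset(torch.arange(10))  # toy dataset of 10 samples

# `shuffle`, `num_replicas` and `rank` are not parameters of DistributedSamplerWithLoop
# itself; they are forwarded to torch's DistributedSampler.
sampler = DistributedSamplerWithLoop(dataset, batch_size=4, num_replicas=2, rank=0, shuffle=False)
print(list(sampler))  # indices for rank 0, padded to a multiple of batch_size
```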