Unverified Commit 2642d8d0, authored Jul 11, 2023 by Joao Gante, committed by GitHub on Jul 11, 2023

Docs: add `kwargs` type to fix formatting (#24733)

Parent: 5739726f
Showing 10 changed files with 12 additions and 12 deletions
src/transformers/models/whisper/modeling_whisper.py (+1 / -1)
src/transformers/optimization_tf.py (+1 / -1)
src/transformers/pipelines/__init__.py (+2 / -2)
src/transformers/processing_utils.py (+1 / -1)
src/transformers/tokenization_utils.py (+1 / -1)
src/transformers/tokenization_utils_base.py (+1 / -1)
src/transformers/tokenization_utils_fast.py (+1 / -1)
src/transformers/tools/agents.py (+1 / -1)
src/transformers/trainer.py (+2 / -2)
src/transformers/trainer_pt_utils.py (+1 / -1)
src/transformers/models/whisper/modeling_whisper.py
@@ -1608,7 +1608,7 @@ class WhisperForConditionalGeneration(WhisperPreTrainedModel):
Whether to return token-level timestamps with the text. This can be used with or without the
`return_timestamps` option. To get word-level timestamps, use the tokenizer to group the tokens into
words.
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Ad hoc parametrization of `generate_config` and/or additional model-specific kwargs that will be
forwarded to the `forward` function of the model. If the model is an encoder-decoder model, encoder
specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with *decoder_*.
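As context for the hunk above, a minimal sketch of how extra `generate` kwargs reach the model; the checkpoint name and dummy features are illustrative, not part of the commit:

```python
import torch
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Dummy log-mel features with the expected (batch, n_mels, frames) shape; real inputs
# would come from WhisperProcessor(audio, sampling_rate=16000, return_tensors="pt").
input_features = torch.randn(1, 80, 3000)

# `return_timestamps` is a named argument of `generate`; `language` and `task` travel
# through the documented **kwargs and parametrize the generation config.
predicted_ids = model.generate(
    input_features,
    return_timestamps=True,
    language="en",
    task="transcribe",
)
```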
src/transformers/optimization_tf.py
@@ -201,7 +201,7 @@ class AdamWeightDecay(Adam):
`include_in_weight_decay` is passed, the names in it will supersede this list.
name (`str`, *optional*, defaults to 'AdamWeightDecay'):
Optional name for the operations created when applying gradients.
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Keyword arguments. Allowed to be {`clipnorm`, `clipvalue`, `lr`, `decay`}. `clipnorm` is clip gradients by
norm; `clipvalue` is clip gradients by value, `decay` is included for backward compatibility to allow time
inverse decay of learning rate. `lr` is included for backward compatibility, recommended to use
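For illustration, a minimal sketch of passing one of the allowed extra kwargs (`clipnorm`) to this optimizer; it assumes TensorFlow is installed and the values are arbitrary:

```python
from transformers import AdamWeightDecay

# `clipnorm` is one of the {clipnorm, clipvalue, lr, decay} kwargs forwarded to the
# underlying Keras Adam optimizer; the other arguments are regular AdamWeightDecay ones.
optimizer = AdamWeightDecay(
    learning_rate=3e-5,
    weight_decay_rate=0.01,
    exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"],
    clipnorm=1.0,
)
```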
src/transformers/pipelines/__init__.py
@@ -634,10 +634,10 @@ def pipeline(
Whether or not to allow for custom code defined on the Hub in their own modeling, configuration,
tokenization or even pipeline files. This option should only be set to `True` for repositories you trust
and in which you have read the code, as it will execute code present on the Hub on your local machine.
- model_kwargs:
+ model_kwargs (`Dict[str, Any]`, *optional*):
Additional dictionary of keyword arguments passed along to the model's `from_pretrained(...,
**model_kwargs)` function.
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Additional keyword arguments passed along to the specific pipeline init (see the documentation for the
corresponding pipeline class for possible values).
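A minimal sketch of the distinction between `model_kwargs` and the pipeline-level kwargs; the checkpoint and argument values are illustrative:

```python
import torch
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    # Forwarded to the model's `from_pretrained(..., **model_kwargs)`.
    model_kwargs={"torch_dtype": torch.float32},
    # Forwarded to the TextClassificationPipeline constructor itself.
    top_k=None,
)
print(classifier("Docstring formatting matters."))
```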
src/transformers/processing_utils.py
@@ -111,7 +111,7 @@ class ProcessorMixin(PushToHubMixin):
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
namespace).
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
"""
os.makedirs(save_directory, exist_ok=True)
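A minimal sketch of how the extra kwargs reach `push_to_hub` when saving a processor; the checkpoint, target directory, and commit message are illustrative, and pushing requires being logged in to the Hub:

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("openai/whisper-tiny")

# With `push_to_hub=True`, extra kwargs such as `commit_message` are forwarded to
# `PushToHubMixin.push_to_hub`; with `push_to_hub=False` they are simply unused.
processor.save_pretrained(
    "./whisper-tiny-processor",
    push_to_hub=True,
    commit_message="Upload processor",
)
```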
src/transformers/tokenization_utils.py
@@ -834,7 +834,7 @@ class PreTrainedTokenizer(PreTrainedTokenizerBase):
Whether or not the input is already pre-tokenized (e.g., split into words). If set to `True`, the
tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace)
which it will tokenize. This is useful for NER or token classification.
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Keyword arguments to use for the tokenization.
Returns:
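For illustration, a minimal sketch of pre-tokenized input together with a couple of the tokenization kwargs described above; the checkpoint name is illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# The input is already split into words; `padding` and `return_tensors` are examples
# of the keyword arguments used for the tokenization.
encoding = tokenizer(
    ["Hugging", "Face", "is", "based", "in", "New", "York"],
    is_split_into_words=True,
    padding=True,
    return_tensors="pt",
)
print(encoding["input_ids"].shape)
```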
src/transformers/tokenization_utils_base.py
@@ -2133,7 +2133,7 @@ class PreTrainedTokenizerBase(SpecialTokensMixin, PushToHubMixin):
Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
namespace).
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
Returns:
src/transformers/tokenization_utils_fast.py
@@ -630,7 +630,7 @@ class PreTrainedTokenizerFast(PreTrainedTokenizerBase):
special_tokens_map (`Dict[str, str]`, *optional*):
If you want to rename some of the special tokens this tokenizer uses, pass along a mapping old special
token name to new special token name in this argument.
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Additional keyword arguments passed along to the trainer from the 🤗 Tokenizers library.
Returns:
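A minimal sketch of training a new fast tokenizer from an existing one; the base checkpoint, toy corpus, and vocabulary size are illustrative:

```python
from transformers import AutoTokenizer

old_tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Toy corpus; in practice this would be an iterator over a real dataset.
corpus = ["def add(a, b):", "    return a + b", "print(add(1, 2))"]

# `vocab_size` is a regular argument; any additional kwargs would be forwarded to the
# trainer from the 🤗 Tokenizers library, as described in the docstring above.
new_tokenizer = old_tokenizer.train_new_from_iterator(corpus, vocab_size=500)
print(new_tokenizer.tokenize("def add(a, b):"))
```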
src/transformers/tools/agents.py
@@ -704,7 +704,7 @@ class LocalAgent(Agent):
Args:
pretrained_model_name_or_path (`str` or `os.PathLike`):
The name of a repo on the Hub or a local path to a folder containing both model and tokenizer.
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Keyword arguments passed along to [`~PreTrainedModel.from_pretrained`].
Example:
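The example referenced in the hunk is along these lines; a minimal sketch in which the checkpoint and prompt are illustrative and `device_map` assumes Accelerate is installed:

```python
import torch
from transformers import LocalAgent

# `device_map` and `torch_dtype` travel through **kwargs to `PreTrainedModel.from_pretrained`.
agent = LocalAgent.from_pretrained(
    "bigcode/starcoder",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
agent.run("Draw me a picture of rivers and lakes.")
```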
src/transformers/trainer.py
@@ -1475,7 +1475,7 @@ class Trainer:
ignore_keys_for_eval (`List[str]`, *optional*)
A list of keys in the output of your model (if it is a dictionary) that should be ignored when
gathering predictions for evaluation during the training.
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Additional keyword arguments used to hide deprecated arguments
"""
if resume_from_checkpoint is False:
@@ -3567,7 +3567,7 @@ class Trainer:
Message to commit while pushing.
blocking (`bool`, *optional*, defaults to `True`):
Whether the function should return only when the `git push` has finished.
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
Additional keyword arguments passed along to [`~Trainer.create_model_card`].
Returns:
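As context for both hunks above, a minimal self-contained sketch of a `Trainer` run; the checkpoint, toy dataset, and output directory are illustrative:

```python
import torch
from torch.utils.data import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)


class ToyDataset(Dataset):
    """A tiny in-memory dataset so the sketch stays self-contained."""

    def __init__(self, tokenizer):
        texts = ["great library", "terrible docs", "great docs", "terrible bug"]
        self.labels = [1, 0, 1, 0]
        self.encodings = tokenizer(texts, truncation=True, padding=True)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item


tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

args = TrainingArguments(
    output_dir="./toy-trainer",
    num_train_epochs=1,
    per_device_train_batch_size=2,
)
trainer = Trainer(model=model, args=args, train_dataset=ToyDataset(tokenizer))

# `resume_from_checkpoint` and `ignore_keys_for_eval` are the documented arguments of
# `train`; the **kwargs in its signature only absorb deprecated arguments.
trainer.train(resume_from_checkpoint=False)

# When the Hub is configured, extra kwargs to `push_to_hub` (e.g. `finetuned_from`)
# are forwarded to `Trainer.create_model_card`, per the second hunk above:
# trainer.push_to_hub(commit_message="End of training", finetuned_from="distilbert-base-uncased")
```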
src/transformers/trainer_pt_utils.py
@@ -257,7 +257,7 @@ class DistributedSamplerWithLoop(DistributedSampler):
Dataset used for sampling.
batch_size (`int`):
The batch size used with this sampler
- kwargs:
+ kwargs (`Dict[str, Any]`, *optional*):
All other keyword arguments passed to `DistributedSampler`.
"""
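A minimal sketch of the sampler with the `DistributedSampler` kwargs passed explicitly so that no process group needs to be initialized; the dataset and values are illustrative:

```python
import torch
from torch.utils.data import TensorDataset
from transformers.trainer_pt_utils import DistributedSamplerWithLoop

dataset = TensorDataset(torch.arange(10))

# `num_replicas`, `rank`, and `shuffle` are ordinary `DistributedSampler` arguments,
# forwarded through **kwargs; `batch_size` is the sampler's own argument.
sampler = DistributedSamplerWithLoop(dataset, batch_size=4, num_replicas=2, rank=0, shuffle=False)

# Rank 0 sees 5 of the 10 samples, looped to a multiple of the batch size (8 indices).
print(list(iter(sampler)))
```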