chenpangpang / transformers / Commits
"vscode:/vscode.git/clone" did not exist on "aef4cf8c52ce81469190d9b7d38695c1f27b35cf"
Unverified commit 2642d8d0, authored Jul 11, 2023 by Joao Gante and committed by GitHub on Jul 11, 2023.
Docs: add `kwargs` type to fix formatting (#24733)
parent 5739726f

Changes 30

Showing 20 changed files with 26 additions and 33 deletions (+26, -33)
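The same one-line fix repeats across the files below: each docstring's `kwargs` entry gains an explicit type and optionality marker so the doc builder renders it like any other argument. A minimal sketch of the convention follows; the function name and the first two argument entries are illustrative, only the `kwargs` entry mirrors this commit:

    # Docstring convention sketch: the argument line carries the type and *optional* marker,
    # and the description is indented beneath it. Names other than `kwargs` are illustrative.
    def save_pretrained(save_directory, push_to_hub: bool = False, **kwargs):
        """
        Save a configuration object to `save_directory`.

        Args:
            save_directory (`str` or `os.PathLike`):
                Directory where the file will be saved.
            push_to_hub (`bool`, *optional*, defaults to `False`):
                Whether or not to push your model to the Hugging Face model hub after saving it.
            kwargs (`Dict[str, Any]`, *optional*):
                Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
        """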
src/transformers/configuration_utils.py                  +1 -1
src/transformers/feature_extraction_utils.py             +1 -1
src/transformers/generation/configuration_utils.py       +1 -1
src/transformers/generation/flax_logits_process.py       +1 -1
src/transformers/generation/flax_utils.py                +1 -1
src/transformers/generation/logits_process.py            +1 -1
src/transformers/generation/stopping_criteria.py         +1 -1
src/transformers/generation/tf_logits_process.py         +1 -1
src/transformers/generation/tf_utils.py                  +1 -1
src/transformers/generation/utils.py                     +1 -1
src/transformers/hf_argparser.py                         +2 -2
src/transformers/image_processing_utils.py               +1 -1
src/transformers/modeling_flax_utils.py                  +1 -1
src/transformers/modeling_tf_utils.py                    +4 -5
src/transformers/modeling_utils.py                       +1 -2
src/transformers/models/jukebox/tokenization_jukebox.py  +0 -5
src/transformers/models/musicgen/modeling_musicgen.py    +2 -2
src/transformers/models/rag/modeling_rag.py              +2 -2
src/transformers/models/rag/modeling_tf_rag.py           +2 -2
src/transformers/models/whisper/modeling_tf_whisper.py   +1 -1
src/transformers/configuration_utils.py
@@ -432,7 +432,7 @@ class PretrainedConfig(PushToHubMixin):
            Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
            repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
            namespace).
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
    """
    if os.path.isfile(save_directory):
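Note: the forwarding this docstring describes can be exercised as below. The directory, commit message, and `private` flag are illustrative and are not part of the commit; the upload step requires an authenticated Hugging Face account.

    # Sketch of save_pretrained(..., push_to_hub=True): extra keyword arguments such as
    # `commit_message` and `private` are handed on to `push_to_hub()`. Values are illustrative.
    from transformers import AutoConfig

    config = AutoConfig.from_pretrained("bert-base-uncased")
    config.save_pretrained(
        "./saved-config",
        push_to_hub=True,
        commit_message="Add config",  # forwarded to push_to_hub()
        private=True,                 # forwarded to push_to_hub()
    )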
src/transformers/feature_extraction_utils.py
@@ -379,7 +379,7 @@ class FeatureExtractionMixin(PushToHubMixin):
            Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
            repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
            namespace).
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
    """
    if os.path.isfile(save_directory):
src/transformers/generation/configuration_utils.py
@@ -353,7 +353,7 @@ class GenerationConfig(PushToHubMixin):
            Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
            repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
            namespace).
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
    """
    config_file_name = config_file_name if config_file_name is not None else GENERATION_CONFIG_NAME
src/transformers/generation/flax_logits_process.py
@@ -38,7 +38,7 @@ LOGITS_PROCESSOR_INPUTS_DOCSTRING = r"""
        scores (`jnp.ndarray` of shape `(batch_size, config.vocab_size)`):
            Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam
            search or log softmax for each vocabulary token when using beam search
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Additional logits processor specific kwargs.

    Return:
src/transformers/generation/flax_utils.py
@@ -296,7 +296,7 @@ class FlaxGenerationMixin:
            Custom logits processors that complement the default logits processors built from arguments and
            generation config. If a logit processor is passed that is already created with the arguments or a
            generation config an error is thrown. This feature is intended for advanced users.
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Ad hoc parametrization of `generate_config` and/or additional model-specific kwargs that will be
            forwarded to the `forward` function of the model. If the model is an encoder-decoder model, encoder
            specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with *decoder_*.
src/transformers/generation/logits_process.py
@@ -39,7 +39,7 @@ LOGITS_PROCESSOR_INPUTS_DOCSTRING = r"""
        scores (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`):
            Prediction scores of a language modeling head. These can be logits for each vocabulary when not using beam
            search or log softmax for each vocabulary token when using beam search
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Additional logits processor specific kwargs.

    Return:
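As a side note, the `__call__` signature this docstring block documents is what custom processors implement; a minimal sketch, with a made-up class name and temperature value:

    # Minimal custom logits processor matching the documented (input_ids, scores) call signature.
    # Processor-specific kwargs would be routed by LogitsProcessorList to processors that accept them.
    import torch
    from transformers import LogitsProcessor, LogitsProcessorList

    class ScaleByTemperature(LogitsProcessor):
        def __init__(self, temperature: float = 0.7):
            self.temperature = temperature

        def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
            return scores / self.temperature

    processors = LogitsProcessorList([ScaleByTemperature(0.5)])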
src/transformers/generation/stopping_criteria.py
@@ -24,7 +24,7 @@ STOPPING_CRITERIA_INPUTS_DOCSTRING = r"""
        scores (`torch.FloatTensor` of shape `(batch_size, config.vocab_size)`):
            Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax
            or scores for each vocabulary token after SoftMax.
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Additional stopping criteria specific kwargs.

    Return:
src/transformers/generation/tf_logits_process.py
@@ -42,7 +42,7 @@ TF_LOGITS_PROCESSOR_INPUTS_DOCSTRING = r"""
        cur_len (`int`):
            The current length of valid input sequence tokens. In the TF implementation, the input_ids' sequence length
            is the maximum length generate can produce, and we need to know which of its tokens are valid.
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Additional logits processor specific kwargs.

    Return:
src/transformers/generation/tf_utils.py
@@ -705,7 +705,7 @@ class TFGenerationMixin:
        seed (`List[int]`, *optional*):
            Random seed to control sampling, containing two integers, used when `do_sample` is `True`. See the
            `seed` argument from stateless functions in `tf.random`.
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Ad hoc parametrization of `generate_config` and/or additional model-specific kwargs that will be
            forwarded to the `forward` function of the model. If the model is an encoder-decoder model, encoder
            specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with *decoder_*.
src/transformers/generation/utils.py
@@ -1225,7 +1225,7 @@ class GenerationMixin:
        streamer (`BaseStreamer`, *optional*):
            Streamer object that will be used to stream the generated sequences. Generated tokens are passed
            through `streamer.put(token_ids)` and the streamer is responsible for any further processing.
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Ad hoc parametrization of `generate_config` and/or additional model-specific kwargs that will be
            forwarded to the `forward` function of the model. If the model is an encoder-decoder model, encoder
            specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with *decoder_*.
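For reference, the ad hoc parametrization described above looks like this in practice; the checkpoint, prompt, and generation values are illustrative and not part of the commit:

    # Sketch of ad hoc generation kwargs: `max_new_tokens` and `num_beams` override the
    # generation config for this call only; unrecognized kwargs are forwarded to the model's
    # forward pass (with a `decoder_` prefix for decoder-specific arguments).
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("t5-small")
    model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

    inputs = tokenizer("translate English to German: Hello, world!", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20, num_beams=2)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))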
src/transformers/hf_argparser.py
@@ -122,8 +122,8 @@ class HfArgumentParser(ArgumentParser):
    Args:
        dataclass_types:
            Dataclass type, or list of dataclass types for which we will "fill" instances with the parsed args.
-       kwargs:
-           (Optional) Passed to `argparse.ArgumentParser()` in the regular way.
+       kwargs (`Dict[str, Any]`, *optional*):
+           Passed to `argparse.ArgumentParser()` in the regular way.
    """
    # To make the default appear when using --help
    if "formatter_class" not in kwargs:
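For reference, the pass-through described above works as below; the dataclass and CLI values are hypothetical:

    # Sketch of HfArgumentParser kwargs: `description` and `prog` are plain
    # argparse.ArgumentParser() arguments passed through "in the regular way".
    from dataclasses import dataclass, field
    from transformers import HfArgumentParser

    @dataclass
    class ExampleArguments:
        learning_rate: float = field(default=5e-5, metadata={"help": "Optimizer learning rate."})

    parser = HfArgumentParser(ExampleArguments, description="Example script", prog="train.py")
    (example_args,) = parser.parse_args_into_dataclasses(args=["--learning_rate", "3e-5"])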
src/transformers/image_processing_utils.py
@@ -208,7 +208,7 @@ class ImageProcessingMixin(PushToHubMixin):
            Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the
            repository you want to push to with `repo_id` (will default to the name of `save_directory` in your
            namespace).
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
    """
    if os.path.isfile(save_directory):
src/transformers/modeling_flax_utils.py
@@ -1043,7 +1043,7 @@ class FlaxPreTrainedModel(PushToHubMixin, FlaxGenerationMixin):
        </Tip>

-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
    """
    if os.path.isfile(save_directory):
src/transformers/modeling_tf_utils.py
@@ -2371,8 +2371,7 @@ class TFPreTrainedModel(tf.keras.Model, TFModelUtilsMixin, TFGenerationMixin, Pu
            Whether or not to create a PR with the uploaded files or directly commit.
        safe_serialization (`bool`, *optional*, defaults to `False`):
            Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
    """
    if os.path.isfile(save_directory):

@@ -3166,7 +3165,7 @@ class TFConv1D(tf.keras.layers.Layer):
            The number of input features.
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation to use to initialize the weights.
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Additional keyword arguments passed along to the `__init__` of `tf.keras.layers.Layer`.
    """

@@ -3208,7 +3207,7 @@ class TFSharedEmbeddings(tf.keras.layers.Layer):
        initializer_range (`float`, *optional*):
            The standard deviation to use when initializing the weights. If no value is provided, it will default to
            \\(1/\sqrt{hidden\_size}\\).
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Additional keyword arguments passed along to the `__init__` of `tf.keras.layers.Layer`.
    """
    # TODO (joao): flagged for delection due to embeddings refactor

@@ -3322,7 +3321,7 @@ class TFSequenceSummary(tf.keras.layers.Layer):
            - **summary_last_dropout** (`float`)-- Optional dropout probability after the projection and activation.
        initializer_range (`float`, defaults to 0.02): The standard deviation to use to initialize the weights.
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Additional keyword arguments passed along to the `__init__` of `tf.keras.layers.Layer`.
    """
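For reference, the layer `kwargs` documented in the three hunks above are ordinary Keras `Layer` constructor arguments; the shapes and layer name below are illustrative:

    # Sketch of the Keras Layer kwargs pass-through: `name` and `dtype` are standard
    # tf.keras.layers.Layer arguments forwarded by TFConv1D.__init__.
    import tensorflow as tf
    from transformers.modeling_tf_utils import TFConv1D

    conv = TFConv1D(nf=64, nx=32, initializer_range=0.02, name="example_conv1d", dtype="float32")
    hidden = conv(tf.zeros((1, 10, 32)))  # (batch, seq_len, nx) -> (batch, seq_len, nf)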
src/transformers/modeling_utils.py
@@ -1700,8 +1700,7 @@ class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin, PushToHubMix
            Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).
        variant (`str`, *optional*):
            If specified, weights are saved in the format pytorch_model.<variant>.bin.
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
    """
    # Checks if the model has been loaded in 8-bit
src/transformers/models/jukebox/tokenization_jukebox.py
@@ -202,9 +202,6 @@ class JukeboxTokenizer(PreTrainedTokenizer):
    """
    Performs any necessary transformations before tokenization.

-   This method should pop the arguments from kwargs and return the remaining `kwargs` as well. We test the
-   `kwargs` at the end of the encoding process to be sure all the arguments have been used.

    Args:
        artist (`str`):
            The artist name to prepare. This will mostly lower the string

@@ -216,8 +213,6 @@ class JukeboxTokenizer(PreTrainedTokenizer):
            Whether or not the input is already pre-tokenized (e.g., split into words). If set to `True`, the
            tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace)
            which it will tokenize. This is useful for NER or token classification.
-       kwargs:
-           Keyword arguments to use for the tokenization.
    """
    for idx in range(len(self.version)):
        if self.version[idx] == "v3":
src/transformers/models/musicgen/modeling_musicgen.py
@@ -1228,7 +1228,7 @@ class MusicgenForCausalLM(MusicgenPreTrainedModel):
            generation config an error is thrown. This feature is intended for advanced users.
        synced_gpus (`bool`, *optional*, defaults to `False`):
            Whether to continue running the while loop until max_length (needed for ZeRO stage 3)
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Ad hoc parametrization of `generate_config` and/or additional model-specific kwargs that will be
            forwarded to the `forward` function of the model. If the model is an encoder-decoder model, encoder
            specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with *decoder_*.

@@ -2225,7 +2225,7 @@ class MusicgenForConditionalGeneration(PreTrainedModel):
            generation config an error is thrown. This feature is intended for advanced users.
        synced_gpus (`bool`, *optional*, defaults to `False`):
            Whether to continue running the while loop until max_length (needed for ZeRO stage 3)
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Ad hoc parametrization of `generate_config` and/or additional model-specific kwargs that will be
            forwarded to the `forward` function of the model. If the model is an encoder-decoder model, encoder
            specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with *decoder_*.
src/transformers/models/rag/modeling_rag.py
@@ -962,7 +962,7 @@ class RagSequenceForGeneration(RagPreTrainedModel):
            Number of beams for beam search. 1 means no beam search.
        n_docs (`int`, *optional*, defaults to `config.n_docs`)
            Number of documents to retrieve and/or number of documents for which to generate an answer.
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Additional kwargs will be passed to [`~generation.GenerationMixin.generate`].

    Return:

@@ -1444,7 +1444,7 @@ class RagTokenForGeneration(RagPreTrainedModel):
            Custom stopping criteria that complement the default stopping criteria built from arguments and a
            model's config. If a stopping criteria is passed that is already created with the arguments or a
            model's config an error is thrown.
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Ad hoc parametrization of `generate_config` and/or additional model-specific kwargs that will be
            forwarded to the `forward` function of the model.
src/transformers/models/rag/modeling_tf_rag.py
@@ -1051,7 +1051,7 @@ class TFRagTokenForGeneration(TFRagPreTrainedModel, TFCausalLanguageModelingLoss
            Custom logits processors that complement the default logits processors built from arguments and a
            model's config. If a logit processor is passed that is already created with the arguments or a model's
            config an error is thrown.
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Ad hoc parametrization of `generate_config` and/or additional model-specific kwargs that will be
            forwarded to the `forward` function of the model.

@@ -1629,7 +1629,7 @@ class TFRagSequenceForGeneration(TFRagPreTrainedModel, TFCausalLanguageModelingL
            Number of beams for beam search. 1 means no beam search.
        n_docs (`int`, *optional*, defaults to `config.n_docs`)
            Number of documents to retrieve and/or number of documents for which to generate an answer.
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Additional kwargs will be passed to [`~generation.GenerationMixin.generate`]

    Return:
src/transformers/models/whisper/modeling_tf_whisper.py
@@ -1394,7 +1394,7 @@ class TFWhisperForConditionalGeneration(TFWhisperPreTrainedModel, TFCausalLangua
            Whether to return token-level timestamps with the text. This can be used with or without the
            `return_timestamps` option. To get word-level timestamps, use the tokenizer to group the tokens into
            words.
-       kwargs:
+       kwargs (`Dict[str, Any]`, *optional*):
            Ad hoc parametrization of `generate_config` and/or additional model-specific kwargs that will be
            forwarded to the `forward` function of the model. If the model is an encoder-decoder model, encoder
            specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with *decoder_*.