chenpangpang / transformers · Commits · 227cd54a

Unverified commit 227cd54a, authored Feb 27, 2024 by Sadra Barikbin, committed by GitHub on Feb 27, 2024

Fix a few typos in `GenerationMixin`'s docstring (#29277)

Co-authored-by: Joao Gante <joao@huggingface.co>

Parent: ddf7ac42
Changes: 1 changed file, with 9 additions and 9 deletions

src/transformers/generation/utils.py (+9, -9)
@@ -143,7 +143,7 @@ class GenerateEncoderDecoderOutput(ModelOutput):
     Outputs of encoder-decoder generation models, when using non-beam methods.

     Args:
-        sequences (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
+        sequences (`torch.LongTensor` of shape `(batch_size*num_return_sequences, sequence_length)`):
             The generated sequences. The second dimension (sequence_length) is either equal to `max_length` or shorter
             if all batches finished early due to the `eos_token_id`.
         scores (`tuple(torch.FloatTensor)` *optional*, returned when `output_scores=True` is passed or when `config.output_scores=True`):
@@ -204,7 +204,7 @@ class GenerateBeamDecoderOnlyOutput(ModelOutput):
             Beam transition scores for each vocabulary token at each generation step. Beam transition scores consisting
             of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam.
             Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token),
-            with each tensor of shape `(batch_size*num_beams*num_return_sequences, config.vocab_size)`.
+            with each tensor of shape `(batch_size*num_beams, config.vocab_size)`.
         logits (`tuple(torch.FloatTensor)` *optional*, returned when `output_logits=True` is passed or when `config.output_logits=True`):
             Unprocessed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax)
             at each generation step. Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for
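As a hedged aside on the two shapes touched by these hunks (plain Python, not transformers code; both helper names are hypothetical): non-beam `sequences` stacks `batch_size*num_return_sequences` rows, while each beam-search `scores` tensor has `batch_size*num_beams` rows, since `num_return_sequences` only selects among finished beams at the end.

```python
# Illustrative sketch of the first-dimension arithmetic in the docstrings
# above; these helpers are not part of transformers.

def sequences_rows(batch_size: int, num_return_sequences: int) -> int:
    # Non-beam generation returns one sequence per (input, return) pair.
    return batch_size * num_return_sequences

def beam_scores_rows(batch_size: int, num_beams: int) -> int:
    # Beam-search score tensors carry one row per live beam; the value is
    # independent of num_return_sequences, which only picks beams at the end.
    return batch_size * num_beams

print(sequences_rows(2, 3))    # 6
print(beam_scores_rows(2, 4))  # 8
```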
@@ -981,9 +981,9 @@ class GenerationMixin:
                 shorter if all batches finished early due to the `eos_token_id`.
             scores (`tuple(torch.FloatTensor)`):
                 Transition scores for each vocabulary token at each generation step. Beam transition scores consisting
-                of log probabilities of tokens conditioned on log softmax of previously generated tokens Tuple of
-                `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token), with
-                each tensor of shape `(batch_size*num_beams, config.vocab_size)`.
+                of log probabilities of tokens conditioned on log softmax of previously generated tokens in this beam.
+                Tuple of `torch.FloatTensor` with up to `max_new_tokens` elements (one element for each generated token),
+                with each tensor of shape `(batch_size*num_beams, config.vocab_size)`.
             beam_indices (`torch.LongTensor`, *optional*):
                 Beam indices of generated token id at each generation step. `torch.LongTensor` of shape
                 `(batch_size*num_return_sequences, sequence_length)`. Only required if a `num_beams>1` at
@@ -1251,12 +1251,12 @@ class GenerationMixin:
             inputs (`torch.Tensor` of varying shape depending on the modality, *optional*):
                 The sequence used as a prompt for the generation or as model inputs to the encoder. If `None` the
                 method initializes it with `bos_token_id` and a batch size of 1. For decoder-only models `inputs`
-                should of in the format of `input_ids`. For encoder-decoder models *inputs* can represent any of
+                should be in the format of `input_ids`. For encoder-decoder models *inputs* can represent any of
                 `input_ids`, `input_values`, `input_features`, or `pixel_values`.
             generation_config (`~generation.GenerationConfig`, *optional*):
                 The generation configuration to be used as base parametrization for the generation call. `**kwargs`
                 passed to generate matching the attributes of `generation_config` will override them. If
-                `generation_config` is not provided, the default will be used, which had the following loading
+                `generation_config` is not provided, the default will be used, which has the following loading
                 priority: 1) from the `generation_config.json` model file, if it exists; 2) from the model
                 configuration. Please note that unspecified parameters will inherit [`~generation.GenerationConfig`]'s
                 default values, whose documentation should be checked to parameterize generation.
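The override priority this hunk documents can be sketched in plain dicts (hypothetical `resolve_generation_config` helper, not the transformers implementation): start from the base parametrization, then let matching `**kwargs` win.

```python
def resolve_generation_config(base_config: dict, **kwargs) -> dict:
    """Hypothetical sketch of the priority described above: begin with the
    base parametrization (e.g. loaded from generation_config.json, else the
    model configuration), then override attributes matched by **kwargs."""
    resolved = dict(base_config)
    resolved.update(kwargs)  # kwargs matching config attributes override them
    return resolved

base = {"max_new_tokens": 20, "num_beams": 1, "do_sample": False}
resolved = resolve_generation_config(base, num_beams=4)
print(resolved["num_beams"])       # 4 (overridden by the kwarg)
print(resolved["max_new_tokens"])  # 20 (inherited from the base config)
```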
@@ -1265,7 +1265,7 @@ class GenerationMixin:
                 generation config. If a logit processor is passed that is already created with the arguments or a
                 generation config an error is thrown. This feature is intended for advanced users.
             stopping_criteria (`StoppingCriteriaList`, *optional*):
-                Custom stopping criteria that complement the default stopping criteria built from arguments and a
+                Custom stopping criteria that complements the default stopping criteria built from arguments and a
                 generation config. If a stopping criteria is passed that is already created with the arguments or a
                 generation config an error is thrown. If your stopping criteria depends on the `scores` input, make
                 sure you pass `return_dict_in_generate=True, output_scores=True` to `generate`. This feature is
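The "complements" wording here means custom criteria are checked alongside the defaults rather than replacing them; a minimal plain-Python sketch of that any-criterion-fires behavior (illustrative callables, not the transformers `StoppingCriteria` API):

```python
from typing import Callable, List

def should_stop(criteria: List[Callable[[list], bool]], ids: list) -> bool:
    # Generation halts as soon as ANY criterion, default or custom, fires.
    return any(criterion(ids) for criterion in criteria)

# Hypothetical default and custom criteria over a list of token ids.
max_len_10 = lambda ids: len(ids) >= 10   # default: length limit
contains_eos = lambda ids: 2 in ids       # custom: assume eos_token_id == 2

print(should_stop([max_len_10, contains_eos], [5, 7, 2]))  # True (eos seen)
print(should_stop([max_len_10, contains_eos], [5, 7, 9]))  # False (keep going)
```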
@@ -1295,7 +1295,7 @@ class GenerationMixin:
             negative_prompt_attention_mask (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
                 Attention_mask for `negative_prompt_ids`.
             kwargs (`Dict[str, Any]`, *optional*):
-                Ad hoc parametrization of `generate_config` and/or additional model-specific kwargs that will be
+                Ad hoc parametrization of `generation_config` and/or additional model-specific kwargs that will be
                 forwarded to the `forward` function of the model. If the model is an encoder-decoder model, encoder
                 specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with *decoder_*.
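The *decoder_* prefix convention for encoder-decoder models can be sketched as follows (hypothetical `split_model_kwargs` helper, not the transformers implementation): unprefixed kwargs go to the encoder, `decoder_`-prefixed ones are stripped and routed to the decoder.

```python
def split_model_kwargs(kwargs: dict) -> tuple:
    """Illustrative sketch of the decoder_ prefix routing described above."""
    encoder_kwargs, decoder_kwargs = {}, {}
    for key, value in kwargs.items():
        if key.startswith("decoder_"):
            # Strip the prefix so the decoder sees its natural argument name.
            decoder_kwargs[key[len("decoder_"):]] = value
        else:
            encoder_kwargs[key] = value
    return encoder_kwargs, decoder_kwargs

enc, dec = split_model_kwargs(
    {"attention_mask": "mask", "decoder_input_ids": "ids"}
)
print(enc)  # {'attention_mask': 'mask'}
print(dec)  # {'input_ids': 'ids'}
```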