Unverified commit d344534b, authored Aug 12, 2022 by Stas Bekman, committed by GitHub on Aug 12, 2022

typos (#18594)

Parent: 3cdaea47
Showing 2 changed files with 2 additions and 2 deletions:

src/transformers/generation_utils.py (+1, -1)
src/transformers/models/fsmt/modeling_fsmt.py (+1, -1)
src/transformers/generation_utils.py

@@ -1200,7 +1200,7 @@ class GenerationMixin:
         input_ids_seq_length = input_ids.shape[-1]
         if max_length is None and max_new_tokens is None:
             warnings.warn(
-                "Neither `max_length` nor `max_new_tokens` have been set, `max_length` will default to "
+                "Neither `max_length` nor `max_new_tokens` has been set, `max_length` will default to "
                 f"{self.config.max_length} (`self.config.max_length`). Controlling `max_length` via the config is "
                 "deprecated and `max_length` will be removed from the config in v5 of Transformers -- we recommend "
                 "using `max_new_tokens` to control the maximum length of the generation.",
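For context, the corrected warning steers callers toward `max_new_tokens` instead of the deprecated config-driven `max_length`. A minimal sketch of that recommended call, not part of the commit itself ("gpt2" is only an example checkpoint):

# A minimal sketch: pass `max_new_tokens` directly to `generate()` so the
# deprecated `self.config.max_length` default (and this warning) never
# kicks in. "gpt2" is an example checkpoint, not one named by the commit.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello, world", return_tensors="pt")
# At most 20 tokens are generated beyond the prompt.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))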
src/transformers/models/fsmt/modeling_fsmt.py

@@ -220,7 +220,7 @@ FSMT_INPUTS_DOCSTRING = r"""
         input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
             Indices of input sequence tokens in the vocabulary.
 
-            IIndices can be obtained using [`FSTMTokenizer`]. See [`PreTrainedTokenizer.encode`] and
+            Indices can be obtained using [`FSTMTokenizer`]. See [`PreTrainedTokenizer.encode`] and
             [`PreTrainedTokenizer.__call__`] for details.
 
             [What are input IDs?](../glossary#input-ids)
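The touched docstring tells users to obtain `input_ids` from the tokenizer. A minimal sketch of that, assuming the facebook/wmt19-en-de checkpoint; note the actual library class is `FSMTTokenizer`, while the docstring's `FSTMTokenizer` spelling is a separate typo this commit leaves in place:

# A minimal sketch, assuming the facebook/wmt19-en-de checkpoint: build
# `input_ids` via the tokenizer's `__call__`, as the docstring describes.
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

tokenizer = FSMTTokenizer.from_pretrained("facebook/wmt19-en-de")
model = FSMTForConditionalGeneration.from_pretrained("facebook/wmt19-en-de")

# `input_ids` has shape `(batch_size, sequence_length)`, per the docstring.
inputs = tokenizer("Machine learning is great, isn't it?", return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))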