chenpangpang / transformers

Unverified commit 48706c71, authored Apr 06, 2023 by Joao Gante, committed by GitHub on Apr 06, 2023

Seq2SeqTrainer: use unwrapped model to retrieve the generation config (#22584)

Parent: 0aa1153f
Showing 1 changed file with 1 addition and 1 deletion (+1, -1).

src/transformers/trainer_seq2seq.py (view file @ 48706c71)
@@ -277,7 +277,7 @@ class Seq2SeqTrainer(Trainer):
             self.model.generation_config._from_model_config = False

         # Retrieves GenerationConfig from model.generation_config
-        gen_config = model.generation_config
+        gen_config = self.model.generation_config
         # in case the batch is shorter than max length, the output should be padded
         if generated_tokens.shape[-1] < gen_config.max_length:
             generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_config.max_length)
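For context, a minimal sketch (not part of the commit) of why reading the generation config from the unwrapped self.model matters: under a wrapper such as torch.nn.DataParallel, the model handed to the prediction step can be the wrapper object, which does not proxy arbitrary attributes like generation_config; self.model is the unwrapped model the Trainer keeps. The t5-small checkpoint below is only an illustrative choice.

# Sketch (assumption, not from the commit): wrapped vs. unwrapped attribute access.
import torch
from transformers import AutoModelForSeq2SeqLM

base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # illustrative checkpoint
wrapped_model = torch.nn.DataParallel(base_model)               # stand-in for a distributed wrapper

# The base model exposes its GenerationConfig directly ...
print(hasattr(base_model, "generation_config"))     # True

# ... but the wrapper does not forward this attribute; it lives on wrapped_model.module.
print(hasattr(wrapped_model, "generation_config"))  # False

# Seq2SeqTrainer keeps the unwrapped model on self.model, so
# self.model.generation_config works regardless of wrapping.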