chenpangpang / transformers · Commits

Commit 3a2ed967 (unverified), authored Feb 10, 2022 by NielsRogge, committed by GitHub on Feb 10, 2022
Parent: 724e51c6

Fix Seq2SeqTrainer (#15603)

Co-authored-by: Niels Rogge <nielsrogge@Nielss-MBP.localdomain>
Changes: 1 changed file with 3 additions and 1 deletion (+3 −1)
src/transformers/trainer_seq2seq.py  (+3 −1, view file @ 3a2ed967)

@@ -161,6 +161,9 @@ class Seq2SeqTrainer(Trainer):
             "synced_gpus": True if is_deepspeed_zero3_enabled() else False,
         }
 
+        if "attention_mask" in inputs:
+            gen_kwargs["attention_mask"] = inputs.get("attention_mask", None)
+
         # prepare generation inputs
         # some encoder-decoder models can have varying encoder's and thus
         # varying model input names
@@ -171,7 +174,6 @@ class Seq2SeqTrainer(Trainer):
         generated_tokens = self.model.generate(
             generation_inputs,
-            attention_mask=inputs.get("attention_mask", None),
             **gen_kwargs,
         )
         # in case the batch is shorter than max length, the output should be padded
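The net effect of the two hunks is that the attention mask is no longer hard-coded as a keyword argument to generate(); it is added to gen_kwargs only when the batch actually contains one. Below is a minimal sketch of the resulting call pattern outside the Trainer, using an assumed T5 checkpoint and batch purely for illustration; only the gen_kwargs handling and the generate() call mirror the diff, the rest is assumed setup.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed setup for illustration; any encoder-decoder checkpoint would do.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer(["translate English to German: Hello"], return_tensors="pt")

# Generation arguments are collected in a single dict, as in the diff.
gen_kwargs = {"max_length": 64, "num_beams": 4}

# The attention mask is forwarded only if the batch provides one, instead of
# always being passed as an explicit keyword argument to generate().
if "attention_mask" in inputs:
    gen_kwargs["attention_mask"] = inputs.get("attention_mask", None)

generated_tokens = model.generate(inputs["input_ids"], **gen_kwargs)
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))

Guarding on "attention_mask" in inputs matters because, as the comment in the first hunk notes, some encoder-decoder models have a different encoder main input name and their batches may not carry an attention_mask at all; the explicit keyword removed in the second hunk would have passed one (possibly None) unconditionally.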