chenpangpang / transformers · Commit e16cbe88 (unverified)
Authored Mar 13, 2023 by Joao Gante; committed via GitHub on Mar 13, 2023
Parent: d979cf6e

Trainer: let generate pick its inputs (#22108)

* Let generate pick its inputs
* fix squad seq2seq example
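As context for the change, here is a minimal sketch of the before/after behavior of the generation step. It is an illustration, not the Trainer's actual code: `model`, `inputs`, and `gen_kwargs` are stand-in names, and the real change is in the diff below.

```python
# Sketch of the behavioral change in this commit (illustrative, not Trainer code).

def generate_before(model, inputs, gen_kwargs):
    # Old behavior: hand-pick the main input tensor and copy selected keys
    # (attention_mask, global_attention_mask) into gen_kwargs.
    if "attention_mask" in inputs:
        gen_kwargs["attention_mask"] = inputs["attention_mask"]
    main_input = inputs[model.main_input_name]  # e.g. "input_ids"
    return model.generate(main_input, **gen_kwargs)

def generate_after(model, inputs, gen_kwargs):
    # New behavior: drop decoder_input_ids (see the TODO in the diff) and pass
    # everything else through, letting generate() pick the inputs it needs.
    inputs = {k: v for k, v in inputs.items() if k != "decoder_input_ids"}
    return model.generate(**inputs, **gen_kwargs)
```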
Showing 1 changed file with 4 additions and 16 deletions:
src/transformers/trainer_seq2seq.py (+4, −16)
src/transformers/trainer_seq2seq.py @ e16cbe88
...
@@ -182,23 +182,11 @@ class Seq2SeqTrainer(Trainer):
             gen_kwargs["synced_gpus"] if gen_kwargs.get("synced_gpus") is not None else default_synced_gpus
         )

-        if "attention_mask" in inputs:
-            gen_kwargs["attention_mask"] = inputs.get("attention_mask", None)
-        if "global_attention_mask" in inputs:
-            gen_kwargs["global_attention_mask"] = inputs.get("global_attention_mask", None)
-
-        # prepare generation inputs
-        # some encoder-decoder models can have varying encoder's and thus
-        # varying model input names
-        if hasattr(self.model, "encoder") and self.model.encoder.main_input_name != self.model.main_input_name:
-            generation_inputs = inputs[self.model.encoder.main_input_name]
-        else:
-            generation_inputs = inputs[self.model.main_input_name]
-
-        generated_tokens = self.model.generate(
-            generation_inputs,
-            **gen_kwargs,
-        )
+        # TODO (Joao): the following line is needed to keep a consistent result on SQUAD. Ideally, we should not block
+        # users from preparing a dataset with `decoder_input_ids`.
+        inputs = {k: v for k, v in inputs.items() if k != "decoder_input_ids"}
+        generated_tokens = self.model.generate(**inputs, **gen_kwargs)

         # Temporary hack to ensure the generation config is not initialized for each iteration of the evaluation loop
         # TODO: remove this hack when the legacy code that initializes generation_config from a model config is
         # removed in https://github.com/huggingface/transformers/blob/98d88b23f54e5a23e741833f1e973fdf600cc2c5/src/transformers/generation/utils.py#L1183
...
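For reference, a hedged sketch of how this `prediction_step` path is typically exercised. The checkpoint name, batch size, and output directory below are illustrative assumptions, not part of the commit:

```python
# Usage sketch (assumptions: "t5-small" checkpoint, a pre-tokenized eval dataset).
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained("t5-small")

args = Seq2SeqTrainingArguments(
    output_dir="out",
    per_device_eval_batch_size=8,
    predict_with_generate=True,  # routes evaluation through prediction_step -> model.generate
)
trainer = Seq2SeqTrainer(model=model, args=args, tokenizer=tokenizer)

# With this commit, trainer.evaluate(...) / trainer.predict(...) forward the full
# (decoder_input_ids-filtered) inputs dict to generate(), instead of only the
# model's main input tensor plus hand-copied attention masks.
```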