chenpangpang / transformers · Commit f456b4d1 (unverified)

Generate: generation config validation fixes in docs (#25405)

Authored Aug 09, 2023 by Joao Gante; committed by GitHub on Aug 09, 2023
Parent: 00b93cda

Showing 2 changed files with 1 addition and 7 deletions:
- docs/source/en/model_doc/donut.md (+0 -6)
- src/transformers/generation/configuration_utils.py (+1 -1)
docs/source/en/model_doc/donut.md

@@ -80,11 +80,9 @@ into a single instance to both extract the input features and decode the predict
     pixel_values.to(device),
     decoder_input_ids=decoder_input_ids.to(device),
     max_length=model.decoder.config.max_position_embeddings,
-    early_stopping=True,
     pad_token_id=processor.tokenizer.pad_token_id,
     eos_token_id=processor.tokenizer.eos_token_id,
     use_cache=True,
-    num_beams=1,
     bad_words_ids=[[processor.tokenizer.unk_token_id]],
     return_dict_in_generate=True,
 )

@@ -125,11 +123,9 @@ into a single instance to both extract the input features and decode the predict
     pixel_values.to(device),
     decoder_input_ids=decoder_input_ids.to(device),
     max_length=model.decoder.config.max_position_embeddings,
-    early_stopping=True,
     pad_token_id=processor.tokenizer.pad_token_id,
     eos_token_id=processor.tokenizer.eos_token_id,
     use_cache=True,
-    num_beams=1,
     bad_words_ids=[[processor.tokenizer.unk_token_id]],
     return_dict_in_generate=True,
 )

@@ -172,11 +168,9 @@ into a single instance to both extract the input features and decode the predict
     pixel_values.to(device),
     decoder_input_ids=decoder_input_ids.to(device),
     max_length=model.decoder.config.max_position_embeddings,
-    early_stopping=True,
     pad_token_id=processor.tokenizer.pad_token_id,
     eos_token_id=processor.tokenizer.eos_token_id,
     use_cache=True,
-    num_beams=1,
     bad_words_ids=[[processor.tokenizer.unk_token_id]],
     return_dict_in_generate=True,
 )
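The six deleted lines in donut.md all set `early_stopping=True` together with `num_beams=1`. `num_beams=1` is already the default, and `early_stopping` is a beam-search stopping criterion, so with a single beam it has no effect; the generation-config validation this commit's docs fixes relate to warns about such inert combinations. A minimal sketch of that kind of check, with a hypothetical helper name (`inert_generation_flags`), not the actual transformers implementation:

```python
def inert_generation_flags(num_beams=1, early_stopping=False):
    """Return the names of generation arguments that have no effect.

    Hypothetical illustration of the validation idea: `early_stopping`
    is only consulted by beam search, so with `num_beams == 1` it is inert.
    """
    inert = []
    if early_stopping and num_beams == 1:
        # early stopping controls when beam search halts; one beam means no beam search
        inert.append("early_stopping")
    return inert

# The old Donut snippet passed exactly this combination:
print(inert_generation_flags(num_beams=1, early_stopping=True))  # ['early_stopping']
print(inert_generation_flags(num_beams=4, early_stopping=True))  # []
```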
src/transformers/generation/configuration_utils.py

@@ -597,7 +597,7 @@ class GenerationConfig(PushToHubMixin):
 >>> # If you'd like to try a minor variation to an existing configuration, you can also pass generation
 >>> # arguments to `.from_pretrained()`. Be mindful that typos and unused arguments will be ignored
 >>> generation_config, unused_kwargs = GenerationConfig.from_pretrained(
-...     "gpt2", top_k=1, foo=False, return_unused_kwargs=True
+...     "gpt2", top_k=1, foo=False, do_sample=True, return_unused_kwargs=True
 ... )
 >>> generation_config.top_k
 1
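The docstring change adds `do_sample=True` alongside `top_k=1`: sampling parameters such as `top_k` are only used when sampling is enabled, so the old example combined a sampling flag with greedy decoding and would trip a validation warning. A rough sketch of that consistency rule, using a hypothetical function name and a deliberately simplified flag list:

```python
def unused_sampling_flags(do_sample=False, top_k=None, top_p=None):
    """Return sampling flags that are set but ignored because `do_sample=False`.

    Hypothetical simplification of the greedy-vs-sampling consistency check
    that the fixed docstring example now satisfies.
    """
    flags = {"top_k": top_k, "top_p": top_p}
    return [name for name, value in flags.items()
            if value is not None and not do_sample]

print(unused_sampling_flags(top_k=1))                  # ['top_k']  (old example)
print(unused_sampling_flags(do_sample=True, top_k=1))  # []        (fixed example)
```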