Commit 480799f7 (unverified)
Authored Jan 05, 2023 by Joao Gante; committed by GitHub on Jan 05, 2023
Generate: post-generate config TF doctest fix (#21018)
Parent: 8fb4d0e4
Showing 1 changed file with 6 additions and 6 deletions
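For context on the fix: since the generation-config refactor, generation defaults such as pad_token_id and eos_token_id live on model.generation_config (a GenerationConfig instance) rather than on model.config, so the doctests below switch to the new attribute. A minimal sketch of where the values now come from, assuming a transformers install recent enough to ship GenerationConfig plus a TensorFlow backend (illustrative only, not part of the commit):

from transformers import TFAutoModelForCausalLM

model = TFAutoModelForCausalLM.from_pretrained("gpt2")

# Generation defaults are kept on a separate GenerationConfig object.
print(type(model.generation_config).__name__)  # GenerationConfig
print(model.generation_config.eos_token_id)    # 50256, mirrors model.config.eos_token_id

# The pattern the doctests now use: set pad_token_id on generation_config, not on model.config.
model.generation_config.pad_token_id = model.generation_config.eos_token_id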
src/transformers/generation/tf_utils.py  (+6 / -6)
@@ -1362,7 +1362,7 @@ class TFGenerationMixin:
 >>> model = TFAutoModelForCausalLM.from_pretrained("gpt2")
 >>> # set pad_token_id to eos_token_id because GPT2 does not have a PAD token
->>> model.config.pad_token_id = model.config.eos_token_id
+>>> model.generation_config.pad_token_id = model.generation_config.eos_token_id
 >>> input_prompt = "Today is a beautiful day, and"
 >>> input_ids = tokenizer(input_prompt, return_tensors="tf").input_ids
@@ -1370,7 +1370,7 @@ class TFGenerationMixin:
 >>> # instantiate logits processors
 >>> logits_processor = TFLogitsProcessorList(
 ...     [
-...         TFMinLengthLogitsProcessor(15, eos_token_id=model.config.eos_token_id),
+...         TFMinLengthLogitsProcessor(15, eos_token_id=model.generation_config.eos_token_id),
 ...     ]
 ... )
@@ -1629,7 +1629,7 @@ class TFGenerationMixin:
 >>> model = TFAutoModelForCausalLM.from_pretrained("gpt2")
 >>> # set pad_token_id to eos_token_id because GPT2 does not have a EOS token
->>> model.config.pad_token_id = model.config.eos_token_id
+>>> model.generation_config.pad_token_id = model.generation_config.eos_token_id
 >>> input_prompt = "Today is a beautiful day, and"
 >>> input_ids = tokenizer(input_prompt, return_tensors="tf").input_ids
@@ -1637,7 +1637,7 @@ class TFGenerationMixin:
 >>> # instantiate logits processors
 >>> logits_processor = TFLogitsProcessorList(
 ...     [
-...         TFMinLengthLogitsProcessor(15, eos_token_id=model.config.eos_token_id),
+...         TFMinLengthLogitsProcessor(15, eos_token_id=model.generation_config.eos_token_id),
 ...     ]
 ... )
 >>> # instantiate logits processors
@@ -1947,7 +1947,7 @@ class TFGenerationMixin:
 >>> num_beams = 3
 >>> # define decoder start token ids
 >>> input_ids = tf.ones((1, num_beams, 1), dtype=tf.int32)
->>> input_ids = input_ids * model.config.decoder_start_token_id
+>>> input_ids = input_ids * model.generation_config.decoder_start_token_id
 >>> # add encoder_outputs to model keyword arguments
 >>> encoder_outputs = model.get_encoder()(encoder_input_ids, return_dict=True)
@@ -1958,7 +1958,7 @@ class TFGenerationMixin:
 >>> # instantiate logits processors
 >>> logits_processor = TFLogitsProcessorList(
-...     [TFMinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id)]
+...     [TFMinLengthLogitsProcessor(5, eos_token_id=model.generation_config.eos_token_id)]
 ... )
 >>> outputs = model.beam_search(input_ids, logits_processor=logits_processor, **model_kwargs)
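The same attribute switch applied end to end: a hedged usage sketch that mirrors the updated logits-processor doctests but drives decoding through generate() instead of calling greedy_search/beam_search directly (the argument choices here are my own illustration, not lines from this diff):

from transformers import (
    AutoTokenizer,
    TFAutoModelForCausalLM,
    TFLogitsProcessorList,
    TFMinLengthLogitsProcessor,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")

# GPT2 has no PAD token, so reuse the EOS token id -- read from generation_config, as in the fix.
model.generation_config.pad_token_id = model.generation_config.eos_token_id

input_ids = tokenizer("Today is a beautiful day, and", return_tensors="tf").input_ids

# Block the EOS token until the sequence is at least 15 tokens long, matching the doctests' processor setup.
logits_processor = TFLogitsProcessorList(
    [TFMinLengthLogitsProcessor(15, eos_token_id=model.generation_config.eos_token_id)]
)

outputs = model.generate(input_ids, logits_processor=logits_processor, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))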