chenpangpang / transformers · Commits · b809d2f0
"...git@developer.sourcefind.cn:chenpangpang/transformers.git" did not exist on "855ff0e91d8b3bd75a3b1c1316e2efd814373764"
Unverified commit b809d2f0, authored Apr 05, 2020 by Patrick von Platen, committed via GitHub on Apr 05, 2020
Fix TF T5 docstring (#3636)
Parent: 4ab8ab4f
Showing 1 changed file with 2 additions and 2 deletions.
src/transformers/modeling_tf_t5.py  +2 −2
@@ -731,7 +731,7 @@ class TFT5Model(TFT5PreTrainedModel):
     tokenizer = T5Tokenizer.from_pretrained('t5-small')
     model = TFT5Model.from_pretrained('t5-small')
     input_ids = tokenizer.encode("Hello, my dog is cute", return_tensors="tf")  # Batch size 1
-    outputs = model(input_ids, input_ids=input_ids)
+    outputs = model(input_ids, decoder_input_ids=input_ids)
     last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple
     """
@@ -829,7 +829,7 @@ class TFT5ForConditionalGeneration(TFT5PreTrainedModel):
     tokenizer = T5Tokenizer.from_pretrained('t5-small')
     model = TFT5ForConditionalGeneration.from_pretrained('t5-small')
     input_ids = tokenizer.encode("Hello, my dog is cute", return_tensors="tf")  # Batch size 1
-    outputs = model(input_ids, input_ids=input_ids)
+    outputs = model(input_ids, decoder_input_ids=input_ids)
     prediction_scores = outputs[0]
     tokenizer = T5Tokenizer.from_pretrained('t5-small')
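For context, the fix replaces the duplicated input_ids keyword in the docstring examples with decoder_input_ids, which the TF T5 models expect because T5 is an encoder-decoder. Below is a minimal, self-contained sketch of the corrected examples; it is not part of the commit itself and assumes a transformers release contemporary with this change (circa v2.8), where model calls return plain tuples and the 't5-small' weights are fetched from the model hub.

# Minimal sketch of the corrected docstring examples (not part of the commit diff).
# Assumes a transformers version contemporary with this commit (~v2.8) on TensorFlow 2,
# where model calls return plain tuples.
from transformers import T5Tokenizer, TFT5Model, TFT5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('t5-small')
input_ids = tokenizer.encode("Hello, my dog is cute", return_tensors="tf")  # Batch size 1

# TFT5Model: T5 is an encoder-decoder, so the second set of ids must be passed
# as decoder_input_ids rather than repeating the input_ids keyword (the bug this commit fixes).
model = TFT5Model.from_pretrained('t5-small')
outputs = model(input_ids, decoder_input_ids=input_ids)
last_hidden_states = outputs[0]  # last decoder hidden-state, first element of the output tuple

# TFT5ForConditionalGeneration: the same correction applies; the first output is the LM logits.
lm_model = TFT5ForConditionalGeneration.from_pretrained('t5-small')
lm_outputs = lm_model(input_ids, decoder_input_ids=input_ids)
prediction_scores = lm_outputs[0]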