Commit 86c6f8a8 in chenpangpang/transformers (unverified)
Authored Mar 25, 2021 by lexhuismans; committed by GitHub on Mar 25, 2021

Fix comment (#10886)

Parent commit: 9856c921
Showing 1 changed file with 1 addition and 0 deletions:

src/transformers/models/t5/modeling_t5.py (+1, -0)
```diff
@@ -904,6 +904,7 @@ class T5Stack(T5PreTrainedModel):
         if past_key_values is None:
             past_key_values = [None] * len(self.block)
 
+        # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
         # ourselves in which case we just need to make it broadcastable to all heads.
         extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape, inputs_embeds.device)
```
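The unchanged context lines show how `T5Stack` seeds its per-layer cache: one `past_key_values` slot per entry in `self.block`, so each decoder layer can index its own cached keys/values. Below is a minimal sketch of that pattern with a toy stack standing in for the real T5 blocks; the `TinyStack` class, layer type, and shapes are illustrative assumptions, not the library's API.

```python
import torch
from torch import nn

class TinyStack(nn.Module):
    """Toy stand-in for T5Stack's per-block cache seeding (illustrative only)."""

    def __init__(self, num_blocks: int = 4, d_model: int = 8):
        super().__init__()
        # T5Stack keeps its transformer layers in self.block (an nn.ModuleList).
        self.block = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(num_blocks))

    def forward(self, hidden_states, past_key_values=None):
        if past_key_values is None:
            # One cache slot per layer; layer i will read past_key_values[i].
            past_key_values = [None] * len(self.block)
        for layer, layer_past in zip(self.block, past_key_values):
            # A real T5 block would consume layer_past here and emit a new cache entry.
            hidden_states = layer(hidden_states)
        return hidden_states

stack = TinyStack()
out = stack(torch.randn(2, 5, 8))  # no cache passed in -> list of Nones is created
print(out.shape)  # torch.Size([2, 5, 8])
```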
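The comment the commit restores describes what `get_extended_attention_mask` does with a user-supplied mask: a 2D padding mask of shape `[batch_size, to_seq_length]`, or a 3D self-attention mask of shape `[batch_size, from_seq_length, to_seq_length]`, is reshaped so it broadcasts across all attention heads and then converted into additive attention logits. A hedged sketch of that behavior follows; it is not the library's exact implementation, and the `-10000.0` fill value mirrors the transformers code of this era but varies by version.

```python
import torch

def extend_attention_mask(attention_mask: torch.Tensor) -> torch.Tensor:
    """Sketch: make a mask broadcastable to [batch, num_heads, from_seq, to_seq]."""
    if attention_mask.dim() == 3:
        # [batch, from_seq, to_seq] -> [batch, 1, from_seq, to_seq]
        extended = attention_mask[:, None, :, :]
    elif attention_mask.dim() == 2:
        # [batch, to_seq] -> [batch, 1, 1, to_seq]
        extended = attention_mask[:, None, None, :]
    else:
        raise ValueError(f"unsupported mask rank: {attention_mask.dim()}")
    # Kept positions (1) map to 0.0, masked positions (0) to a large negative
    # number, so adding this to raw attention scores removes masked tokens
    # after the softmax.
    return (1.0 - extended.float()) * -10000.0

mask = torch.tensor([[1, 1, 1, 1], [1, 1, 0, 0]])  # second sequence is padded
print(extend_attention_mask(mask).shape)  # torch.Size([2, 1, 1, 4])
```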