transformers · Commit 04f46a22 (unverified)

Authored Jun 28, 2023 by MS Kim (tony9402); committed by GitHub on Jun 27, 2023

Fix Typo (#24530)

* Fix Typo
* Fix all copies

parent 462f77cb
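The second bullet, "Fix all copies", refers to the repository's copy-consistency tooling: the encoder layers in the other four files are declared as copies of `BartEncoderLayer`, so `make fix-copies` propagates the Bart docstring fix to each of them. A minimal sketch of the marker involved, assuming the usual `# Copied from` convention; the exact replacement suffix is illustrative, not taken from this diff:

```python
import torch.nn as nn

# Copied from transformers.models.bart.modeling_bart.BartEncoderLayer with Bart->Marian
class MarianEncoderLayer(nn.Module):
    ...  # body kept in sync with BartEncoderLayer by `make fix-copies`
```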
Changes: 5 changed files with 5 additions and 5 deletions (+5 -5)

src/transformers/models/bart/modeling_bart.py                                        +1 -1
src/transformers/models/blenderbot_small/modeling_blenderbot_small.py                +1 -1
src/transformers/models/marian/modeling_marian.py                                    +1 -1
src/transformers/models/plbart/modeling_plbart.py                                    +1 -1
src/transformers/models/time_series_transformer/modeling_time_series_transformer.py  +1 -1
src/transformers/models/bart/modeling_bart.py

@@ -319,7 +319,7 @@ class BartEncoderLayer(nn.Module):
     ) -> Tuple[torch.FloatTensor, Optional[torch.FloatTensor]]:
         """
         Args:
-            hidden_states (`torch.FloatTensor`): input to the layer of shape `(seq_len, batch, embed_dim)`
+            hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
             attention_mask (`torch.FloatTensor`): attention mask of size
                 `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
             layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size
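The corrected docstring matches the layer's actual batch-first convention. A minimal sketch checking the shape, assuming `torch` and a `transformers` version around this commit are installed; the tensor sizes are arbitrary:

```python
import torch
from transformers import BartConfig
from transformers.models.bart.modeling_bart import BartEncoderLayer

config = BartConfig()  # default d_model is 1024
layer = BartEncoderLayer(config)

batch, seq_len = 2, 16
# Input follows the corrected docstring: (batch, seq_len, embed_dim).
hidden_states = torch.randn(batch, seq_len, config.d_model)

# attention_mask would be (batch, 1, tgt_len, src_len); None skips masking.
outputs = layer(hidden_states, attention_mask=None, layer_head_mask=None)
print(outputs[0].shape)  # torch.Size([2, 16, 1024])
```

The same check applies unchanged to the other four encoder layers, since they are copies of this one.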
src/transformers/models/blenderbot_small/modeling_blenderbot_small.py

@@ -304,7 +304,7 @@ class BlenderbotSmallEncoderLayer(nn.Module):
     ) -> Tuple[torch.FloatTensor, Optional[torch.FloatTensor]]:
         """
         Args:
-            hidden_states (`torch.FloatTensor`): input to the layer of shape `(seq_len, batch, embed_dim)`
+            hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
             attention_mask (`torch.FloatTensor`): attention mask of size
                 `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
             layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size
src/transformers/models/marian/modeling_marian.py

@@ -322,7 +322,7 @@ class MarianEncoderLayer(nn.Module):
     ) -> Tuple[torch.FloatTensor, Optional[torch.FloatTensor]]:
         """
         Args:
-            hidden_states (`torch.FloatTensor`): input to the layer of shape `(seq_len, batch, embed_dim)`
+            hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
             attention_mask (`torch.FloatTensor`): attention mask of size
                 `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
             layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size
src/transformers/models/plbart/modeling_plbart.py

@@ -315,7 +315,7 @@ class PLBartEncoderLayer(nn.Module):
     ) -> Tuple[torch.FloatTensor, Optional[torch.FloatTensor]]:
         """
         Args:
-            hidden_states (`torch.FloatTensor`): input to the layer of shape `(seq_len, batch, embed_dim)`
+            hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
             attention_mask (`torch.FloatTensor`): attention mask of size
                 `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
             layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size
src/transformers/models/time_series_transformer/modeling_time_series_transformer.py

@@ -484,7 +484,7 @@ class TimeSeriesTransformerEncoderLayer(nn.Module):
     ) -> Tuple[torch.FloatTensor, Optional[torch.FloatTensor]]:
         """
         Args:
-            hidden_states (`torch.FloatTensor`): input to the layer of shape `(seq_len, batch, embed_dim)`
+            hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
             attention_mask (`torch.FloatTensor`): attention mask of size
                 `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
             layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size