gaoqiong / flash-attention · Commits · 3566596a

Unverified commit 3566596a, authored Nov 09, 2023 by Antony Frolov, committed by GitHub on Nov 09, 2023
Fix typo in RotaryEmbedding forward output type (#666)
parent 83aef842
Showing 1 changed file with 1 addition and 1 deletion
flash_attn/layers/rotary.py @ 3566596a
@@ -417,7 +417,7 @@ class RotaryEmbedding(torch.nn.Module):
         kv: Optional[torch.Tensor] = None,
         seqlen_offset: Union[int, torch.Tensor] = 0,
         max_seqlen: Optional[int] = None,
-    ) -> Tuple[torch.Tensor, torch.Tensor]:
+    ) -> Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]:
         """
         qkv: (batch, seqlen, 3, nheads, headdim) if kv is none,
         else it's just q of shape (batch, seqlen, nheads, headdim)
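Why the corrected annotation is a Union rather than always a Tuple: per the docstring above, forward accepts either a packed qkv tensor (with kv=None, returning a single rotated tensor) or a separate q plus kv (returning a rotated pair), so the return type must cover both arms. A minimal sketch of the two call patterns, assuming the constructor takes the per-head rotary dimension as dim and that kv is packed as (batch, seqlen, 2, nheads, headdim); the example tensor sizes are illustrative only:

import torch
from flash_attn.layers.rotary import RotaryEmbedding

# Assumed constructor argument: the per-head rotary dimension.
rotary = RotaryEmbedding(dim=64)

# Packed case: qkv of shape (batch, seqlen, 3, nheads, headdim), kv=None.
# forward returns one rotated qkv tensor -- the torch.Tensor arm of the Union.
# (The fused rotary kernel expects a CUDA half/bfloat16 tensor.)
qkv = torch.randn(2, 128, 3, 8, 64, device="cuda", dtype=torch.float16)
out = rotary(qkv)  # torch.Tensor

# Split case: q of shape (batch, seqlen, nheads, headdim) plus a separate kv
# (assumed shape (batch, seqlen, 2, nheads, headdim)). forward returns the
# rotated pair -- the Tuple[torch.Tensor, torch.Tensor] arm of the Union.
q = torch.randn(2, 128, 8, 64, device="cuda", dtype=torch.float16)
kv = torch.randn(2, 128, 2, 8, 64, device="cuda", dtype=torch.float16)
q_out, kv_out = rotary(q, kv=kv)  # Tuple[torch.Tensor, torch.Tensor]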