gaoqiong / flash-attention · Commits
Commit a8fec99a authored Nov 13, 2022 by Tri Dao
Skip flash_attn_split test
parent 9d3116ad
Showing 1 changed file with 1 addition and 0 deletions
tests/test_flash_attn.py (+1, -0)
@@ -625,6 +625,7 @@ def test_flash_attn_unpadded(seqlen, d, dropout_p, causal, dtype):
     # assert torch.allclose(dv, dv_ref, rtol=rtol, atol=atol)
+@pytest.mark.skipif(True, reason='Experimental, not being used')
 @pytest.mark.parametrize('dtype', ([torch.float16] if is_sm75 else [torch.float16, torch.bfloat16]))
 # @pytest.mark.parametrize('dtype', [torch.float16])
 @pytest.mark.parametrize('causal', [False, True])
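
The added line disables the experimental flash_attn_split test by marking it with an unconditional pytest skip. As a minimal sketch of how pytest.mark.skipif(True, ...) behaves (the test name and body below are illustrative, not taken from this repository):

# Minimal sketch of an unconditional pytest skip; the test name and body are
# hypothetical and only illustrate the decorator used in this commit.
import pytest

@pytest.mark.skipif(True, reason='Experimental, not being used')
def test_always_skipped():
    # pytest collects this test but never runs it; the run report lists it as
    # skipped with the reason 'Experimental, not being used'.
    assert False

With this marker in place, pytest still collects the decorated test but reports it as skipped rather than running it, so the experimental code path stays in the suite without affecting test results.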