OpenDAS / dgl · Commit 364cb718 (Unverified)

Authored Feb 21, 2024 by Andrei Ivanov; committed by GitHub on Feb 21, 2024.
Skip test when atomic operations are not supported on GPU. (#7117)
Parent: 938deec8

1 changed file with 15 additions and 15 deletions:
tests/python/common/ops/test_ops.py (+15, -15)
--- a/tests/python/common/ops/test_ops.py
+++ b/tests/python/common/ops/test_ops.py
@@ -407,22 +407,22 @@ def test_segment_mm(idtype, feat_size, dtype, tol):
 def test_gather_mm_idx_b(feat_size, dtype, tol):
     if F._default_context_str == "cpu" and dtype == torch.float16:
         pytest.skip("float16 is not supported on CPU.")
-    if (
-        F._default_context_str == "gpu"
-        and dtype == torch.bfloat16
-        and not torch.cuda.is_bf16_supported()
-    ):
-        pytest.skip("BF16 is not supported.")
-    if (
-        F._default_context_str == "gpu"
-        and dtype == torch.float16
-        and torch.cuda.get_device_capability() < (7, 0)
-    ):
-        pytest.skip(
-            f"FP16 is not supported for atomic operations on GPU with "
-            f"cuda capability ({torch.cuda.get_device_capability()})."
-        )
+    if F._default_context_str == "gpu":
+        if dtype == torch.bfloat16 and not torch.cuda.is_bf16_supported():
+            pytest.skip("BF16 is not supported.")
+
+        if (
+            dtype == torch.float16
+            and torch.cuda.get_device_capability() < (7, 0)
+        ) or (
+            dtype == torch.bfloat16
+            and torch.cuda.get_device_capability() < (8, 0)
+        ):
+            pytest.skip(
+                f"{dtype} is not supported for atomic operations on GPU with "
+                f"cuda capability ({torch.cuda.get_device_capability()})."
+            )
     dev = F.ctx()
     # input
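The capability thresholds in the new condition encode the hardware floor for half-precision atomics: the test now skips float16 below CUDA compute capability (7, 0) and bfloat16 below (8, 0), and the skip message is generalized to report whichever dtype triggered it. Below is a minimal, self-contained sketch of the same gating pattern outside the DGL test harness, assuming a CUDA build of PyTorch; the helper name skip_if_atomics_unsupported is hypothetical and not part of DGL or this commit.

import pytest
import torch


def skip_if_atomics_unsupported(dtype):
    """Illustrative helper mirroring the skip logic this commit adds."""
    if not torch.cuda.is_available():
        pytest.skip("CUDA is not available.")
    # bf16 may be unsupported outright, independent of atomics.
    if dtype == torch.bfloat16 and not torch.cuda.is_bf16_supported():
        pytest.skip("BF16 is not supported.")
    cap = torch.cuda.get_device_capability()
    # Same thresholds as the commit: FP16 atomics need capability >= (7, 0),
    # BF16 atomics need >= (8, 0); older devices skip instead of failing.
    if (dtype == torch.float16 and cap < (7, 0)) or (
        dtype == torch.bfloat16 and cap < (8, 0)
    ):
        pytest.skip(
            f"{dtype} is not supported for atomic operations on GPU "
            f"with cuda capability ({cap})."
        )

The bare tuple comparison works because Python orders tuples lexicographically, so a Pascal-era device reporting (6, 1) compares less than (7, 0) as intended.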