jerrrrry / infinicore

Commit e00e65e2, authored Nov 22, 2025 by zhuyue
Issue/658 - Update test tolerances and remove device-specific dtype filters.
parent 3aeee034

Showing 3 changed files with 3 additions and 6 deletions:

- test/infinicore/ops/matmul.py (+1, -1)
- test/infiniop/libinfiniop/utils.py (+1, -4)
- test/infiniop/silu.py (+1, -1)
test/infinicore/ops/matmul.py

@@ -33,7 +33,7 @@ _TEST_CASES_DATA = [
 # Tolerance configuration
 _TOLERANCE_MAP = {
     infinicore.float16: {"atol": 0, "rtol": 1e-2},
-    infinicore.float32: {"atol": 0, "rtol": 1e-3},
+    infinicore.float32: {"atol": 1e-4, "rtol": 1e-3},
     infinicore.bfloat16: {"atol": 0, "rtol": 5e-2},
 }
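The practical effect of giving float32 a nonzero `atol`: with `rtol` alone, any nonzero error is rejected whenever the reference value is near zero. A minimal sketch of the `atol + rtol * |desired|` criterion used by NumPy/PyTorch-style closeness checks (whether infinicore's comparison uses exactly this formula is an assumption; the values are illustrative, not from the test suite):

```python
def within_tolerance(actual, desired, atol=0.0, rtol=1e-3):
    # NumPy/PyTorch-style criterion: |actual - desired| <= atol + rtol * |desired|
    return abs(actual - desired) <= atol + rtol * abs(desired)

# Near a zero reference, rtol alone rejects even a tiny rounding error...
print(within_tolerance(1e-6, 0.0, atol=0.0, rtol=1e-3))   # False
# ...while atol=1e-4 absorbs it.
print(within_tolerance(1e-6, 0.0, atol=1e-4, rtol=1e-3))  # True
```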
test/infiniop/libinfiniop/utils.py

@@ -463,11 +463,8 @@ def debug(actual, desired, atol=0, rtol=1e-2, equal_nan=False, verbose=True):
 def filter_tensor_dtypes_by_device(device, tensor_dtypes):
-    if device in (InfiniDeviceEnum.CPU, InfiniDeviceEnum.NVIDIA):
+    if device in (InfiniDeviceEnum.CPU, InfiniDeviceEnum.NVIDIA, InfiniDeviceEnum.METAX, InfiniDeviceEnum.ASCEND, InfiniDeviceEnum.ILUVATAR, InfiniDeviceEnum.CAMBRICON):
         return tensor_dtypes
-    elif device == InfiniDeviceEnum.MOORE:
-        # Filter out BF16 and F64 (some PyTorch ops do not support these dtypes on the MOORE platform)
-        return [dt for dt in tensor_dtypes if dt != InfiniDtype.BF16 and dt != InfiniDtype.F64]
     else:
         # Filter out torch.bfloat16
         return [dt for dt in tensor_dtypes if dt != torch.bfloat16]
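For context on what this filter does: devices on the full-support allow-list get the dtype list back unchanged, while everything else has unsupported dtypes stripped before the test cases are generated. A standalone sketch of that pattern using stand-in enums (the real `InfiniDeviceEnum` and `InfiniDtype` live in the infiniop test package; the names and members below are assumptions for illustration):

```python
from enum import Enum, auto

class Device(Enum):   # stand-in for InfiniDeviceEnum
    CPU = auto()
    NVIDIA = auto()
    OTHER = auto()

class Dtype(Enum):    # stand-in for InfiniDtype
    F16 = auto()
    F32 = auto()
    BF16 = auto()

FULL_SUPPORT = {Device.CPU, Device.NVIDIA}

def filter_dtypes(device, dtypes):
    # Full-support devices keep the list unchanged; others drop bfloat16,
    # mirroring the fallback branch of the filter above.
    if device in FULL_SUPPORT:
        return list(dtypes)
    return [dt for dt in dtypes if dt != Dtype.BF16]

print(filter_dtypes(Device.CPU, [Dtype.F32, Dtype.BF16]))    # keeps BF16
print(filter_dtypes(Device.OTHER, [Dtype.F32, Dtype.BF16]))  # drops BF16
```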
test/infiniop/silu.py

@@ -57,7 +57,7 @@ _TEST_CASES = [
 ]
 # Data types used for testing
-_TENSOR_DTYPES = [InfiniDtype.BF16, InfiniDtype.F16, InfiniDtype.F32, InfiniDtype.F64]
+_TENSOR_DTYPES = [InfiniDtype.BF16, InfiniDtype.F16, InfiniDtype.F32]
 # Tolerance map for different data types
 _TOLERANCE_MAP = {
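For reference, the op under test here is SiLU (also called swish), defined as x·σ(x) where σ is the logistic sigmoid. A quick scalar check of the definition, independent of the test harness:

```python
import math

def silu(x):
    # SiLU / swish: x * sigmoid(x)
    return x * (1.0 / (1.0 + math.exp(-x)))

print(silu(0.0))              # 0.0, since sigmoid(0) = 0.5 and 0 * 0.5 = 0
print(round(silu(1.0), 6))    # 0.731059
```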