OpenDAS / vision · Commit 562b8463 (Unverified)
Authored Jun 16, 2021 by Nicolas Hug; committed Jun 16, 2021 by GitHub.
[FBcode->GH] Allow all torchvision test rules to run with RE (#4073)
Parent: f4ab3e7e
1 changed file: test/conftest.py (+25 additions, -2 deletions)
```diff
@@ -14,12 +14,19 @@ def pytest_configure(config):
 def pytest_collection_modifyitems(items):
-    # This hook is called by pytest after it has collected the tests (google its name!)
+    # This hook is called by pytest after it has collected the tests (google its name to check out its doc!)
     # We can ignore some tests as we see fit here, or add marks, such as a skip mark.
+    #
+    # Typically here, we try to optimize CI time. In particular, the GPU CI instances don't need to run the
+    # tests that don't need CUDA, because those tests are extensively tested in the CPU CI instances already.
+    # This is true for both CircleCI and the fbcode internal CI.
+    # In the fbcode CI, we have an additional constraint: we try to avoid skipping tests. So instead of relying on
+    # pytest.mark.skip, in fbcode we literally just remove those tests from the `items` list, and it's as if
+    # these tests never existed.
     out_items = []
     for item in items:
-        # The needs_cuda mark will exist if the test was explicitely decorated with
+        # The needs_cuda mark will exist if the test was explicitly decorated with
         # the @needs_cuda decorator. It will also exist if it was parametrized with a
         # parameter that has the mark: for example if a test is parametrized with
         # @pytest.mark.parametrize('device', cpu_and_gpu())
```
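The hunk above only shows the edges of the filtering loop; the condition that decides which items to drop is elided from this diff. As a rough sketch of the pattern the comments describe (remove CUDA-only tests from `items` rather than skipping them), here is a minimal, self-contained illustration. `FakeItem`, `modifyitems`, and the `machine_has_cuda` flag are stand-ins invented for this sketch, not torchvision's actual code; only the `needs_cuda` mark name and the `items[:]` slice assignment come from the diff.

```python
class FakeItem:
    """Stand-in for a collected pytest item exposing get_closest_marker()."""
    def __init__(self, name, marks=()):
        self.name = name
        self._marks = set(marks)

    def get_closest_marker(self, name):
        # Real pytest returns a Mark object or None; a truthy/None
        # contract is enough for this sketch.
        return name if name in self._marks else None

def modifyitems(items, machine_has_cuda=False):
    # Hypothetical filtering condition: on a CPU-only machine, drop tests
    # carrying the needs_cuda mark entirely instead of marking them skipped.
    out_items = []
    for item in items:
        needs_cuda = item.get_closest_marker("needs_cuda") is not None
        if needs_cuda and not machine_has_cuda:
            continue
        out_items.append(item)
    # In-place slice assignment, as in the diff: pytest holds a reference
    # to this exact list, so it must be mutated rather than rebound.
    items[:] = out_items

items = [FakeItem("test_cpu"), FakeItem("test_gpu", marks={"needs_cuda"})]
modifyitems(items, machine_has_cuda=False)
print([i.name for i in items])  # -> ['test_cpu']
```

The `items[:] = out_items` idiom is the load-bearing detail: rebinding `items = out_items` would have no effect, since pytest keeps its own reference to the original list.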
```diff
@@ -57,3 +64,19 @@ def pytest_collection_modifyitems(items):
         out_items.append(item)

     items[:] = out_items
+
+
+def pytest_sessionfinish(session, exitstatus):
+    # This hook is called after all tests have run, and just before returning an exit status.
+    # We here change exit code 5 into 0.
+    #
+    # 5 is issued when no tests were actually run, e.g. if you use `pytest -k some_regex_that_is_never_matched`.
+    #
+    # Having no test being run for a given test rule is a common scenario in fbcode, and typically happens on
+    # the GPU test machines which don't run the CPU-only tests (see pytest_collection_modifyitems above). For
+    # example `test_transforms.py` doesn't contain any CUDA test at the time of
+    # writing, so on a GPU test machine, testpilot would invoke pytest on this file and no test would be run.
+    # This would result in pytest returning 5, causing testpilot to raise an error.
+    # To avoid this, we transform this 5 into a 0 to make testpilot happy.
+    if exitstatus == 5:
+        session.exitstatus = 0
```
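The exit-status rule this hook implements can be stated as a pure function: pytest's "no tests collected" status (5, exposed as `pytest.ExitCode.NO_TESTS_COLLECTED` in modern pytest) becomes success, and every other status passes through unchanged. A small sketch, with `adjust_exitstatus` as a hypothetical name for illustration:

```python
# pytest returns 5 when it collected zero tests; see pytest.ExitCode.
NO_TESTS_COLLECTED = 5

def adjust_exitstatus(exitstatus):
    # Map "no tests collected" to success so an empty test rule is
    # not treated as a failure; leave real failures (1, 2, ...) intact.
    return 0 if exitstatus == NO_TESTS_COLLECTED else exitstatus

print(adjust_exitstatus(5))  # -> 0
print(adjust_exitstatus(1))  # -> 1
```

In the hook itself the mapping is applied by assigning to `session.exitstatus`, which is what pytest ultimately returns to the caller (testpilot, in the fbcode setup described above).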