"tests/test_tokenization_realm.py" did not exist on "c89bdfbe720bc8f41c7dc6db5473a2cb0955f224"
Unverified Commit 5eeaef92 authored by Alex McKinney, committed by GitHub

Adds `TRANSFORMERS_TEST_BACKEND` (#25655)

* Adds `TRANSFORMERS_TEST_BACKEND`
Allows specifying an arbitrary additional import following the first `import torch`.
This is useful for custom backends that require an extra import to trigger backend registration with upstream torch.
See https://github.com/pytorch/benchmark/pull/1805 for a similar change in `torchbench`.

* Update src/transformers/testing_utils.py
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>

* Adds real backend example to documentation

---------
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
parent fd56f7f0
@@ -511,15 +511,20 @@ from transformers.testing_utils import get_gpu_count
n_gpu = get_gpu_count() # works with torch and tf
```
-### Testing with a specific PyTorch backend
+### Testing with a specific PyTorch backend or device

-To run the test suite on a specific torch backend add `TRANSFORMERS_TEST_DEVICE="$device"` where `$device` is the target backend. For example, to test on CPU only:
+To run the test suite on a specific torch device, add `TRANSFORMERS_TEST_DEVICE="$device"` where `$device` is the target backend. For example, to test on CPU only:
```bash
TRANSFORMERS_TEST_DEVICE="cpu" pytest tests/test_logging.py
```
This variable is useful for testing custom or less common PyTorch backends such as `mps`. It can also be used to achieve the same effect as `CUDA_VISIBLE_DEVICES` by targeting specific GPUs or testing in CPU-only mode.
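As a concrete illustration of that last point, here is a hedged sketch of targeting a single accelerator; `tests/test_logging.py` is only an example path, and `cuda:1` / `mps` assume a machine that actually exposes those devices:

```bash
# Run the logging tests on the second CUDA device only (assumes at least two GPUs are visible).
TRANSFORMERS_TEST_DEVICE="cuda:1" pytest tests/test_logging.py

# Run the same tests on Apple's Metal backend (assumes a torch build with MPS support).
TRANSFORMERS_TEST_DEVICE="mps" pytest tests/test_logging.py
```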
+Certain devices will require an additional import after importing `torch` for the first time. This can be specified using the environment variable `TRANSFORMERS_TEST_BACKEND`:
+
+```bash
+TRANSFORMERS_TEST_BACKEND="torch_npu" pytest tests/test_logging.py
+```
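Assuming the two variables can be combined (they are read independently, so this is a reasonable but untested-here assumption) and that `torch_npu` is installed, a sketch of pairing the extra backend import with an explicit target device looks like this; the test path is again only an example:

```bash
# Import torch_npu right after torch, then point the test suite at the first NPU.
TRANSFORMERS_TEST_BACKEND="torch_npu" TRANSFORMERS_TEST_DEVICE="npu:0" pytest tests/test_logging.py
```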
### Distributed training
@@ -16,6 +16,7 @@ import collections
import contextlib
import doctest
import functools
+import importlib
import inspect
import logging
import multiprocessing
@@ -642,6 +643,17 @@ if is_torch_available():
torch_device = "npu"
else:
torch_device = "cpu"
if "TRANSFORMERS_TEST_BACKEND" in os.environ:
backend = os.environ["TRANSFORMERS_TEST_BACKEND"]
try:
_ = importlib.import_module(backend)
except ModuleNotFoundError as e:
raise ModuleNotFoundError(
f"Failed to import `TRANSFORMERS_TEST_BACKEND` '{backend}'! This should be the name of an installed module. The original error (look up to see its"
f" traceback):\n{e}"
) from e
else:
torch_device = None
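To see the failure path of this guard in practice, a sketch with a deliberately made-up module name (so the import cannot succeed):

```bash
# The made-up backend module is not installed, so importing transformers.testing_utils
# raises ModuleNotFoundError with the "Failed to import `TRANSFORMERS_TEST_BACKEND`" message
# instead of silently falling back to another device.
TRANSFORMERS_TEST_BACKEND="not_a_real_backend" pytest tests/test_logging.py
```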