Commit 05d597fa authored by DanilBaibak, committed by Facebook GitHub Bot

Updated USE_ROCM detection (#3008)

Summary:
We don't need physical GPU hardware to be present in order to compile with CUDA support.

This is a follow-up PR regarding `USE_ROCM` for issue https://github.com/pytorch/audio/issues/2979.
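For reference, a minimal sketch (not part of the patch) contrasting the runtime check with the build-time check used as the new default; it reuses the same expressions that appear in the diff below, and the variable names `use_rocm_default` / `use_cuda_default` are just illustrative:

```python
import torch

# torch.cuda.is_available() requires a working GPU and driver at runtime,
# whereas torch.backends.cuda.is_built() only reports whether the installed
# PyTorch binary was compiled with CUDA (or HIP) support. For picking build
# flags, the build-time check is enough; no physical hardware is needed.
use_rocm_default = torch.backends.cuda.is_built() and torch.version.hip is not None
use_cuda_default = torch.backends.cuda.is_built() and torch.version.hip is None
print(f"default USE_ROCM={use_rocm_default}, default USE_CUDA={use_cuda_default}")
```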

Pull Request resolved: https://github.com/pytorch/audio/pull/3008

Reviewed By: malfet

Differential Revision: D42708862

Pulled By: DanilBaibak

fbshipit-source-id: 90cedc80a2d180ca1e0912ad5b644398182417b8
parent 98b3ac17
@@ -38,7 +38,7 @@ _BUILD_KALDI = False if platform.system() == "Windows" else _get_build("BUILD_KA
 _BUILD_RNNT = _get_build("BUILD_RNNT", True)
 _BUILD_CTC_DECODER = _get_build("BUILD_CTC_DECODER", True)
 _USE_FFMPEG = _get_build("USE_FFMPEG", False)
-_USE_ROCM = _get_build("USE_ROCM", torch.cuda.is_available() and torch.version.hip is not None)
+_USE_ROCM = _get_build("USE_ROCM", torch.backends.cuda.is_built() and torch.version.hip is not None)
 _USE_CUDA = _get_build("USE_CUDA", torch.backends.cuda.is_built() and torch.version.hip is None)
 _USE_OPENMP = _get_build("USE_OPENMP", True) and "ATen parallel backend: OpenMP" in torch.__config__.parallel_info()
 _TORCH_CUDA_ARCH_LIST = os.environ.get("TORCH_CUDA_ARCH_LIST", None)