Prevent excessive parallelism in PyTorch.
We're already running as many worker processes in parallel as we have CPU cores. Furthermore, the core count may be incorrectly detected as 36 (we've seen this with pytest-xdist), which compounds the problem: each worker spawns its own full-sized thread pool. PyTorch performance craters without this limit.
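A minimal sketch of the kind of limit this refers to, assuming it runs in each worker process before any heavy torch work (the environment variables are the standard OpenMP/MKL ones; the value 1 reflects one compute thread per worker):

```python
import os

# Cap OpenMP/MKL threading before torch is imported, since these
# libraries read the variables at load time.
os.environ.setdefault("OMP_NUM_THREADS", "1")
os.environ.setdefault("MKL_NUM_THREADS", "1")

import torch

# One intra-op thread per worker process: the parallelism comes from
# the processes themselves, not from each process's thread pool.
torch.set_num_threads(1)
```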