To avoid a GPU memory leak, allow passing thread pools to dnn_trainer from outside (#2027)
* Problem: The CUDA runtime allocates resources for each thread, and apparently those resources are not freed when the corresponding threads terminate. Therefore, each instantiation of dnn_trainer leaks a bit of GPU memory.
  Solution: Add the possibility to pass thread pools in from outside, so that subsequent dnn_trainer instances can reuse the same threads and no GPU memory is leaked.
* Add helpful comments
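A minimal sketch of the intended usage pattern: create one long-lived dlib::thread_pool and hand it to each successive dnn_trainer, so the CUDA per-thread resources are allocated once instead of once per trainer. The extra constructor argument shown here is an assumption based on the description above, not a confirmed signature; consult dlib/dnn/trainer.h for the actual parameter type and position.

```cpp
// Hedged sketch -- the 4th constructor argument is assumed from the
// change description; check dlib/dnn/trainer.h for the real signature.
#include <memory>
#include <dlib/dnn.h>
#include <dlib/threads.h>

using namespace dlib;

// A tiny network type just to make the example concrete.
using net_type = loss_multiclass_log<fc<10, input<matrix<float>>>>;

int main()
{
    // Create the thread pool once, outside any trainer, so the CUDA
    // resources tied to its threads live for the whole program instead
    // of being re-allocated (and leaked) by every trainer instance.
    auto pool = std::make_shared<thread_pool>(1);

    for (int round = 0; round < 3; ++round)
    {
        net_type net;
        // Hypothetical overload: pass the externally owned pool in.
        dnn_trainer<net_type> trainer(net, sgd(), {}, pool);
        // ... call trainer.train_one_step(...) as usual. When this
        // trainer is destroyed, the pool and its threads survive,
        // so the next iteration reuses them rather than spawning
        // fresh threads with fresh CUDA runtime state.
    }
    return 0;
}
```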