1. 19 Mar, 2020 2 commits
      To avoid a GPU memory leak, allow passing thread pools to dnn_trainer from outside (#2027) · 74123841
      Juha Reunanen authored
      * Problem: The CUDA runtime allocates resources for each thread, and apparently those resources are not freed when the corresponding threads terminate. Therefore, each instantiation of dnn_trainer leaks a bit of GPU memory.
      
      Solution: Add the possibility to pass thread pools from outside. This way, subsequent dnn_trainer instances can use the same threads, and there's no memory leak.
      
      * Add helpful comments
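      The fix above keeps worker threads alive outside any single trainer, so per-thread CUDA resources are allocated once and reused. A minimal sketch of that pattern follows; this is not dlib's actual API, and `thread_pool`, `trainer`, and `train_step` are illustrative names only:

      ```cpp
      #include <cassert>
      #include <condition_variable>
      #include <functional>
      #include <future>
      #include <iostream>
      #include <mutex>
      #include <queue>
      #include <thread>
      #include <vector>

      // A minimal pool whose threads outlive any single user of the pool.
      class thread_pool {
      public:
          explicit thread_pool(std::size_t n) {
              for (std::size_t i = 0; i < n; ++i)
                  workers_.emplace_back([this] { run(); });
          }
          ~thread_pool() {
              { std::lock_guard<std::mutex> lk(m_); done_ = true; }
              cv_.notify_all();
              for (auto& t : workers_) t.join();
          }
          void submit(std::function<void()> job) {
              { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
              cv_.notify_one();
          }
      private:
          void run() {
              for (;;) {
                  std::function<void()> job;
                  {
                      std::unique_lock<std::mutex> lk(m_);
                      cv_.wait(lk, [this] { return done_ || !jobs_.empty(); });
                      if (done_ && jobs_.empty()) return;
                      job = std::move(jobs_.front());
                      jobs_.pop();
                  }
                  job();
              }
          }
          std::vector<std::thread> workers_;
          std::queue<std::function<void()>> jobs_;
          std::mutex m_;
          std::condition_variable cv_;
          bool done_ = false;
      };

      // A trainer-like object that borrows the pool instead of owning threads.
      // Because the threads persist across trainer instances, any per-thread
      // state (e.g. CUDA runtime resources) is only allocated once.
      struct trainer {
          explicit trainer(thread_pool& p) : pool(p) {}
          int train_step(int x) {
              std::promise<int> pr;
              auto fut = pr.get_future();
              pool.submit([&pr, x] { pr.set_value(x * 2); });
              return fut.get();
          }
          thread_pool& pool;
      };

      int main() {
          thread_pool pool(2);       // created once, outside any trainer
          for (int i = 0; i < 3; ++i) {
              trainer t(pool);       // each trainer reuses the same threads
              std::cout << t.train_step(i) << "\n";
          }
      }
      ```

      The key design point mirrors the commit: thread lifetime is decoupled from trainer lifetime, so constructing and destroying trainers no longer creates and abandons threads.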
      link against openblasp (#2028) · 6fc503d2
      scott-vsi authored
      openblasp is a parallel build of OpenBLAS that uses pthreads, found on CentOS/Fedora
  2. 18 Mar, 2020 1 commit
      add loss multiclass log weighted (#2022) · 1380e6b9
      Adrià Arrufat authored
      * add loss_multiclass_log_weighted
      
      * fix class name in loss_abstract
      
      * add loss_multiclass_log_weighted test
      
      * rename test function to match class name
      
      * fix typo
      
      * reuse the weighted label struct across weighted losses
      
      * do not break compatibility with loss_multiclass_log_per_pixel_weighted
      
      * actually test the loss and fix docs
      
      * fix build with gcc 9
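      For context, a weighted multiclass log loss scales each sample's negative log-probability by a per-label weight, so under-represented classes can count for more. A minimal sketch of that computation follows; `weighted_label` and `weighted_log_loss` are illustrative names inspired by the commit, not dlib's exact API:

      ```cpp
      #include <algorithm>
      #include <cassert>
      #include <cmath>
      #include <cstddef>
      #include <iostream>
      #include <vector>

      // Illustrative (label, weight) pair, mirroring the idea of a weighted
      // label struct shared across weighted losses.
      struct weighted_label {
          std::size_t label;
          double weight;
      };

      // Weighted multiclass log loss for one sample: softmax over the raw
      // scores, then the negative log-probability of the true class, scaled
      // by that label's weight. The max-subtraction keeps exp() stable.
      double weighted_log_loss(const std::vector<double>& scores,
                               const weighted_label& y) {
          double m = scores[0];
          for (double s : scores) m = std::max(m, s);
          double z = 0;
          for (double s : scores) z += std::exp(s - m);
          double logp = scores[y.label] - m - std::log(z);
          return -y.weight * logp;
      }

      int main() {
          // Two equal scores => probability 0.5 for each class, so the
          // unweighted loss is ln(2); a weight of 2 doubles it.
          weighted_label y{0, 2.0};
          std::cout << weighted_log_loss({0.0, 0.0}, y) << "\n";  // 2*ln(2)
      }
      ```

      With all weights equal to 1 this reduces to the ordinary multiclass log loss, which matches the commit's note that the weighted variant reuses the same label struct as the other weighted losses.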
  3. 14 Mar, 2020 1 commit
  4. 13 Mar, 2020 1 commit
  5. 12 Mar, 2020 1 commit
  6. 11 Mar, 2020 2 commits
  7. 10 Mar, 2020 1 commit
  8. 29 Feb, 2020 3 commits
  9. 07 Feb, 2020 2 commits
  10. 31 Jan, 2020 2 commits
  11. 29 Jan, 2020 4 commits
  12. 27 Jan, 2020 1 commit
  13. 20 Jan, 2020 4 commits
  14. 18 Jan, 2020 5 commits
  15. 17 Jan, 2020 1 commit
  16. 15 Jan, 2020 6 commits
  17. 13 Jan, 2020 3 commits