1. 29 Mar, 2020 3 commits
    • Davis King · c79f64f5
    • Davis King · fd014534
    • Adrià Arrufat
      Add DCGAN example (#2035) · f42f100d
      * wip: dcgan-example
      
      * wip: dcgan-example
      
      * update example to use leaky_relu and remove bias from net
      
      * wip
      
      * it works!
      
      * add more comments
      
      * add visualization code
      
      * add example documentation
      
      * rename example
      
      * fix comment
      
      * better comment format
      
      * fix the noise generator seed
      
      * add message to hit enter for image generation
      
      * fix srand, too
      
      * add std::vector overload to update_parameters
      
      * improve training stability
      
      * better naming of variables
      
      make sure it is clear we update the generator with the discriminator's
      gradient using fake samples and true labels
      
      * fix comment: generator -> discriminator
      
      * update leaky_relu docs to match the relu ones
      
      * replace not with !
      
      * add Davis' suggestions to make training more stable
      
      * use tensor instead of resizable_tensor
      
      * do not use dnn_trainer for discriminator
  2. 21 Mar, 2020 1 commit
    • Adrià Arrufat
      add leaky_relu activation layer (#2033) · d610e56c
      * add leaky_relu activation layer
      
      * add inplace case for leaky_relu and test_layer
      
      * make clear that alpha is not learned by leaky_relu
      
      * remove branch from cuda kernel
  3. 19 Mar, 2020 2 commits
    • Juha Reunanen
      To avoid a GPU memory leak, allow passing thread pools to dnn_trainer from outside (#2027) · 74123841
      * Problem: The CUDA runtime allocates resources for each thread, and apparently those resources are not freed when the corresponding threads terminate. As a result, each instantiation of dnn_trainer leaks a bit of GPU memory.
      
      Solution: Add the option to pass thread pools in from outside. That way, subsequent dnn_trainer instances can reuse the same threads, and there is no memory leak.
      
      * Add helpful comments
    • scott-vsi
      link against openblasp (#2028) · 6fc503d2
      openblasp is a parallel build of OpenBLAS that uses pthreads, found on CentOS/Fedora
  4. 18 Mar, 2020 1 commit
    • Adrià Arrufat
      add loss multiclass log weighted (#2022) · 1380e6b9
      * add loss_multiclass_log_weighted
      
      * fix class name in loss_abstract
      
      * add loss_multiclass_log_weighted test
      
      * rename test function to match class name
      
      * fix typo
      
      * reuse the weighted label struct across weighted losses
      
      * do not break compatibility with loss_multiclass_log_per_pixel_weighted
      
      * actually test the loss and fix docs
      
      * fix build with gcc 9
  5. 14 Mar, 2020 1 commit
  6. 13 Mar, 2020 1 commit
  7. 12 Mar, 2020 1 commit
  8. 11 Mar, 2020 2 commits
  9. 10 Mar, 2020 1 commit
  10. 29 Feb, 2020 3 commits
  11. 07 Feb, 2020 2 commits
  12. 31 Jan, 2020 2 commits
  13. 29 Jan, 2020 4 commits
  14. 27 Jan, 2020 1 commit
  15. 20 Jan, 2020 4 commits
  16. 18 Jan, 2020 5 commits
  17. 17 Jan, 2020 1 commit
  18. 15 Jan, 2020 5 commits