- 19 Apr, 2020 1 commit
-
-
Davis King authored
-
- 18 Apr, 2020 3 commits
-
-
Davis King authored
Reduce code duplication a bit and make equal_error_rate() give correct results when called on data where all detection scores are identical. Previously it would say the EER was 0, but really it should have said 1 in this case.
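For context, a minimal sketch of the degenerate case this fixes. It assumes dlib's equal_error_rate() takes the scores of the low (e.g. impostor) class and the high (e.g. genuine) class and returns an (EER, threshold) pair, and that the function is reachable through dlib/statistics.h; check the library for the authoritative signature and header.
```
#include <dlib/statistics.h>
#include <iostream>
#include <vector>

int main()
{
    // Every score is identical, so no threshold can separate the two
    // classes: any cutoff either accepts everything or rejects everything.
    // Per this commit, the reported equal error rate should be 1, not 0.
    std::vector<double> low_scores(10, 0.5);   // e.g. impostor scores
    std::vector<double> high_scores(10, 0.5);  // e.g. genuine scores

    const auto eer = dlib::equal_error_rate(low_scores, high_scores);
    std::cout << "EER: " << eer.first << " at threshold " << eer.second << "\n";
}
```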
-
Davis King authored
-
Adrià Arrufat authored
* wip: attempt to use cuda for loss mse channel
* wip: maybe this is a step in the right direction
* Try to fix dereferencing the truth data (#1)
* Try to fix dereferencing the truth data
* Fix memory layout
* fix loss scaling and update tests
* rename temp1 to temp
* re-add lambda captures for output_width and output_height; clangd was complaining about this and had suggested removing them in the first place:
  ```
  Lambda capture 'output_height' is not required to be captured for this use (fix available)
  Lambda capture 'output_width' is not required to be captured for this use (fix available)
  ```
* add a weighted_loss typedef to loss_multiclass_log_weighted_ for consistency
* update docs for weighted losses
* refactor multi channel loss and add cpu-cuda tests
* make operator() const
* make error relative to the loss value

Co-authored-by: Juha Reunanen <juha.reunanen@tomaattinen.com>
-
- 14 Apr, 2020 1 commit
-
-
Davis King authored
-
- 04 Apr, 2020 1 commit
-
-
Davis King authored
-
- 03 Apr, 2020 1 commit
-
-
Adrià Arrufat authored
The thread pool was initialized after the network, so it led to a reorder warning in GCC 9.3.0.
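For readers who haven't hit -Wreorder before: C++ initializes members in declaration order, not in the order they appear in the constructor's initializer list, so a mismatched list triggers the warning (and can hide a real bug if one member's initializer reads another). A minimal illustration, unrelated to the actual dlib classes involved:
```
#include <vector>

struct reorder_demo
{
    // Members are initialized in declaration order: first net, then pool.
    std::vector<int> net;
    std::vector<int> pool;

    // Naming pool before net here does not change that order; GCC just
    // warns with -Wreorder.  The fix is to make the two orders agree,
    // e.g. by declaring the members in the order they are initialized.
    reorder_demo() : pool(8), net(1) {}
};

int main()
{
    reorder_demo d;
    return static_cast<int>(d.net.size());
}
```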
-
- 02 Apr, 2020 1 commit
-
-
Adrià Arrufat authored
* Remove outdated comment: it was there from when I was using a dnn_trainer to train the discriminator network.
* Fix case
-
- 31 Mar, 2020 4 commits
-
-
Davis King authored
-
Adrià Arrufat authored
* fix some warnings when running tests
* revert changes in CMakeLists.txt
* update example to make use of the newly promoted method
* update tests to make use of the newly promoted methods
-
Adrià Arrufat authored
* remove branch from cuda kernel
* promote lambda to a global function
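The commit doesn't show the kernel itself, but a common way to drop a branch from an activation like leaky_relu is to rewrite the conditional as straight arithmetic. A hedged sketch of that idea (function names are illustrative and this is not necessarily the exact formulation dlib ended up with):
```
#include <algorithm>

// Branching form: the result is chosen by a comparison.
float leaky_relu_branch(float x, float alpha)
{
    return x > 0 ? x : alpha * x;
}

// Branch-free form: both sides are computed and combined.
//   x > 0:  max(x,0) = x, min(x,0) = 0  ->  x
//   x <= 0: max(x,0) = 0, min(x,0) = x  ->  alpha * x
float leaky_relu_branchless(float x, float alpha)
{
    return std::max(x, 0.0f) + alpha * std::min(x, 0.0f);
}

int main()
{
    // Quick sanity check on both sides of zero.
    return leaky_relu_branch(-2.f, 0.01f) == leaky_relu_branchless(-2.f, 0.01f) &&
           leaky_relu_branch( 2.f, 0.01f) == leaky_relu_branchless( 2.f, 0.01f) ? 0 : 1;
}
```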
-
Adrià Arrufat authored
-
- 29 Mar, 2020 4 commits
-
-
Davis King authored
Promote some of the sub-network methods into the add_loss_layer interface so users don't have to write .subnet() so often.
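The commit doesn't list which methods were promoted, so the accessor used below is only an assumed example; the point is that things previously reached through .subnet() can now be called on the loss network directly.
```
#include <dlib/dnn.h>

using namespace dlib;

// A tiny network, just to have something concrete to call methods on.
using net_type = loss_multiclass_log<fc<2, relu<fc<8, input<matrix<float>>>>>>;

int main()
{
    net_type net;
    matrix<float> x(4, 1);
    x = 0;
    net(x);  // run one sample through the network

    // Before: reach into the underlying network explicitly.
    const tensor& out_before = net.subnet().get_output();

    // After this change (assumed example of a promoted accessor): call it
    // on the loss network itself, no .subnet() needed.
    const tensor& out_after = net.get_output();

    return out_before.size() == out_after.size() ? 0 : 1;
}
```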
-
Davis King authored
-
Davis King authored
-
Adrià Arrufat authored
* wip: dcgan-example
* wip: dcgan-example
* update example to use leaky_relu and remove bias from net
* wip
* it works!
* add more comments
* add visualization code
* add example documentation
* rename example
* fix comment
* better comment format
* fix the noise generator seed
* add message to hit enter for image generation
* fix srand, too
* add std::vector overload to update_parameters
* improve training stability
* better naming of variables: make sure it is clear we update the generator with the discriminator's gradient using fake samples and true labels (see the sketch after this entry)
* fix comment: generator -> discriminator
* update leaky_relu docs to match the relu ones
* replace not with !
* add Davis' suggestions to make training more stable
* use tensor instead of resizable_tensor
* do not use dnn_trainer for discriminator
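The bullet about updating the generator "with the discriminator's gradient using fake samples and true labels" is the standard non-saturating GAN generator step. Below is a deliberately tiny scalar toy that mirrors just that step; it is not the example's dlib code, and every name in it is made up for illustration.
```
#include <cmath>
#include <iostream>

int main()
{
    // Toy 1-D "GAN" mirroring the generator update described above: push
    // fake samples through the discriminator with the *real* label and
    // follow the resulting gradient.  The discriminator is frozen here to
    // keep the sketch short.
    double d_w = 0.5;   // discriminator: D(x) = sigmoid(d_w * x)
    double g_a = 0.1;   // generator:     G(z) = g_a * z
    const double lr = 0.1;

    const auto sigmoid = [](double v) { return 1.0 / (1.0 + std::exp(-v)); };

    for (int step = 0; step < 100; ++step)
    {
        const double z = 1.0;                  // "noise" sample
        const double fake = g_a * z;           // generator output
        const double p = sigmoid(d_w * fake);  // discriminator's belief the sample is real

        // Binary cross-entropy against label = 1 ("real"):
        //   dL/dg_a = (p - 1) * d_w * z
        const double g_grad = (p - 1.0) * d_w * z;
        g_a -= lr * g_grad;
    }

    std::cout << "generator parameter after training: " << g_a << "\n";
}
```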
-
- 21 Mar, 2020 1 commit
-
-
Adrià Arrufat authored
* add leaky_relu activation layer
* add inplace case for leaky_relu and test_layer
* make clear that alpha is not learned by leaky_relu
* remove branch from cuda kernel
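For reference, a minimal sketch of dropping the new layer into a network definition. As the commit notes, alpha is a fixed hyperparameter of the layer rather than a learned parameter (unlike prelu); the toy network below is made up for illustration and the layer's default alpha is assumed rather than taken from the commit.
```
#include <dlib/dnn.h>
#include <iostream>

using namespace dlib;

// leaky_relu slots in like any other activation layer.
using toy_net = loss_multiclass_log<
                    fc<10,
                    leaky_relu<
                    fc<32,
                    input<matrix<float>>>>>>;

int main()
{
    toy_net net;
    matrix<float> x(16, 1);
    x = 1;
    const unsigned long predicted = net(x);
    std::cout << "predicted class: " << predicted << "\n";
}
```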
-
- 19 Mar, 2020 2 commits
-
-
Juha Reunanen authored
* Problem: The CUDA runtime allocates resources for each thread, and apparently those resources are not freed when the corresponding threads terminate. Therefore, each instantiation of dnn_trainer leaks a bit of GPU memory.
  Solution: Add the possibility to pass thread pools from outside. This way, subsequent dnn_trainer instances can use the same threads, and there's no memory leak (the ownership pattern is sketched below).
* Add helpful comments
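To make that ownership pattern concrete, the sketch below shares one long-lived dlib::thread_pool between several trainer-like objects instead of letting each create its own threads. It only illustrates the idea; trainer_like is a hypothetical stand-in, and the actual dnn_trainer constructor overload added by this change is not reproduced here.
```
#include <dlib/threads.h>
#include <iostream>
#include <memory>

// Hypothetical stand-in for an object that, like dnn_trainer, needs worker
// threads.  All instances reuse the same pool, so per-thread CUDA resources
// would be allocated once rather than once per instance.
struct trainer_like
{
    explicit trainer_like(std::shared_ptr<dlib::thread_pool> pool) : pool_(std::move(pool)) {}

    void train_one_step()
    {
        pool_->add_task_by_value([] { /* device work would happen here */ });
        pool_->wait_for_all_tasks();
    }

    std::shared_ptr<dlib::thread_pool> pool_;
};

int main()
{
    const auto shared_pool = std::make_shared<dlib::thread_pool>(2);

    trainer_like a(shared_pool);
    trainer_like b(shared_pool);  // reuses the same threads as a
    a.train_one_step();
    b.train_one_step();
    std::cout << "done\n";
}
```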
-
scott-vsi authored
openblasp is a parallel implementation of OpenBLAS using pthreads, found on CentOS/Fedora.
-
- 18 Mar, 2020 1 commit
-
-
Adrià Arrufat authored
* add loss_multiclass_log_weighted
* fix class name in loss_abstract
* add loss_multiclass_log_weighted test
* rename test function to match class name
* fix typo
* reuse the weighted label struct across weighted losses
* do not break compatibility with loss_multiclass_log_per_pixel_weighted
* actually test the loss and fix docs
* fix build with gcc 9
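A hedged sketch of how the new loss might be used, assuming the shared weighted label struct pairs a class index with a float weight (the struct's exact name, fields, and field order are assumptions here; see loss_abstract.h for the real interface):
```
#include <dlib/dnn.h>
#include <vector>

using namespace dlib;

using net_type = loss_multiclass_log_weighted<fc<3, relu<fc<16, input<matrix<float>>>>>>;

int main()
{
    net_type net;

    std::vector<matrix<float>> samples(3, matrix<float>(8, 1));
    for (auto& s : samples)
        s = 1;

    // Give class 2 a larger weight so mistakes on it count more in the
    // loss.  The {label, weight} layout of weighted_label is assumed.
    std::vector<weighted_label<unsigned long>> labels = {
        {0, 1.0f}, {1, 1.0f}, {2, 4.0f}
    };

    dnn_trainer<net_type> trainer(net);
    trainer.set_max_num_epochs(1);
    trainer.train(samples, labels);
}
```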
-
- 14 Mar, 2020 1 commit
-
-
hwiesmann authored
* Prevent a compiler warning due to the usage of int instead of a size type
* Convert the status type to long to prevent compiler warnings
* The returned number of items read from a buffer is specified using the type "streamsize"

Co-authored-by: Hartwig <git@skywind.eu>
-
- 13 Mar, 2020 1 commit
-
-
Facundo Galán authored
Co-authored-by: Facundo Galan <fgalan@danaide.com.ar>
-
- 12 Mar, 2020 1 commit
-
-
scott-vsi authored
-
- 11 Mar, 2020 2 commits
-
-
Davis King authored
-
hwiesmann authored
Co-authored-by: Hartwig <git@skywind.eu>
-
- 10 Mar, 2020 1 commit
-
-
Adrià Arrufat authored
* simplify definition by reusing struct template parameter
* put resnet into its own namespace
* fix infer names
* rename struct impl to def
-
- 29 Feb, 2020 3 commits
-
-
Davis King authored
-
Davis King authored
-
martin authored
* imglab: add support for using chinese whispers for more automatic clustering
* widgets: refactor out zooming from wheel handling
* imglab: add keyboard shortcuts for zooming (tools/imglab/src/metadata_editor.cpp)
-
- 07 Feb, 2020 2 commits
-
-
Davis King authored
-
Adrià Arrufat authored
* Add dnn_introduction3_ex
-
- 31 Jan, 2020 2 commits
-
-
Davis King authored
and the unit test servers don't even support it anymore.
-
Davis King authored
-
- 29 Jan, 2020 4 commits
-
-
Davis King authored
-
Juha Reunanen authored
-
Hye Sung Jung authored
-
Julien Schueller authored
Dlib does not use nsl symbols, so why was this necessary? It makes the conda-forge build fail.
-
- 27 Jan, 2020 1 commit
-
-
Davis King authored
-
- 20 Jan, 2020 2 commits
-
-
Davis King authored
-
Davis King authored
-