- 08 Aug, 2017 1 commit
Davis King authored

- 16 Jul, 2017 1 commit

Juha Reunanen authored
* Add per-pixel mean square loss
* Add documentation of loss_mean_squared_per_pixel_
* Add test case for per-pixel mean square loss: a simple autoencoder
* Review fix: reorder params of function tensor_index, so that the order corresponds to the convention used in the rest of the dlib code base
* Review fix: add breaks as intended, and change the rest of the test accordingly
* Again a case where the tests already work locally for me, but not on AppVeyor/Travis - this commit is a blindfolded attempt to fix the problem (and it also fixes a compiler warning)
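For context, a minimal sketch of a network built around the new loss. The layer sizes and 1x1 kernels are illustrative assumptions, chosen so the output dimensions trivially match the per-pixel truth matrices; they are not the layers from dlib's actual test case:

```cpp
#include <dlib/dnn.h>
using namespace dlib;

// 1x1 convolutions with stride 1 keep the output the same size as the
// input, so the per-pixel truth matrices can be the input images themselves.
using toy_autoencoder = loss_mean_squared_per_pixel<
    cont<1,1,1,1,1,          // "decoder": transposed conv back to 1 channel
    relu<con<4,1,1,1,1,      // "encoder": conv up to 4 channels
    input<matrix<float>>>>>>;

int main()
{
    dlib::rand rnd;
    std::vector<matrix<float>> images;
    for (int i = 0; i < 10; ++i)
        images.push_back(matrix_cast<float>(randm(8, 8, rnd)));  // dummy data

    toy_autoencoder net;
    dnn_trainer<toy_autoencoder> trainer(net);
    trainer.set_learning_rate(0.01);
    trainer.set_max_num_epochs(10);
    trainer.train(images, images);   // autoencoder: truth == input
}
```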
- 08 Jul, 2017 1 commit

Davis King authored
without to make sure both types of lookup work.
- 07 Jul, 2017 1 commit

Juha Reunanen authored
* Add new loss for weighted pixel inputs (may be useful e.g. to emphasize rare classes)
* Deduplicate method loss_multiclass_log_per_pixel_(weighted_)::to_label
* Add a simple test case for weighted inputs (also, fix a typo in test_tensor_resize_bilienar's name)
* Add loss_multiclass_log_per_pixel_weighted_ to loss_abstract.h
* Decrease the amount of weighting
* There's no need to train for a very long time
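A hedged sketch of how truth data for the weighted loss might be built, using the weighted_label type documented in loss_abstract.h. The choice of class 1 as the rare class and the weight 4.0f are made-up illustrative values:

```cpp
#include <dlib/dnn.h>
using namespace dlib;

typedef loss_multiclass_log_per_pixel_weighted_::weighted_label weighted_label;

// Turn plain uint16_t labels into weighted labels, giving a rare class
// extra influence on the gradient.
matrix<weighted_label> make_weighted_truth(const matrix<uint16_t>& truth)
{
    matrix<weighted_label> out(truth.nr(), truth.nc());
    for (long r = 0; r < truth.nr(); ++r)
    {
        for (long c = 0; c < truth.nc(); ++c)
        {
            const float weight = (truth(r,c) == 1) ? 4.0f : 1.0f;
            out(r,c) = weighted_label(truth(r,c), weight);
        }
    }
    return out;
}
```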
- 04 Jul, 2017 2 commits

Davis King authored

Davis King authored

- 03 Jul, 2017 1 commit

Juha Reunanen authored
* Problem: Visual Studio's vcpkgsrv.exe constantly uses a single CPU core, apparently never finishing whatever it's trying to do. Moreover, this issue prevents some operations, like switching from Debug to Release (and vice versa) in the IDE. (Your mileage may vary.)
  Workaround: Keep manually killing the vcpkgsrv.exe process.
  Solution: Disable IntelliSense for some files. Which files? Unfortunately, this seems to be a trial-and-error process.
* Disable IntelliSense for the ResNet declarations
* Disable IntelliSense for even more stuff
* Disable IntelliSense for all DNN unit tests
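The commit itself decides which files get the treatment; the general pattern for hiding template-heavy code from IntelliSense looks roughly like the sketch below. __INTELLISENSE__ is a macro that Visual Studio predefines only in its IntelliSense front end, never in the real compiler, and the residual-block declaration here is an illustrative stand-in, not dlib's actual ResNet definition:

```cpp
#include <dlib/dnn.h>

// Hide template-dense declarations from the IntelliSense parser only.
// The real compiler never defines __INTELLISENSE__, so it still sees
// (and type-checks) everything inside the guard.
#ifndef __INTELLISENSE__
template <typename SUBNET>
using res_block = dlib::relu<dlib::add_prev1<dlib::con<8,3,3,1,1,
                  dlib::relu<dlib::tag1<dlib::con<8,3,3,1,1,SUBNET>>>>>>;
#endif
```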
- 01 Jul, 2017 2 commits

Davis King authored
few minor things.
Juha Reunanen authored
* #288 - add new layer loss_multiclass_log_matrixoutput for semantic-segmentation purposes
* In semantic segmentation, add capability to ignore individual pixels when computing gradients
* In semantic segmentation, 65535 classes ought to be enough for anybody
* Divide matrix output loss by matrix dimensions too, in order to make losses related to differently sized matrices more comparable - note that this affects the required learning rate as well!
* Review fix: avoid matrix copy
* Review fix: rename to loss_multiclass_log_per_pixel
* Review fix: just use uint16_t as the label type
* Add more tests: check that network params and outputs are correct
* Improve error message when output and truth matrix dimensions do not match
* Add test case verifying that a single call of loss_multiclass_log_per_pixel equals multiple corresponding calls of loss_multiclass_log
* Fix test failure by training longer
* Remove the test case that fails on Travis for some reason, even though it works on AppVeyor and locally
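A small sketch of the pixel-ignoring capability from the list above. The constant is the 65535 value mentioned there, exposed in dlib as loss_multiclass_log_per_pixel_::label_to_ignore; the image size and pixel positions are arbitrary:

```cpp
#include <dlib/dnn.h>
using namespace dlib;

// Build a truth image in which one pixel is excluded from the loss.
matrix<uint16_t> make_truth()
{
    matrix<uint16_t> truth = zeros_matrix<uint16_t>(32, 32); // class 0 everywhere
    truth(10, 10) = 1;                                       // one pixel of class 1
    truth(0, 0) = loss_multiclass_log_per_pixel_::label_to_ignore;
    // ^ this pixel contributes neither loss nor gradient
    return truth;
}
```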
- 27 Jun, 2017 1 commit

Davis King authored
reallocation and copying inside conv_'s backward pass. Doing this required adding an add_to_output boolean option to the methods of tensor_conv.
- 22 Jun, 2017 1 commit

OranjeeGeneral authored
Refactored the interface to reduce complexity: the conv and convt layers' forward passes now have to call setup explicitly, and there is only one operator().
- 21 Apr, 2017 1 commit

Davis King authored

- 02 Apr, 2017 1 commit

Davis King authored
rather than the entire tensor.
- 16 Mar, 2017 1 commit

Joachim authored
Fixed the backward pass in the cont layer to accumulate gradients; it will now pass the layer test. Also removed compile warnings and changed some comments.
- 13 Mar, 2017 1 commit

Joachim authored

- 19 Feb, 2017 1 commit

Davis King authored

- 06 Feb, 2017 1 commit

Dennis Francis authored
* feature_addition: mean squared loss layer for multiple outputs (#404)
* Added the loss_mean_squared_multioutput layer to support multiple outputs.
* Also added a corresponding test case that tests a single-variable regression with multiple outputs.
* Added error checks on the truth argument: assert statements verifying that the truth argument passed to compute_loss_value_and_gradient() contains matrices of the correct dimension relative to the output tensor's size. Also added the requirements on the truth argument to the abstract documentation.
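A hedged sketch in the spirit of that test case: one input variable regressed onto three outputs. The network shape, data, and training settings are illustrative assumptions, not the test's actual values:

```cpp
#include <dlib/dnn.h>
using namespace dlib;

// One scalar input, three regression outputs via a single fc layer.
using net_type = loss_mean_squared_multioutput<fc<3, input<matrix<float>>>>;

int main()
{
    std::vector<matrix<float>> x, y;
    for (int i = 0; i < 100; ++i)
    {
        matrix<float> xi(1,1), yi(3,1);
        xi(0) = i / 100.0f;
        yi(0) = 2*xi(0);             // three linear targets
        yi(1) = -xi(0);
        yi(2) = 0.5f*xi(0) + 1;
        x.push_back(xi);
        y.push_back(yi);
    }

    net_type net;
    dnn_trainer<net_type> trainer(net);
    trainer.set_learning_rate(0.01);
    trainer.set_max_num_epochs(100);
    trainer.train(x, y);             // truth: 3x1 column vectors
}
```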
- 26 Nov, 2016 1 commit

Dennis Francis authored

- 25 Nov, 2016 1 commit

Dennis Francis authored

- 23 Nov, 2016 1 commit

Dennis Francis authored
Added mean squared loss layer "loss_mean_squared" to DNN as requested in https://github.com/davisking/dlib/issues/152. Also added a test case of a simple one-variable linear regression that uses this layer.
- 18 Nov, 2016 1 commit

Davis King authored

- 02 Nov, 2016 1 commit

Davis King authored
versions were calling into cuDNN. However, the cuDNN functions for doing this are horrifically slow, well over 100x slower than they should be, which is surprising since these functions are so trivial.
- 23 Oct, 2016 1 commit

Davis King authored

- 27 Aug, 2016 2 commits

Davis King authored

Davis King authored
alias tensors. Now any kind of tensor is supported.
- 12 Aug, 2016 1 commit

Davis King authored
cudnnGetConvolutionBackwardFilterAlgorithm() to pick invalid algorithms, resulting in cuDNN not working correctly.
- 06 Aug, 2016 1 commit

Davis King authored

- 11 Jun, 2016 1 commit

Davis King authored
automatically sizes the tensor.
- 01 Jun, 2016 1 commit

Davis King authored

- 27 May, 2016 2 commits

- 26 May, 2016 2 commits

Evgeniy Fominov authored

Fm authored

- 25 May, 2016 1 commit

Davis King authored

- 23 May, 2016 1 commit

Davis King authored

- 22 May, 2016 3 commits

Davis King authored
caused by num_computational_layers being wrong when tag layers were placed as the first layer. These visit functions being wrong also caused multi-GPU support to not work on such networks.
Davis King authored

Davis King authored

- 14 May, 2016 1 commit

Davis King authored
skip layers and add_prev style layers. In particular, in-place layers now overwrite the gradient information in their child layer only when they are operating in in-place mode; otherwise, they add their gradients to their child layers. It should also be noted that it's safe for in-place layers to overwrite gradients when in in-place mode, since their child layers are inaccessible while an in-place layer operates in in-place mode. This prevents any other layer from trying to add to the child layer, thereby avoiding the possibility of layer interference. So the bug this change fixes is that, when not in in-place mode, the child layers are still accessible, yet in-place layers were *still* overwriting the child layers' gradients.
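The rule described above reduces to assign-versus-accumulate. A hedged sketch of the dispatch follows; the function and parameter names are illustrative, not dlib's internal ones, though tt::add and memcpy are real dlib tensor utilities:

```cpp
#include <dlib/dnn.h>

// Propagate a gradient into a child layer's gradient buffer.
void propagate_gradient(dlib::tensor& child_grad, const dlib::tensor& my_grad, bool in_place_mode)
{
    if (in_place_mode)
    {
        // Safe to overwrite: in in-place mode nothing else can reach
        // child_grad, so no other layer's contribution can be lost.
        dlib::memcpy(child_grad, my_grad);
    }
    else
    {
        // The child is visible to other layers (e.g. via skip/add_prev),
        // so accumulate instead: child_grad = 1*child_grad + 1*my_grad.
        dlib::tt::add(1, child_grad, 1, my_grad);
    }
}
```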
- 05 May, 2016 1 commit

Davis King authored