- 17 Nov, 2021 1 commit
  Adrià Arrufat authored
- 06 Nov, 2021 1 commit
  Adrià Arrufat authored
  * Replace fc classifier with svm_multiclass_linear_trainer
  * Mention find_max_global()
  * Use double instead of float for extracted features
  * fix compilation with double features
  * Revert "fix compilation with double features"
    This reverts commit 76ebab4b91ed31d2332206fe8de092043c0f687f.
  * Revert "Use double instead of float for extracted features"
    This reverts commit 9a50809ebf0f420e72a3c2b4b856dc1a71b9c6b3.
  * Find best C using global optimization

  Co-authored-by: Davis E. King <davis@dlib.net>
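A minimal sketch of the approach this entry lands: train dlib's svm_multiclass_linear_trainer on extracted features and pick the best C with find_max_global(). The data handling below is a placeholder, not the example's actual code:

```cpp
#include <dlib/global_optimization.h>
#include <dlib/svm.h>

using sample_type = dlib::matrix<float, 0, 1>;
using kernel_type = dlib::linear_kernel<sample_type>;

int main()
{
    std::vector<sample_type> samples;    // features extracted by the net would go here
    std::vector<unsigned long> labels;
    // ... fill samples and labels with real data before running ...

    // Objective for find_max_global(): mean accuracy of 3-fold cross-validation.
    const auto cross_validation_score = [&](const double c)
    {
        dlib::svm_multiclass_linear_trainer<kernel_type, unsigned long> trainer;
        trainer.set_c(c);
        const dlib::matrix<double> cm =
            dlib::cross_validate_multiclass_trainer(trainer, samples, labels, 3);
        return dlib::sum(dlib::diag(cm)) / dlib::sum(cm);
    };

    // Search C over several orders of magnitude, then train the final classifier.
    const auto best = dlib::find_max_global(cross_validation_score, {1e-4}, {1e4},
                                            dlib::max_function_calls(30));
    dlib::svm_multiclass_linear_trainer<kernel_type, unsigned long> trainer;
    trainer.set_c(best.x(0));
    const auto df = trainer.train(samples, labels);
}
```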
- 30 Oct, 2021 1 commit
  Adrià Arrufat authored
  * wip: loss goes down when training without a dnn_trainer; if I use a dnn_trainer, it segfaults (also with bigger batch sizes...)
  * remove commented code
  * fix gradient computation (hopefully)
  * fix loss computation
  * fix crash in input_rgb_image_pair::to_tensor
  * fix alias tensor offset
  * refactor loss and input layers and complete the example
  * add more data augmentation
  * add documentation (x2)
  * small fix in the gradient computation and reuse terms
  * fix warning in comment
  * use tensor_tools instead of matrix to compute the gradients
  * complete the example program
  * add support for multi-GPU
  * Update dlib/dnn/input_abstract.h (x2)
  * Update dlib/dnn/loss_abstract.h
  * Update examples/dnn_self_supervised_learning_ex.cpp (x4)
  * [TYPE_SAFE_UNION] upgrade (#2443):
    * MSVC doesn't like the keywords not and and
    * added tests for emplace(), copy semantics, move semantics, swap, and overloaded apply_to_contents with non-void return types
    * didn't need is_void anymore; added result_of_t; didn't really need ostream_helper or istream_helper; split apply_to_contents into apply_to_contents (returns void) and visit (returns anything, so long as the visitor is publicly accessible)
    * updated the abstract file
    * added get_type_t; removed duplicate deserialize_helper; don't use std::decay_t, that's C++14
    * removed whitespace; no return statement needed when calling apply_to_contents_impl(); use unchecked_get() whenever possible to minimise explicit pointer casting (let's keep that to a minimum)
    * added type_safe_union_size and, when C++14 is available, type_safe_union_size_v, with tests for both
    * test type_safe_union_size_v
    * testing nested unions with visitors
    * re-added comment
    * added index() in the abstract file
    * refactored reset() to clear(); added a comment about clear() in the abstract file; in deserialize(), only reset the object if necessary
    * removed unnecessary comments; struct is_valid is not mentioned in the abstract: instead of requiring T to be a valid type, it is ensured; made get_type and get_type_t private, client code shouldn't need them; shuffled some functions around; removed type_safe_union_size and type_safe_union_size_v, not needed; renamed reset() to clear(); bug fix in deserialize(), the index counts from 1, not 0; improved the abstract file
    * refactored index() to get_current_type_id() as per suggestion
    * maybe slightly improved docs
    * HURRAY, don't need std::result_of or std::invoke_result for visit() to work: just privately define your own type trait, in this case called return_type and return_type_t; apply_to_contents() now always calls visit()
    * example with a private visitor using friendship, with non-void return types
    * Fix up contracts: it can't be a postcondition that T is a valid type, since the choice of T is up to the caller, not something these functions decide; made it a precondition
    * Update dlib/type_safe_union/type_safe_union_kernel_abstract.h (x3)
    * added more tests for copy constructors/assignments, move constructors/assignments, and converting constructors/assignments; helper_copy -> helper_forward; added validate_type<T> in a couple of places
    * helper_move only takes non-const lvalue references, so we are not using std::move with universal references
    * use enable_if<is_valid<T>> in favor of validate_type<T>()
    * added is_valid_check<>, which wraps enable_if<is_valid<T>, bool> and makes the use of SFINAE more robust
  * Just minor cleanup of docs, renamed some stuff, tweaked formatting
  * fix spelling error
  * fix most vexing parse error

  Co-authored-by: Davis E. King <davis@dlib.net>
  Co-authored-by: Davis E. King <davis685@gmail.com>
  Co-authored-by: pfeatherstone <45853521+pfeatherstone@users.noreply.github.com>
  Co-authored-by: pfeatherstone <peter@me>
  Co-authored-by: pf <pf@me>
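Picking out the most reusable piece of this entry, a minimal sketch of the upgraded type_safe_union's visit() with a non-void return type, as the PR describes it. The to_string_visitor below is hypothetical, not from the PR:

```cpp
#include <dlib/type_safe_union.h>
#include <iostream>
#include <string>

// A hypothetical visitor: one operator() per alternative, all returning std::string.
struct to_string_visitor
{
    std::string operator()(int v) const { return std::to_string(v); }
    std::string operator()(const std::string& v) const { return v; }
};

int main()
{
    dlib::type_safe_union<int, std::string> u;

    u = 42;
    // visit() forwards the currently held object to the matching overload and,
    // unlike the old void-returning apply_to_contents(), can return a value.
    std::cout << u.visit(to_string_visitor{}) << '\n';   // prints 42

    u = std::string("hello");
    std::cout << u.visit(to_string_visitor{}) << '\n';   // prints hello
    std::cout << u.get_current_type_id() << '\n';        // identifies the held alternative
}
```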
- 11 Oct, 2021 1 commit
  Adrià Arrufat authored
  * add helper methods to implement fused convolutions
  * fix grammar
  * add method to disable affine layer and update serialization
  * add documentation for .disable()
  * add fuse_convolutions visitor and documentation
  * update docs: net is not constant
  * fix xml formatting and use std::boolalpha
  * fix warning and update net requirement for visitor
  * fix segfault in fuse_convolutions visitor: copy unconditionally
  * make the visitor class a friend of the con_ class
  * set up the biases alias tensor after enabling bias
  * simplify visitor a bit
  * fix comment
  * set up the biases size, somehow this got lost
  * copy the parameters before resizing
  * remove enable_bias() method, since the visitor is now a friend
  * Revert "remove enable_bias() method, since the visitor is now a friend"
    This reverts commit 35b92b16316f19a7f1f1b1313c9ab874f4d6199b.
  * update the visitor to remove the friend requirement
  * improve the behavior of enable_bias and describe it better
  * wip: use cudnnConvolutionBiasActivationForward when the activation has a bias
  * wip: fix cpu compilation
  * WIP: not-working fused ReLU
  * WIP: forgot to disable ReLU in the visitor (does not change the fact that it does not work)
  * WIP: more general setup of 4d tensors (still not working)
  * fused convolutions seem to be working now, more testing needed
  * move visitor to the bottom of the file
  * fix CPU side and clean up code
  * Do not try to fuse the activation layers: fusing the activation layers in one cuDNN call is only supported for the cuDNN activations (ReLU, Sigmoid, TanH...), which might lead to surprising behavior. So, let's just fuse the batch norm and the convolution into one cuDNN call using the IDENTITY activation function.
  * Set the correct forward algorithm for the identity activation
    Ref: https://docs.nvidia.com/deeplearning/cudnn/api/index.html#cudnnConvolutionBiasActivationForward
  * move the affine alias template to its original position
  * remove unused param in relu and simplify the example (deleted before merge)
  * simplify conv bias logic and fix a deserialization issue
  * fix enabling bias on convolutions
  * remove test example
  * fix typos and update documentation
  * remove ccache leftovers from CMakeLists.txt
  * re-add new line
  * fix enable/disable bias on unallocated networks
  * update comment to mention cudnnConvolutionBiasActivationForward
  * Apply documentation suggestions from code review
  * update affine docs to talk in terms of gamma and beta
  * simplify tensor_conv interface
  * fix tensor_conv operator() with biases
  * add fuse_layers test
  * add an example on how to use the fuse_layers function

  Co-authored-by: Davis E. King <davis@dlib.net>
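A minimal sketch of what the fuse_layers function from this entry is for; the network below is an arbitrary stand-in with the con -> affine pattern the fusion targets, not the entry's example code:

```cpp
#include <dlib/dnn.h>

// An arbitrary small net with the con -> affine pattern fuse_layers targets.
using net_type = dlib::loss_multiclass_log<
    dlib::fc<10,
    dlib::relu<dlib::affine<dlib::con<32, 3, 3, 1, 1,
    dlib::input<dlib::matrix<float>>>>>>>;

int main()
{
    net_type net;
    dlib::matrix<float> img(32, 32);
    img = 0;
    net(img);  // forward once so all parameters are allocated
    // (in practice you would deserialize a trained model instead)

    // Folds each affine layer (e.g. a frozen batch norm) into the preceding
    // convolution, enabling the convolution's bias if needed, so inference
    // performs one fused operation instead of two.
    dlib::fuse_layers(net);
}
```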
- 15 Sep, 2021 1 commit
  Jakub Mareda authored
  * Missing include for `dlib::loss_multiclass_log_per_pixel_::label_to_ignore`
    I was trying to compile the examples and encountered this issue after moving `rgb_label_image_to_index_label_image` to a cpp file. Headers should include all the symbols they mention.
  * Update pascal_voc_2012.h
    Use the official entry point for including the dnn stuff.

  Co-authored-by: Davis E. King <davis685@gmail.com>
- 30 Jul, 2021 1 commit
  Adrià Arrufat authored
- 09 Dec, 2020 1 commit
  Abdolkarim Saeedi authored
  Fix a simple typo in the inception training example.
- 25 Nov, 2020 1 commit
  Adrià Arrufat authored
  * Rename function to disable_duplicative_biases
  * also rename the functions in the tests... oops
- 20 Oct, 2020 1 commit
  Adrià Arrufat authored
  * wip: layer normalization on cpu
  * wip: add cuda implementation, not working yet
  * wip: try to fix cuda implementation
  * swap grid_stride_range and grid_stride_range_y: does not work yet
  * fix CUDA implementation
  * implement cuda gradient
  * add documentation, move layer_norm, update bn_visitor
  * add tests
  * use stddev instead of variance in the test (they are both 1, anyway)
  * add test for means and invstds on CPU and CUDA
  * rename visitor to disable_duplicative_bias
  * handle more cases in visitor_disable_input_bias
  * add tests for visitor_disable_input_bias
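A minimal sketch combining this entry's two features: the new layer_norm layer, and the visitor that disables the now-redundant bias of layers feeding a normalization (using its final name, disable_duplicative_biases, from the rename recorded above under 25 Nov, 2020). The net itself is an arbitrary stand-in:

```cpp
#include <dlib/dnn.h>

// A small net where the convolution feeds a layer_norm: the conv's own bias
// is redundant because layer_norm already applies a learned bias.
using net_type = dlib::loss_multiclass_log<
    dlib::fc<10,
    dlib::relu<dlib::layer_norm<dlib::con<32, 3, 3, 1, 1,
    dlib::input<dlib::matrix<float>>>>>>>;

int main()
{
    net_type net;
    // Walks the net and calls disable_bias() on every layer whose output
    // goes straight into a normalization layer.
    dlib::disable_duplicative_biases(net);
}
```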
- 06 Sep, 2020 1 commit
  Adrià Arrufat authored
- 05 Sep, 2020 1 commit
  Davis King authored
  Now the user doesn't have to supply a visitor capable of visiting all layers; it only needs to handle the ones they are interested in. Also added visit_computational_layers() and visit_computational_layers_range(), since those capture a very common use case more concisely than visit_layers(). That is, users generally want to operate on the computational layers specifically, as those are the stateful layers.
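A minimal sketch of the visitor style this describes, assuming the call-if-callable dispatch the commit implies: the lambda only has to accept the layer types it cares about (here, fc_ layers with 32 outputs), and everything else is skipped:

```cpp
#include <dlib/dnn.h>

using net_type = dlib::loss_multiclass_log<
    dlib::fc<10,
    dlib::relu<dlib::fc<32,
    dlib::input<dlib::matrix<float>>>>>>;

int main()
{
    net_type net;
    // The lambda is only invoked on layers it is callable with, i.e. the
    // fc<32> layer here; fc<10>, relu, and the input layer are silently
    // skipped, so no catch-all overload is needed anymore.
    dlib::visit_computational_layers(net, [](dlib::fc_<32, dlib::FC_HAS_BIAS>& l)
    {
        l.set_learning_rate_multiplier(0.1);
    });
}
```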
- 03 Sep, 2020 1 commit
  Adrià Arrufat authored
  * add visitor to remove bias from the inputs of bn_ layers (closes #2155)
  * remove unused parameter and make documentation clearer
  * remove bias from bn_ layers too and use a better name
  * let the batch norm layers keep their bias, use an even better name
  * be more consistent with impl naming
  * remove default constructor
  * do not use a method, to prevent some errors
  * add disable-bias method to pertinent layers
  * update dcgan example:
    * fix grammar
    * print the number of network parameters, to be able to check that the bias is not allocated
    * at the end, give feedback to the user about what the discriminator thinks about each generated sample
  * fix fc_ logic
  * add documentation
  * add bias_is_disabled methods and update to_xml
  * print use_bias=false when bias is disabled
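A minimal sketch of the per-layer API this entry adds (disable_bias() and bias_is_disabled(), per the bullets above), called by hand on a convolution whose bias the following bn_con makes redundant; the net is an arbitrary stand-in:

```cpp
#include <dlib/dnn.h>
#include <iostream>

using net_type = dlib::loss_binary_log<
    dlib::fc<1,
    dlib::relu<dlib::bn_con<dlib::con<16, 3, 3, 1, 1,
    dlib::input<dlib::matrix<float>>>>>>>;

int main()
{
    net_type net;
    // Layer indices count down from the loss: fc=0, relu=1, bn_con=2, con=3.
    auto& conv = dlib::layer<3>(net).layer_details();
    conv.disable_bias();  // the bn_con above already supplies a bias term
    std::cout << std::boolalpha << conv.bias_is_disabled() << '\n';  // true
}
```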
- 27 Apr, 2020 1 commit
  Adrià Arrufat authored
- 22 Apr, 2020 1 commit
  ncoder-1 authored
  Changed C-style cast to static_cast.
- 04 Apr, 2020 1 commit
  Davis King authored
- 02 Apr, 2020 1 commit
  Adrià Arrufat authored
  * Remove outdated comment
    That comment was there from when I was using a dnn_trainer to train the discriminator network.
  * Fix case
- 31 Mar, 2020 2 commits
  Adrià Arrufat authored
  * fix some warnings when running tests
  * revert changes in CMakeLists.txt
  * update example to make use of newly promoted method
  * update tests to make use of newly promoted methods
  Adrià Arrufat authored
- 29 Mar, 2020 2 commits
  Davis King authored
  Promote some of the sub-network methods into the add_loss_layer interface so users don't have to write .subnet() so often.
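A minimal sketch of my reading of that promotion: loss-level calls that previously had to go through .subnet(). The exact set of promoted methods lives in dlib/dnn/core.h; compute_loss, back_propagate_error, and get_final_data_gradient are the ones sketched here, on a toy net:

```cpp
#include <dlib/dnn.h>
#include <iostream>

using net_type = dlib::loss_binary_log<
    dlib::fc<1,
    dlib::input<dlib::matrix<float>>>>;

int main()
{
    net_type net;
    std::vector<dlib::matrix<float>> x(2, dlib::matrix<float>(8, 1));
    for (auto& m : x) m = 0;
    const std::vector<float> labels = {1.f, -1.f};

    dlib::resizable_tensor t;
    net.to_tensor(x.begin(), x.end(), t);

    // These used to require going through net.subnet() around the loss layer.
    const double loss = net.compute_loss(t, labels.begin());
    net.back_propagate_error(t);
    const dlib::tensor& grad = net.get_final_data_gradient();
    std::cout << loss << ' ' << grad.num_samples() << '\n';
}
```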
  Adrià Arrufat authored
  * wip: dcgan-example (x2)
  * update example to use leaky_relu and remove bias from the net
  * wip
  * it works!
  * add more comments
  * add visualization code
  * add example documentation
  * rename example
  * fix comment; better comment format
  * fix the noise generator seed
  * add message to hit enter for image generation
  * fix srand, too
  * add std::vector overload to update_parameters
  * improve training stability
  * better naming of variables; make sure it is clear we update the generator with the discriminator's gradient using fake samples and true labels
  * fix comment: generator -> discriminator
  * update leaky_relu docs to match the relu ones
  * replace not with !
  * add Davis' suggestions to make training more stable
  * use tensor instead of resizable_tensor
  * do not use dnn_trainer for the discriminator
- 10 Mar, 2020 1 commit
  Adrià Arrufat authored
  * simplify definition by reusing struct template parameter
  * put resnet into its own namespace
  * fix infer names
  * rename struct impl to def
- 07 Feb, 2020 2 commits
  Davis King authored
  Adrià Arrufat authored
  * Add dnn_introduction3_ex
- 20 Jan, 2020 1 commit
  Juha Reunanen authored
  * Add new loss layer for binary loss per pixel
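A minimal sketch of a network using the loss this adds, under its dlib name loss_binary_log_per_pixel; the body of the net is an arbitrary stand-in, and training labels are matrix<float> images:

```cpp
#include <dlib/dnn.h>

// A tiny fully-convolutional net ending in one channel, as the per-pixel
// binary loss expects.
using net_type = dlib::loss_binary_log_per_pixel<
    dlib::con<1, 1, 1, 1, 1,
    dlib::relu<dlib::con<16, 3, 3, 1, 1,
    dlib::input<dlib::matrix<dlib::rgb_pixel>>>>>>;

int main()
{
    net_type net;
    dlib::dnn_trainer<net_type> trainer(net);
    // train as usual, e.g.:
    // trainer.train_one_step(images, label_images);
}
```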
- 28 Nov, 2019 1 commit
  Davis King authored
- 15 Nov, 2019 1 commit
  Juha Reunanen authored
  * Add instance segmentation example: first version of training code
  * Add MMOD options; get rid of the cache approach and instead load all MMOD rects upfront
  * Improve console output
  * Set filter count
  * Minor tweaking
  * Inference: first version, at least compiles!
  * Ignore overlapped boxes
  * Ignore even small instances
  * Set overlaps_ignore
  * Add TODO remarks
  * Revert "Set overlaps_ignore"
    This reverts commit 65adeff1f89af62b10c691e7aa86c04fc358d03e.
  * Set result size
  * Set label image size
  * Take ignore-color into account
  * Fix the cropping rect's aspect ratio; also slightly expand the rect
  * Draw the largest findings last
  * Improve masking of the current instance
  * Add some perturbation to the inputs
  * Simplify ground-truth reading; fix random cropping
  * Read even class labels
  * Tweak default minibatch size
  * Learn only one class
  * Really train only instances of the selected class
  * Remove outdated TODO remark
  * Automatically skip images with no detections
  * Print to console what was found
  * Fix class index problem
  * Fix indentation
  * Allow choosing multiple classes
  * Draw rect in the color of the corresponding class
  * Write detector window classes to ostream; also group detection windows by class (when ostreaming)
  * Train a separate instance segmentation network for each class label
  * Use a separate synchronization file for each seg net of each class
  * Allow more overlap
  * Fix sorting criterion
  * Fix interpolating the predicted mask
  * Improve bilinear interpolation: if the output type is an integer, round instead of truncating
  * Add helpful comments
  * Ignore large aspect ratios; refactor the code; tweak some network parameters
  * Simplify the segmentation network structure; make the object detection network more complex in turn
  * Problem: CUDA errors not reported properly to console
    Solution: stop and join data loader threads even in case of exceptions
  * Minor parameters tweaking
  * Loss may have increased, even if prob_loss_increasing_thresh > prob_loss_increasing_thresh_max_value
  * Add previous_loss_values_dump_amount to previous_loss_values.size() when deciding if loss has been increasing
  * Improve behaviour when loss actually increased after disk sync
  * Revert some of the earlier change
  * Disregard dumped loss values only when deciding if the learning rate should be shrunk, but *not* when deciding if the loss has been going up since the last disk sync
  * Revert "Revert some of the earlier change"
    This reverts commit 6c852124efe6473a5c962de0091709129d6fcde3.
  * Keep enough previous loss values, until the disk sync
  * Fix maintaining the dumped (now "effectively disregarded") loss values count
  * Detect cats instead of aeroplanes
  * Add helpful logging
  * Clarify the intention and the code
  * Review fixes
  * Add operator== for the other pixel types as well; remove the inline
  * If available, use constexpr if
  * Revert "If available, use constexpr if"
    This reverts commit 503d4dd3355ff8ad613116e3ffcc0fa664674f69.
  * Simplify code as per review comments
  * Keep estimating steps_without_progress, even if steps_since_last_learning_rate_shrink < iter_without_progress_thresh
  * Revert "Keep estimating steps_without_progress, even if steps_since_last_learning_rate_shrink < iter_without_progress_thresh"
    This reverts commit 9191ebc7762d17d81cdfc334a80ca9a667365740.
  * To keep the changes to a bare minimum, revert the steps_since_last_learning_rate_shrink change after all (at least for now)
  * Even empty out some of the previous test loss values
  * Minor review fixes
  * Can't use C++14 features here
  * Do not use the struct name as a variable name
- 25 Oct, 2019 1 commit
  Davis King authored
- 24 Oct, 2019 1 commit
  Davis King authored
- 27 Jul, 2019 1 commit
  Davis King authored
- 04 Mar, 2019 1 commit
  Davis King authored
- 06 Jan, 2019 1 commit
  Juha Reunanen authored
  * Add concat_prev layer, and U-net example for semantic segmentation
  * Allow supplying the mini-batch size as a command-line parameter
  * Decrease default mini-batch size from 30 to 24
  * Resize t1, if needed
  * Use DenseNet-style blocks instead of residual learning
  * Increase default mini-batch size to 50
  * Increase default mini-batch size from 50 to 60
  * Resize even during the backward step, if needed
  * Use resize_bilinear_gradient for the backward step
  * Fix function call ambiguity problem
  * Clear destination before adding gradient
  * Works OK-ish
  * Add more U-tags
  * Tweak default mini-batch size
  * Define a simpler network when using the Microsoft Visual C++ compiler; clean up the DenseNet stuff (leaving it for a later PR)
  * Decrease default mini-batch size from 24 to 23
  * Define a separate dnn filename for MSVC++ and others
  * Add documentation for the resize_to_prev layer; move the implementation so that it comes after mult_prev
  * Fix previous typo
  * Minor formatting changes
  * Reverse the ordering of levels
  * Increase the learning-rate stopping criterion back to 1e-4 (was 1e-8)
  * Use more U-tags even on Windows
  * Minor formatting
  * Latest MSVC 2017 builds fast, so there's no need to limit the depth any longer
  * Tweak default mini-batch size again
  * Even though the latest MSVC can now build the extra layers, it does not mean we should add them!
  * Fix naming
- 01 Mar, 2018 1 commit
  Juha Reunanen authored
  * Problem: integer overflow when calculating sizes (may happen e.g. with very large images)
    Solution: change some types from (unsigned) long to size_t (conflicts: dlib/dnn/tensor.h)
  * Fix the fact that std::numeric_limits<unsigned long>::max() isn't always the same number
  * Revert serialization changes
  * Review fix: use long long instead of size_t
  * From long to long long all the way
  * Change more types to (hopefully) make the compiler happy
  * Change many more types to size_t
  * Change even more types to size_t
  * Minor type changes
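An illustration of the class of bug being fixed (my own, not from the commit), assuming a platform where unsigned long is 32 bits, as on Windows:

```cpp
#include <cstdint>
#include <iostream>

int main()
{
    // Dimensions of a very large image/tensor.
    const unsigned long nr = 70000, nc = 70000;

    // Where unsigned long is 32 bits (e.g. Windows), this product wraps
    // around before it can be stored anywhere wider.
    const unsigned long wrapped = nr * nc;

    // Doing the arithmetic in a 64-bit type, as the commit's type changes
    // do, avoids the overflow.
    const std::uint64_t correct = static_cast<std::uint64_t>(nr) * nc;

    std::cout << wrapped << " vs " << correct << '\n';
}
```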
- 25 Dec, 2017 2 commits
  Davis King authored
  Duc Thien Bui authored
- 18 Dec, 2017 1 commit
  Davis King authored
- 17 Dec, 2017 2 commits
  Davis King authored
  …the code, but it helps Visual Studio use less RAM when building the example, and might make AppVeyor not crash. It's also a slightly cleaner way to write the code anyway.
  Davis King authored
- 11 Dec, 2017 1 commit
  Davis King authored
- 08 Dec, 2017 1 commit
  visionworkz authored
  * Exposed jitter_image in Python and added an example
  * Return NumPy array directly
  * Require numpy during setup
  * Added install of NumPy before builds
  * Changed pip install to per-user only, due to security issues
  * Removed malloc
  * Made presence of NumPy during compile optional
  * Conflict
  * Refactored get_face_chip/get_face_chips to use NumPy as well
- 02 Dec, 2017 1 commit
  Davis King authored