1. 14 Nov, 2021 1 commit
  2. 06 Nov, 2021 2 commits
    • Adrià Arrufat
      Replace sgd-based fc classifier with svm_multiclass_linear_trainer (#2452) · 5091e9c8
      Adrià Arrufat authored
      
      
      * Replace fc classifier with svm_multiclass_linear_trainer
      
      * Mention find_max_global()
      Co-authored-by: Davis E. King <davis@dlib.net>
      
      * Use double instead of float for extracted features
      Co-authored-by: Davis E. King <davis@dlib.net>
      
      * fix compilation with double features
      
      * Revert "fix compilation with double features"
      
      This reverts commit 76ebab4b91ed31d2332206fe8de092043c0f687f.
      
      * Revert "Use double instead of float for extracted features"
      
      This reverts commit 9a50809ebf0f420e72a3c2b4b856dc1a71b9c6b3.
      
      * Find best C using global optimization
      Co-authored-by: Davis E. King <davis@dlib.net>
      5091e9c8
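The last step of this commit picks the best SVM C parameter with global optimization. As a toy illustration of that idea (dlib's real tool is find_max_global(), a much smarter global optimizer; the cv_accuracy callback below is a hypothetical stand-in for running cross-validation with svm_multiclass_linear_trainer):

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Toy sketch: choose the SVM's C by maximizing a cross-validation score over
// log10(C).  This coarse grid scan only illustrates the idea; dlib's
// find_max_global() does the real work far more efficiently.
double best_log10_C(const std::function<double(double)>& cv_accuracy,
                    double lo, double hi, int steps)
{
    double best_x = lo, best_y = cv_accuracy(lo);
    for (int i = 1; i <= steps; ++i)
    {
        const double x = lo + (hi - lo) * i / steps;
        const double y = cv_accuracy(x);
        if (y > best_y) { best_y = y; best_x = x; }
    }
    return best_x;  // the caller would then train with C = pow(10, best_x)
}
```

Searching in log space matters because useful values of C typically span several orders of magnitude.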
    • pfeatherstone
      [INVOKE] C++11 backport of std::invoke, std::invoke_result, std::apply and... · f77189db
      pfeatherstone authored
      
      [INVOKE] C++11 backport of std::invoke, std::invoke_result, std::apply and std::make_from_tuple (#2450)
      
      * added backport of std::invoke, std::invoke_result and std::apply
      
      * added backport of std::invoke, std::invoke_result and std::apply
      
      * msvc doesn't like keyword 'not'
      
      * i think this fixes detection of invoke on MSVC
      
      * ok, i think detection of invoke stuff is fixed on windows
      
      * - just have dlib's own implementation and don't use standard library even if c++17 is enabled.
      - added tests for dlib::invoke_result_t
      
      * added docs
      
      * - added dlib::make_from_tuple
      - added tests + docs
      
      * - make sure you use the dlib:: namespace. Otherwise, when compiling with C++17, compiler might get confused
      - use remove_reference instead of decay. That's what the standard says to use
      
      * added dlib::is_invocable
      
      * - defined invoke_traits. This removes duplicate code.
      - This makes absolutely no difference but is just a tiny bit nicer.
      
      * removed the test that could potentially fail with MSVC
      Co-authored-by: pfeatherstone <peter@me>
      f77189db
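The core trick behind a C++11 std::invoke backport like this one is a pair of SFINAE-selected overloads: one for pointers to member functions, one for everything else callable. A minimal self-contained sketch (the real dlib::invoke also handles member data pointers, reference_wrapper, and more; invoke11 and Adder are illustrative names, not dlib's):

```cpp
#include <cassert>
#include <utility>

// Overload 1: pointer to member function, invoked on an object reference.
// The trailing decltype makes this overload vanish (SFINAE) when the call
// expression is ill-formed.
template <typename F, typename Obj, typename... Args>
auto invoke11(F Obj::*pmf, Obj& obj, Args&&... args)
    -> decltype((obj.*pmf)(std::forward<Args>(args)...))
{
    return (obj.*pmf)(std::forward<Args>(args)...);
}

// Overload 2: anything callable with operator() (functions, lambdas, functors).
template <typename F, typename... Args>
auto invoke11(F&& f, Args&&... args)
    -> decltype(std::forward<F>(f)(std::forward<Args>(args)...))
{
    return std::forward<F>(f)(std::forward<Args>(args)...);
}

struct Adder
{
    int base;
    int add(int x) { return base + x; }
};
```

Because C++11 lacks decltype(auto), the return type must be spelled out with a trailing decltype of the exact call expression, which is also what gives the SFINAE behavior for free.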
  3. 02 Nov, 2021 2 commits
  4. 30 Oct, 2021 2 commits
    • Davis King
      We have some excessive and duplicative tests in the travis-ci setup. · a41b3d7c
      Davis King authored
      This is causing us to run out of travis-ci credits, making tests not run
      at all.  I deleted the duplicative tests and then disabled two additional
      ones by commenting them out that would be nice to run but I think are
      not essential.  In particular, the OSX one eats up a ton of credits.  So
      I disabled that.  Maybe we can turn it back on later if we end up well
      under the credit budget (or switch to github actions which appears to
      have higher limits)
      a41b3d7c
    • Adrià Arrufat
      Add dnn self supervised learning example (#2434) · 2e8bac19
      Adrià Arrufat authored
      
      
      * wip: loss goes down when training without a dnn_trainer
      
      if I use a dnn_trainer, it segfaults (also with bigger batch sizes...)
      
      * remove commented code
      
      * fix gradient computation (hopefully)
      
      * fix loss computation
      
      * fix crash in input_rgb_image_pair::to_tensor
      
      * fix alias tensor offset
      
      * refactor loss and input layers and complete the example
      
      * add more data augmentation
      
      * add documentation
      
      * add documentation
      
      * small fix in the gradient computation and reuse terms
      
      * fix warning in comment
      
      * use tensor_tools instead of matrix to compute the gradients
      
      * complete the example program
      
      * add support for multi-gpu
      
      * Update dlib/dnn/input_abstract.h
      
      * Update dlib/dnn/input_abstract.h
      
      * Update dlib/dnn/loss_abstract.h
      
      * Update examples/dnn_self_supervised_learning_ex.cpp
      
      * Update examples/dnn_self_supervised_learning_ex.cpp
      
      * Update examples/dnn_self_supervised_learning_ex.cpp
      
      * Update examples/dnn_self_supervised_learning_ex.cpp
      
      * [TYPE_SAFE_UNION] upgrade (#2443)
      
      * [TYPE_SAFE_UNION] upgrade
      
      * MSVC doesn't like keyword not
      
      * MSVC doesn't like keyword and
      
      * added tests for emplace(), copy semantics, move semantics, swap, overloaded and apply_to_contents with non-void return types
      
      * - didn't need is_void anymore
      - added result_of_t
      - didn't really need ostream_helper or istream_helper
      - split apply_to_contents into apply_to_contents (return void) and visit (return anything so long as visitor is publicly accessible)
      
      * - updated abstract file
      
      * - added get_type_t
      - removed deserialize_helper duplicate
      - don't use std::decay_t, that's c++14
      
      * - removed white spaces
      - don't need a return-statement when calling apply_to_contents_impl()
      - use unchecked_get() whenever possible to minimise explicit use of pointer casting. let's keep that to a minimum
      
      * - added type_safe_union_size
      - added type_safe_union_size_v if C++14 is available
      - added tests for above
      
      * - test type_safe_union_size_v
      
      * testing nested unions with visitors.
      
      * re-added comment
      
      * added index() in abstract file
      
      * - refactored reset() to clear()
      - added comment about clear() in abstract file
      - in deserialize(), only reset the object if necessary
      
      * - removed unnecessary comment about exceptions
      - removed unnecessary // -------------
      - struct is_valid is not mentioned in abstract. Instead of requiring T to be a valid type, it is ensured!
      - get_type and get_type_t are private. Client code shouldn't need this.
      - shuffled some functions around
      - type_safe_union_size and type_safe_union_size_v are removed. not needed
      - reset() -> clear()
      - bug fix in deserialize() index counts from 1, not 0
      - improved the abstract file
      
      * refactored index() to get_current_type_id() as per suggestion
      
      * maybe slightly improved docs
      
      * - HURRAY, don't need std::result_of or std::invoke_result for visit() to work. Just privately define your own type trait, in this case called return_type and return_type_t. it works!
      - apply_to_contents() now always calls visit()
      
      * example with private visitor using friendship with non-void return types.
      
      * Fix up contracts
      
      It can't be a post condition that T is a valid type, since the choice of T is up to the caller, it's not something these functions decide.  Making it a precondition.
      
      * Update dlib/type_safe_union/type_safe_union_kernel_abstract.h
      
      * Update dlib/type_safe_union/type_safe_union_kernel_abstract.h
      
      * Update dlib/type_safe_union/type_safe_union_kernel_abstract.h
      
      * - added more tests for copy constructors/assignments, move constructors/assignments, and converting constructors/assignments
      - helper_copy -> helper_forward
      - added validate_type<T> in a couple of places
      
      * - helper_move only takes non-const lvalue references. So we are not using std::move with universal references!
      - use enable_if<is_valid<T>> in favor of validate_type<T>()
      
      * - use enable_if<is_valid<T>> in favor of validate_type<T>()
      
      * - added is_valid_check<>. This wraps enable_if<is_valid<T>,bool> and makes use of SFINAE more robust
      Co-authored-by: pfeatherstone <peter@me>
      Co-authored-by: pf <pf@me>
      Co-authored-by: Davis E. King <davis685@gmail.com>
      
      * Just minor cleanup of docs and renamed some stuff, tweaked formatting.
      
      * fix spelling error
      
      * fix most vexing parse error
      Co-authored-by: Davis E. King <davis@dlib.net>
      Co-authored-by: pfeatherstone <45853521+pfeatherstone@users.noreply.github.com>
      Co-authored-by: pfeatherstone <peter@me>
      Co-authored-by: pf <pf@me>
      Co-authored-by: Davis E. King <davis685@gmail.com>
      2e8bac19
  5. 29 Oct, 2021 1 commit
  6. 28 Oct, 2021 2 commits
    • Davis King
    • pfeatherstone
      [TYPE_SAFE_UNION] upgrade (#2443) · 2b8f9e40
      pfeatherstone authored
      
      
      * [TYPE_SAFE_UNION] upgrade
      
      * MSVC doesn't like keyword not
      
      * MSVC doesn't like keyword and
      
      * added tests for emplace(), copy semantics, move semantics, swap, overloaded and apply_to_contents with non-void return types
      
      * - didn't need is_void anymore
      - added result_of_t
      - didn't really need ostream_helper or istream_helper
      - split apply_to_contents into apply_to_contents (return void) and visit (return anything so long as visitor is publicly accessible)
      
      * - updated abstract file
      
      * - added get_type_t
      - removed deserialize_helper duplicate
      - don't use std::decay_t, that's c++14
      
      * - removed white spaces
      - don't need a return-statement when calling apply_to_contents_impl()
      - use unchecked_get() whenever possible to minimise explicit use of pointer casting. let's keep that to a minimum
      
      * - added type_safe_union_size
      - added type_safe_union_size_v if C++14 is available
      - added tests for above
      
      * - test type_safe_union_size_v
      
      * testing nested unions with visitors.
      
      * re-added comment
      
      * added index() in abstract file
      
      * - refactored reset() to clear()
      - added comment about clear() in abstract file
      - in deserialize(), only reset the object if necessary
      
      * - removed unnecessary comment about exceptions
      - removed unnecessary // -------------
      - struct is_valid is not mentioned in abstract. Instead of requiring T to be a valid type, it is ensured!
      - get_type and get_type_t are private. Client code shouldn't need this.
      - shuffled some functions around
      - type_safe_union_size and type_safe_union_size_v are removed. not needed
      - reset() -> clear()
      - bug fix in deserialize() index counts from 1, not 0
      - improved the abstract file
      
      * refactored index() to get_current_type_id() as per suggestion
      
      * maybe slightly improved docs
      
      * - HURRAY, don't need std::result_of or std::invoke_result for visit() to work. Just privately define your own type trait, in this case called return_type and return_type_t. it works!
      - apply_to_contents() now always calls visit()
      
      * example with private visitor using friendship with non-void return types.
      
      * Fix up contracts
      
      It can't be a post condition that T is a valid type, since the choice of T is up to the caller, it's not something these functions decide.  Making it a precondition.
      
      * Update dlib/type_safe_union/type_safe_union_kernel_abstract.h
      
      * Update dlib/type_safe_union/type_safe_union_kernel_abstract.h
      
      * Update dlib/type_safe_union/type_safe_union_kernel_abstract.h
      
      * - added more tests for copy constructors/assignments, move constructors/assignments, and converting constructors/assignments
      - helper_copy -> helper_forward
      - added validate_type<T> in a couple of places
      
      * - helper_move only takes non-const lvalue references. So we are not using std::move with universal references!
      - use enable_if<is_valid<T>> in favor of validate_type<T>()
      
      * - use enable_if<is_valid<T>> in favor of validate_type<T>()
      
      * - added is_valid_check<>. This wraps enable_if<is_valid<T>,bool> and makes use of SFINAE more robust
      Co-authored-by: pfeatherstone <peter@me>
      Co-authored-by: pf <pf@me>
      Co-authored-by: Davis E. King <davis685@gmail.com>
      2b8f9e40
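The headline change in this upgrade is splitting apply_to_contents (void return) from visit (any return type), plus renaming index() to get_current_type_id(). A toy two-type class showing just that interface shape (the real dlib::type_safe_union holds one of N types in aligned storage; toy_union and describe below are illustrative names, not dlib's):

```cpp
#include <cassert>
#include <string>

// Two-type stand-in for a type-safe union.  get_current_type_id() reports
// which type is held, and visit() forwards the held object to a visitor
// whose operator() overloads may return a (non-void) value.
class toy_union
{
    int type_id_ = 0;  // 0 = empty, 1 = int, 2 = std::string
    int i_ = 0;
    std::string s_;
public:
    void set(int v)                { clear(); i_ = v; type_id_ = 1; }
    void set(const std::string& v) { clear(); s_ = v; type_id_ = 2; }
    void clear()                   { s_.clear(); type_id_ = 0; }
    int get_current_type_id() const { return type_id_; }

    // The return type is deduced from the visitor, so visitors can return
    // values -- the point of the visit()/apply_to_contents() split.
    template <typename Visitor>
    auto visit(Visitor&& v) -> decltype(v(int{}))
    {
        return type_id_ == 2 ? v(s_) : v(i_);
    }
};

// A visitor with overloads for every type the union can hold.
struct describe
{
    std::string operator()(int)                { return "int"; }
    std::string operator()(const std::string&) { return "string"; }
};
```

In the real class the visitor must cover every type in the union's template parameter list, which the compiler enforces at the call site.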
  7. 13 Oct, 2021 1 commit
  8. 11 Oct, 2021 1 commit
    • Adrià Arrufat
      Add support for fused convolutions (#2294) · adca7472
      Adrià Arrufat authored
      * add helper methods to implement fused convolutions
      
      * fix grammar
      
      * add method to disable affine layer and updated serialization
      
      * add documentation for .disable()
      
      * add fuse_convolutions visitor and documentation
      
      * update docs: net is not constant
      
      * fix xml formatting and use std::boolalpha
      
      * fix warning and updated net requirement for visitor
      
      * fix segfault in fuse_convolutions visitor
      
      * copy unconditionally
      
      * make the visitor class a friend of the con_ class
      
      * setup the biases alias tensor after enabling bias
      
      * simplify visitor a bit
      
      * fix comment
      
      * setup the biases size, somehow this got lost
      
      * copy the parameters before resizing
      
      * remove enable_bias() method, since the visitor is now a friend
      
      * Revert "remove enable_bias() method, since the visitor is now a friend"
      
      This reverts commit 35b92b16316f19a7f1f1b1313c9ab874f4d6199b.
      
      * update the visitor to remove the friend requirement
      
      * improve behavior of enable_bias
      
      * better describe the behavior of enable_bias
      
      * wip: use cudnnConvolutionBiasActivationForward when activation has bias
      
      * wip: fix cpu compilation
      
      * WIP: not working fused ReLU
      
      * WIP: forgot to disable ReLU in visitor (does not change the fact that it does not work)
      
      * WIP: more general set of 4d tensor (still not working)
      
      * fused convolutions seem to be working now, more testing needed
      
      * move visitor to the bottom of the file
      
      * fix CPU-side and code clean up
      
      * Do not try to fuse the activation layers
      
      Fusing the activation layers in one cuDNN call is only supported when using
      the cuDNN ones (ReLU, Sigmoid, TanH...) which might lead to surprising
      behavior. So, let's just fuse the batch norm and the convolution into one
      cuDNN call using the IDENTITY activation function.
      
      * Set the correct forward algorithm for the identity activation
      
      Ref: https://docs.nvidia.com/deeplearning/cudnn/api/index.html#cudnnConvolutionBiasActivationForward
      
      
      
      * move the affine alias template to its original position
      
      * wip
      
      * remove unused param in relu and simplify example (I will delete it before merge)
      
      * simplify conv bias logic and fix deserialization issue
      
      * fix enabling bias on convolutions
      
      * remove test example
      
      * fix typo
      
      * update documentation
      
      * update documentation
      
      * remove ccache leftovers from CMakeLists.txt
      
      * Re-add new line
      
      * fix enable/disable bias on unallocated networks
      
      * update comment to mention cudnnConvolutionBiasActivationForward
      
      * fix typo
      Co-authored-by: Davis E. King <davis@dlib.net>
      
      * Apply documentation suggestions from code review
      Co-authored-by: Davis E. King <davis@dlib.net>
      
      * update affine docs to talk in terms of gamma and beta
      
      * simplify tensor_conv interface
      
      * fix tensor_conv operator() with biases
      
      * add fuse_layers test
      
      * add an example on how to use the fuse_layers function
      
      * fix typo
      Co-authored-by: Davis E. King <davis@dlib.net>
      adca7472
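The fusion this PR implements rests on simple per-channel arithmetic: an affine layer computes y = gamma * x + beta, so a convolution followed by an affine layer folds into a single convolution with rescaled filters and biases. A self-contained sketch of that arithmetic, outside of dlib/cuDNN (names here are illustrative, not dlib's):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Filter weights and bias for one output channel of a convolution.
struct channel_params { std::vector<double> w; double b; };

// Fold a per-channel affine (gamma, beta) into the conv parameters:
//   w' = gamma * w
//   b' = gamma * b + beta
channel_params fuse_affine_into_conv(channel_params p, double gamma, double beta)
{
    for (auto& wi : p.w) wi *= gamma;
    p.b = gamma * p.b + beta;
    return p;
}

// One output value of the original conv -> affine pipeline.
double conv_then_affine(const channel_params& p, const std::vector<double>& x,
                        double gamma, double beta)
{
    double acc = p.b;
    for (std::size_t i = 0; i < p.w.size(); ++i) acc += p.w[i] * x[i];
    return gamma * acc + beta;
}

// One output value of the fused convolution alone.
double conv_only(const channel_params& p, const std::vector<double>& x)
{
    double acc = p.b;
    for (std::size_t i = 0; i < p.w.size(); ++i) acc += p.w[i] * x[i];
    return acc;
}
```

The fused network then does the whole thing in one cuDNN call (cudnnConvolutionBiasActivationForward with the IDENTITY activation), which is where the speedup comes from.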
  9. 27 Sep, 2021 1 commit
    • Adrià Arrufat
      Fix trainer with unsupervised loss (#2436) · 8a2c7442
      Adrià Arrufat authored
      * Don't try to use labels in unsupervised losses
      
      I hope that is the right way of fixing this...
      
      * fix it by duplicating most code in send_job (works on my machine)
      
      I will probably need to find a way to reuse the code
      
      * try to fix it reusing the code... not sure though
      
      * Revert "try to fix it reusing the code... not sure though"
      
      This reverts commit f308cac6df712da3619fb05b14f3345f0ec07b9a.
      
      * check the type of the training label to fix the issue instead
      8a2c7442
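The final fix checks the training label type at compile time so unsupervised losses never touch labels. A minimal sketch of that shape (no_label_type and job_uses_labels are illustrative names for this sketch, not dlib's actual trainer internals):

```cpp
#include <cassert>
#include <type_traits>

// Tag type a loss layer can expose as its training_label_type to mark
// itself as unsupervised.
struct no_label_type {};

// Decide, from the loss's declared label type, whether a training job
// should carry labels at all.
template <typename label_type>
bool job_uses_labels()
{
    return !std::is_same<label_type, no_label_type>::value;
}
```

Dispatching on the type rather than duplicating send_job() keeps one code path for both supervised and unsupervised losses.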
  10. 25 Sep, 2021 2 commits
  11. 23 Sep, 2021 3 commits
  12. 15 Sep, 2021 1 commit
  13. 13 Sep, 2021 1 commit
  14. 10 Sep, 2021 1 commit
  15. 19 Aug, 2021 2 commits
  16. 15 Aug, 2021 1 commit
  17. 14 Aug, 2021 2 commits
  18. 06 Aug, 2021 1 commit
  19. 05 Aug, 2021 8 commits
  20. 04 Aug, 2021 1 commit
  21. 30 Jul, 2021 1 commit
  22. 27 Jul, 2021 1 commit
  23. 22 Jul, 2021 1 commit
  24. 16 Jul, 2021 1 commit