1. 30 Oct, 2021 1 commit
    • Add dnn self supervised learning example (#2434) · 2e8bac19
      Adrià Arrufat authored
      
      
      * wip: loss goes down when training without a dnn_trainer
      
      if I use a dnn_trainer, it segfaults (also with bigger batch sizes...)
      
      * remove commented code
      
      * fix gradient computation (hopefully)
      
      * fix loss computation
      
      * fix crash in input_rgb_image_pair::to_tensor
      
      * fix alias tensor offset
      
      * refactor loss and input layers and complete the example
      
      * add more data augmentation
      
      * add documentation
      
      * add documentation
      
      * small fix in the gradient computation and reuse terms
      
      * fix warning in comment
      
      * use tensor_tools instead of matrix to compute the gradients
      
      * complete the example program
      
      * add support for multi-GPU
      
      * Update dlib/dnn/input_abstract.h
      
      * Update dlib/dnn/input_abstract.h
      
      * Update dlib/dnn/loss_abstract.h
      
      * Update examples/dnn_self_supervised_learning_ex.cpp
      
      * Update examples/dnn_self_supervised_learning_ex.cpp
      
      * Update examples/dnn_self_supervised_learning_ex.cpp
      
      * Update examples/dnn_self_supervised_learning_ex.cpp
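
      For reference, the loss this example ends up training is a Barlow Twins style objective: the cross-correlation matrix between the embeddings of two augmented views of the same images is pushed towards the identity. The sketch below only illustrates that formula with dlib::matrix; it assumes the embeddings are already batch-normalized and the lambda value is merely illustrative. The commit itself computes the loss and its gradients with tensor_tools rather than dlib::matrix (see the notes above).

      #include <dlib/matrix.h>

      // za, zb: N x D embedding matrices, one row per sample, for the two augmented
      // views of the same batch; columns are assumed to be batch-normalized already.
      double barlow_twins_loss_sketch(
          const dlib::matrix<double>& za,
          const dlib::matrix<double>& zb,
          const double lambda = 0.005  // illustrative weight for the off-diagonal term
      )
      {
          // empirical D x D cross-correlation between the two views
          const dlib::matrix<double> c = dlib::trans(za) * zb / static_cast<double>(za.nr());
          // invariance term: the diagonal of c should be close to 1
          const dlib::matrix<double> d = dlib::diag(c) - dlib::ones_matrix<double>(c.nr(), 1);
          // redundancy reduction term: the off-diagonal of c should be close to 0
          const dlib::matrix<double> off = c - dlib::diagm(dlib::diag(c));
          return dlib::sum(dlib::pointwise_multiply(d, d))
               + lambda * dlib::sum(dlib::pointwise_multiply(off, off));
      }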
      
      * [TYPE_SAFE_UNION] upgrade (#2443)
      
      * [TYPE_SAFE_UNION] upgrade
      
      * MSVC doesn't like keyword not
      
      * MSVC doesn't like keyword and
      
      * added tests for emplace(), copy semantics, move semantics, swap, overloaded, and apply_to_contents with non-void return types
      
      * - didn't need is_void anymore
      - added result_of_t
      - didn't really need ostream_helper or istream_helper
      - split apply_to_contents into apply_to_contents (return void) and visit (return anything so long as visitor is publicly accessible)
      
      * - updated abstract file
      
      * - added get_type_t
      - removed deserialize_helper duplicate
      - don't use std::decay_t, that's C++14
      
      * - removed whitespace
      - don't need a return statement when calling apply_to_contents_impl()
      - use unchecked_get() whenever possible to minimise explicit use of pointer casting. Let's keep that to a minimum
      
      * - added type_safe_union_size
      - added type_safe_union_size_v if C++14 is available
      - added tests for above
      
      * - test type_safe_union_size_v
      
      * testing nested unions with visitors.
      
      * re-added comment
      
      * added index() in abstract file
      
      * - refactored reset() to clear()
      - added comment about clear() in abstract file
      - in deserialize(), only reset the object if necessary
      
      * - removed unnecessary comment about exceptions
      - removed unnecessary // -------------
      - struct is_valid is not mentioned in the abstract. Instead of requiring T to be a valid type, it is ensured!
      - get_type and get_type_t are private. Client code shouldn't need this.
      - shuffled some functions around
      - type_safe_union_size and type_safe_union_size_v are removed. not needed
      - reset() -> clear()
      - bug fix in deserialize(): index counts from 1, not 0
      - improved the abstract file
      
      * refactored index() to get_current_type_id() as per suggestion
      
      * maybe slightly improved docs
      
      * - HURRAY, don't need std::result_of or std::invoke_result for visit() to work. Just privately define your own type trait, in this case called return_type and return_type_t. It works!
      - apply_to_contents() now always calls visit()
      
      * example with private visitor using friendship with non-void return types.
      
      * Fix up contracts
      
      It can't be a postcondition that T is a valid type, since the choice of T is up to the caller; it's not something these functions decide. Making it a precondition.
      
      * Update dlib/type_safe_union/type_safe_union_kernel_abstract.h
      
      * Update dlib/type_safe_union/type_safe_union_kernel_abstract.h
      
      * Update dlib/type_safe_union/type_safe_union_kernel_abstract.h
      
      * - added more tests for copy constructors/assignments, move constructors/assignments, and converting constructors/assignments
      - helper_copy -> helper_forward
      - added validate_type<T> in a couple of places
      
      * - helper_move only takes non-const lvalue references. So we are not using std::move with universal references!
      - use enable_if<is_valid<T>> in favor of validate_type<T>()
      
      * - use enable_if<is_valid<T>> in favor of validate_type<T>()
      
      * - added is_valid_check<>. This wraps enable_if<is_valid<T>,bool> and makes the use of SFINAE more robust
      Co-authored-by: pfeatherstone <peter@me>
      Co-authored-by: pf <pf@me>
      Co-authored-by: Davis E. King <davis685@gmail.com>
      
      * Just minor cleanup of docs and renamed some stuff, tweaked formatting.
      
      * fix spelling error
      
      * fix most vexing parse error
      Co-authored-by: Davis E. King <davis@dlib.net>
      Co-authored-by: pfeatherstone <45853521+pfeatherstone@users.noreply.github.com>
      Co-authored-by: pfeatherstone <peter@me>
      Co-authored-by: pf <pf@me>
      Co-authored-by: Davis E. King <davis685@gmail.com>
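
      To ground the type_safe_union changes above, here is a small usage sketch built only from the pieces named in these commit messages (converting assignment, get_current_type_id(), and the void-returning apply_to_contents()). The new value-returning visit() is omitted here; see type_safe_union_kernel_abstract.h for its final signature.

      #include <dlib/type_safe_union.h>
      #include <iostream>
      #include <string>

      struct print_contents
      {
          void operator()(const int& x)         const { std::cout << "int: " << x << '\n'; }
          void operator()(const std::string& s) const { std::cout << "string: " << s << '\n'; }
      };

      int main()
      {
          dlib::type_safe_union<int, std::string> u;
          u = std::string("hello");  // converting assignment (tested above)
          // id of the currently held type; indices count from 1 (see the deserialize() fix above)
          std::cout << u.get_current_type_id() << '\n';
          print_contents printer;
          u.apply_to_contents(printer);  // void-returning visitation
          return 0;
      }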
  2. 30 Jul, 2021 1 commit
  3. 04 Apr, 2020 1 commit
  4. 29 Mar, 2020 1 commit
    • Add DCGAN example (#2035) · f42f100d
      Adrià Arrufat authored
      * wip: dcgan-example
      
      * wip: dcgan-example
      
      * update example to use leaky_relu and remove bias from net
      
      * wip
      
      * it works!
      
      * add more comments
      
      * add visualization code
      
      * add example documentation
      
      * rename example
      
      * fix comment
      
      * better comment format
      
      * fix the noise generator seed
      
      * add message to hit enter for image generation
      
      * fix srand, too
      
      * add std::vector overload to update_parameters
      
      * improve training stability
      
      * better naming of variables
      
      make sure it is clear we update the generator with the discriminator's
      gradient using fake samples and true labels
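
      In code, that generator step looks roughly like the sketch below (network types, tensor names, and parameters are illustrative, not the example's exact code; it relies on the std::vector overload of update_parameters added earlier in this commit):

      #include <dlib/dnn.h>
      #include <vector>

      template <typename generator_type, typename discriminator_type, typename solver_type>
      void update_generator_sketch(
          generator_type& generator,
          discriminator_type& discriminator,
          std::vector<solver_type>& g_solvers,      // std::vector overload added in this commit
          const dlib::tensor& noises_tensor,        // the noise batch fed to the generator
          const dlib::tensor& fake_samples_tensor,  // the generator's output for that noise
          const std::vector<float>& real_labels,    // "true" labels, on purpose
          double learning_rate
      )
      {
          // How well do the fake samples fool the discriminator when labelled as real?
          discriminator.compute_loss(fake_samples_tensor, real_labels.begin());
          // Back-propagate through the discriminator only to get the gradient with
          // respect to its input; the discriminator itself is not updated here.
          discriminator.back_propagate_error(fake_samples_tensor);
          const dlib::tensor& d_grad = discriminator.get_final_data_gradient();
          // Push that gradient through the generator and update its parameters.
          generator.back_propagate_error(noises_tensor, d_grad);
          generator.update_parameters(g_solvers, learning_rate);
      }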
      
      * fix comment: generator -> discriminator
      
      * update leaky_relu docs to match the relu ones
      
      * replace not with !
      
      * add Davis' suggestions to make training more stable
      
      * use tensor instead of resizable_tensor
      
      * do not use dnn_trainer for discriminator
  5. 07 Feb, 2020 1 commit
  6. 15 Nov, 2019 1 commit
    • Instance segmentation (#1918) · d175c350
      Juha Reunanen authored
      * Add instance segmentation example - first version of training code
      
      * Add MMOD options; get rid of the cache approach, and instead load all MMOD rects upfront
      
      * Improve console output
      
      * Set filter count
      
      * Minor tweaking
      
      * Inference - first version, at least compiles!
      
      * Ignore overlapped boxes
      
      * Ignore even small instances
      
      * Set overlaps_ignore
      
      * Add TODO remarks
      
      * Revert "Set overlaps_ignore"
      
      This reverts commit 65adeff1f89af62b10c691e7aa86c04fc358d03e.
      
      * Set result size
      
      * Set label image size
      
      * Take ignore-color into account
      
      * Fix the cropping rect's aspect ratio; also slightly expand the rect
      
      * Draw the largest findings last
      
      * Improve masking of the current instance
      
      * Add some perturbation to the inputs
      
      * Simplify ground-truth reading; fix random cropping
      
      * Read even the class labels
      
      * Tweak default minibatch size
      
      * Learn only one class
      
      * Really train only instances of the selected class
      
      * Remove outdated TODO remark
      
      * Automatically skip images with no detections
      
      * Print to console what was found
      
      * Fix class index problem
      
      * Fix indentation
      
      * Allow to choose multiple classes
      
      * Draw rect in the color of the corresponding class
      
      * Write detector window classes to ostream; also group detection windows by class (when ostreaming)
      
      * Train a separate instance segmentation network for each class label
      
      * Use separate synchronization file for each seg net of each class
      
      * Allow more overlap
      
      * Fix sorting criterion
      
      * Fix interpolating the predicted mask
      
      * Improve bilinear interpolation: if output type is an integer, round instead of truncating
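
      In other words, something along these lines (an illustrative helper, not dlib's actual implementation, which dispatches on the pixel type):

      #include <cmath>
      #include <type_traits>

      // T is an arithmetic pixel/channel type, e.g. unsigned char or float.
      template <typename T>
      T interpolated_value_to_pixel(const double value)
      {
          return std::is_integral<T>::value
              ? static_cast<T>(std::lround(value))  // e.g. 99.7 -> 100 instead of 99
              : static_cast<T>(value);              // floating point pixels keep the fraction
      }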
      
      * Add helpful comments
      
      * Ignore large aspect ratios; refactor the code; tweak some network parameters
      
      * Simplify the segmentation network structure; make the object detection network more complex in turn
      
      * Problem: CUDA errors not reported properly to console
      Solution: stop and join data loader threads even in case of exceptions
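
      A minimal sketch of that pattern (illustrative names; the example itself also signals its dlib::pipe data loaders to stop, e.g. via disable(), before joining):

      #include <thread>
      #include <vector>

      // Joins the worker threads when the scope is left, normally or via an exception,
      // so the original error (e.g. a CUDA failure) is what gets reported to the console.
      struct join_threads_on_exit
      {
          std::vector<std::thread>& workers;
          ~join_threads_on_exit()
          {
              for (auto& t : workers)
                  if (t.joinable())
                      t.join();
          }
      };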
      
      * Minor parameters tweaking
      
      * Loss may have increased, even if prob_loss_increasing_thresh > prob_loss_increasing_thresh_max_value
      
      * Add previous_loss_values_dump_amount to previous_loss_values.size() when deciding if loss has been increasing
      
      * Improve behaviour when loss actually increased after disk sync
      
      * Revert some of the earlier change
      
      * Disregard dumped loss values only when deciding if learning rate should be shrunk, but *not* when deciding if loss has been going up since last disk sync
      
      * Revert "Revert some of the earlier change"
      
      This reverts commit 6c852124efe6473a5c962de0091709129d6fcde3.
      
      * Keep enough previous loss values, until the disk sync
      
      * Fix maintaining the dumped (now "effectively disregarded") loss values count
      
      * Detect cats instead of aeroplanes
      
      * Add helpful logging
      
      * Clarify the intention and the code
      
      * Review fixes
      
      * Add operator== for the other pixel types as well; remove the inline
      
      * If available, use constexpr if
      
      * Revert "If available, use constexpr if"
      
      This reverts commit 503d4dd3355ff8ad613116e3ffcc0fa664674f69.
      
      * Simplify code as per review comments
      
      * Keep estimating steps_without_progress, even if steps_since_last_learning_rate_shrink < iter_without_progress_thresh
      
      * Clarify console output
      
      * Revert "Keep estimating steps_without_progress, even if steps_since_last_learning_rate_shrink < iter_without_progress_thresh"
      
      This reverts commit 9191ebc7762d17d81cdfc334a80ca9a667365740.
      
      * To keep the changes to a bare minimum, revert the steps_since_last_learning_rate_shrink change after all (at least for now)
      
      * Even empty out some of the previous test loss values
      
      * Minor review fixes
      
      * Can't use C++14 features here
      
      * Do not use the struct name as a variable name
  7. 25 Oct, 2019 1 commit
  8. 24 Oct, 2019 1 commit
  9. 25 Dec, 2017 1 commit
  10. 17 Dec, 2017 1 commit
  11. 11 Dec, 2017 1 commit
  12. 01 Dec, 2017 2 commits
  13. 15 Nov, 2017 1 commit
    • Add semantic segmentation example (#943) · e48125c2
      Juha Reunanen authored
      * Add example of semantic segmentation using the PASCAL VOC2012 dataset
      
      * Add note about Debug Information Format when using MSVC
      
      * Make the upsampling layers residual as well
      
      * Fix declaration order
      
      * Use a wider net
      
      * trainer.set_iterations_without_progress_threshold(5000); // (was 20000)
      
      * Add residual_up
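
      As a rough illustration (not the example's exact residual_up definition), a residual upsampling block can be written in dlib's layer DSL like this: a strided transposed convolution (cont) doubles the spatial resolution, its output is tagged, refined by a small 3x3 convolutional block, and added back to itself via add_prev1.

      #include <dlib/dnn.h>

      // Sketch of a residual upsampling block with N output channels.
      template <int N, typename SUBNET>
      using residual_up_sketch =
          dlib::relu<dlib::add_prev1<
          dlib::bn_con<dlib::con<N, 3, 3, 1, 1,
          dlib::relu<dlib::bn_con<dlib::con<N, 3, 3, 1, 1,
          dlib::tag1<dlib::cont<N, 2, 2, 2, 2,
          SUBNET>>>>>>>>>;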
      
      * Process entire directories of images (just easier to use)
      
      * Simplify network structure so that builds finish even on Visual Studio (faster, or at all)
      
      * Remove the training example from CMakeLists, because it's too much for the 32-bit MSVC++ compiler to handle
      
      * Remove the probably-now-unnecessary set_dnn_prefer_smallest_algorithms call
      
      * Review fix: remove the batch normalization layer from right before the loss
      
      * Review fix: point out that only the Visual C++ compiler has problems.
      Also expand the instructions on how to run MSBuild.exe to circumvent the problems.
      
      * Review fix: use dlib::match_endings
      
      * Review fix: use dlib::join_rows. Also add some comments, and instructions where to download the pre-trained net from.
      
      * Review fix: make formatting comply with dlib style conventions.
      
      * Review fix: output training parameters.
      
      * Review fix: remove #ifndef __INTELLISENSE__
      
      * Review fix: use std::string instead of char*
      
      * Review fix: update interpolation_abstract.h to say that extract_image_chips can now take the interpolation method as a parameter
      
      * Fix whitespace formatting
      
      * Add more comments
      
      * Fix finding image files for inference
      
      * Resize inference test output to the size of the input; add clarifying remarks
      
      * Resize net output even in calculate_accuracy
      
      * After all, crop the net output instead of resizing it by interpolation
      
      * For clarity, add an empty line in the console output
  14. 05 Nov, 2017 1 commit
  15. 17 Oct, 2017 3 commits
  16. 16 Sep, 2017 1 commit
  17. 26 Aug, 2017 1 commit
  18. 01 May, 2017 1 commit
  19. 24 Mar, 2017 1 commit
  20. 28 Feb, 2017 1 commit
  21. 27 Feb, 2017 1 commit
  22. 18 Feb, 2017 1 commit
  23. 12 Feb, 2017 1 commit
  24. 11 Feb, 2017 1 commit
  25. 19 Dec, 2016 1 commit
  26. 17 Dec, 2016 2 commits
  27. 09 Oct, 2016 1 commit
  28. 08 Oct, 2016 1 commit
  29. 02 Oct, 2016 2 commits
  30. 05 Sep, 2016 1 commit
  31. 25 Jun, 2016 2 commits
  32. 24 Jun, 2016 1 commit
  33. 23 Jun, 2016 1 commit
  34. 22 Jun, 2016 1 commit