1. 11 Oct, 2021 1 commit
    • Add support for fused convolutions (#2294) · adca7472
      Adrià Arrufat authored
      * add helper methods to implement fused convolutions
      
      * fix grammar
      
      * add method to disable affine layer and updated serialization
      
      * add documentation for .disable()
      
      * add fuse_convolutions visitor and documentation
      
      * update docs: net is not constant
      
      * fix xml formatting and use std::boolalpha
      
      * fix warning and updated net requirement for visitor
      
      * fix segfault in fuse_convolutions visitor
      
      * copy unconditionally
      
      * make the visitor class a friend of the con_ class
      
      * setup the biases alias tensor after enabling bias
      
      * simplify visitor a bit
      
      * fix comment
      
      * setup the biases size, somehow this got lost
      
      * copy the parameters before resizing
      
      * remove enable_bias() method, since the visitor is now a friend
      
      * Revert "remove enable_bias() method, since the visitor is now a friend"
      
      This reverts commit 35b92b16316f19a7f1f1b1313c9ab874f4d6199b.
      
      * update the visitor to remove the friend requirement
      
      * improve behavior of enable_bias
      
      * better describe the behavior of enable_bias
      
      * wip: use cudnnConvolutionBiasActivationForward when activation has bias
      
      * wip: fix cpu compilation
      
      * WIP: fused ReLU not working yet
      
      * WIP: forgot to disable ReLU in visitor (does not change the fact that it does not work)
      
      * WIP: more general setting of 4D tensors (still not working)
      
      * fused convolutions seem to be working now, more testing needed
      
      * move visitor to the bottom of the file
      
      * fix CPU-side and code clean up
      
      * Do not try to fuse the activation layers
      
      Fusing the activation layers into one cuDNN call is only supported for the
      activations cuDNN itself provides (ReLU, Sigmoid, TanH...), which might lead
      to surprising behavior. So, let's just fuse the batch norm and the convolution
      into one cuDNN call using the IDENTITY activation function.
      
      * Set the correct forward algorithm for the identity activation
      
      Ref: https://docs.nvidia.com/deeplearning/cudnn/api/index.html#cudnnConvolutionBiasActivationForward
      
      * move the affine alias template to its original position
      
      * wip
      
      * remove unused param in relu and simplify example (I will delete it before merge)
      
      * simplify conv bias logic and fix deserialization issue
      
      * fix enabling bias on convolutions
      
      * remove test example
      
      * fix typo
      
      * update documentation
      
      * update documentation
      
      * remove ccache leftovers from CMakeLists.txt
      
      * Re-add new line
      
      * fix enable/disable bias on unallocated networks
      
      * update comment to mention cudnnConvolutionBiasActivationForward
      
      * fix typo
      Co-authored-by: Davis E. King <davis@dlib.net>
      
      * Apply documentation suggestions from code review
      Co-authored-by: Davis E. King <davis@dlib.net>
      
      * update affine docs to talk in terms of gamma and beta
      
      * simplify tensor_conv interface
      
      * fix tensor_conv operator() with biases
      
      * add fuse_layers test
      
      * add an example on how to use the fuse_layers function
      
      * fix typo
      Co-authored-by: Davis E. King <davis@dlib.net>
      adca7472
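The fusion this PR implements (folding a per-channel batch-norm/affine layer into the preceding convolution) boils down to simple per-channel arithmetic: if the affine layer computes `gamma * conv(x) + beta`, scaling each output channel's filter taps by `gamma` and adjusting the bias gives an equivalent single convolution. A minimal sketch of that arithmetic in plain Python (names and data layout are illustrative, not dlib's API; filters are flat lists of taps per output channel):

```python
def fuse_affine_into_conv(W, b, gamma, beta):
    """Fold y = gamma * conv(x) + beta into the convolution itself.

    W:     list of filters, one flat list of taps per output channel
    b:     per-output-channel conv bias
    gamma: per-output-channel affine scale
    beta:  per-output-channel affine shift
    Returns (W_fused, b_fused) such that the fused convolution produces
    the same output as the conv followed by the affine layer.
    """
    # Scaling every tap of channel o's filter by gamma[o] absorbs the scale.
    W_fused = [[gamma[o] * w for w in W[o]] for o in range(len(W))]
    # The bias is scaled by gamma and shifted by beta.
    b_fused = [gamma[o] * b[o] + beta[o] for o in range(len(b))]
    return W_fused, b_fused
```

Because the fused result is again just a convolution with a bias, it can be dispatched through a single cuDNN call (with the IDENTITY activation, as the commit message above notes).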
  2. 27 Sep, 2021 1 commit
    • Adrià Arrufat's avatar
      Fix trainer with unsupervised loss (#2436) · 8a2c7442
      Adrià Arrufat authored
      * Don't try to use labels in unsupervised losses
      
      I hope that is the right way of fixing this...
      
      * fix it by duplicating most code in send_job (works on my machine)
      
      I will probably need to find a way to reuse the code
      
      * try to fix it reusing the code... not sure though
      
      * Revert "try to fix it reusing the code... not sure though"
      
      This reverts commit f308cac6df712da3619fb05b14f3345f0ec07b9a.
      
      * check the type of the training label to fix the issue instead
      8a2c7442
  3. 25 Sep, 2021 2 commits
  4. 23 Sep, 2021 3 commits
  5. 15 Sep, 2021 1 commit
  6. 13 Sep, 2021 1 commit
  7. 10 Sep, 2021 1 commit
  8. 19 Aug, 2021 2 commits
  9. 15 Aug, 2021 1 commit
  10. 14 Aug, 2021 2 commits
  11. 06 Aug, 2021 1 commit
  12. 05 Aug, 2021 8 commits
  13. 04 Aug, 2021 1 commit
  14. 30 Jul, 2021 1 commit
  15. 27 Jul, 2021 1 commit
  16. 22 Jul, 2021 1 commit
  17. 16 Jul, 2021 1 commit
  18. 30 Jun, 2021 1 commit
  19. 12 May, 2021 2 commits
  20. 11 May, 2021 2 commits
  21. 10 May, 2021 1 commit
  22. 01 May, 2021 4 commits
  23. 28 Apr, 2021 1 commit
    • Davis King's avatar
      Cleanup gcc version checking code a little. · ded68b9a
      Davis King authored
      Also fix this error from cmake 3.5.1:
      
      ```
      CMake Error at CMakeLists.txt:62 (if):
        if given arguments:
      
          "CMAKE_COMPILER_IS_GNUCXX" "AND" "CMAKE_CXX_COMPILER_VERSION" "VERSION_LESS_EQUAL" "4.8.5"
      
        Unknown arguments specified
      ```
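The error arises because the `VERSION_LESS_EQUAL` comparison operator was only added in CMake 3.7, so CMake 3.5.1 rejects the `if()` expression as unknown arguments. One way to phrase an equivalent check that also works on older CMake versions is to negate `VERSION_GREATER`, which has been available much longer (a sketch; the message text here is illustrative):

```cmake
# VERSION_LESS_EQUAL requires CMake >= 3.7; NOT ... VERSION_GREATER
# expresses the same "version <= 4.8.5" test on older CMake as well.
if (CMAKE_COMPILER_IS_GNUCXX AND NOT CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 4.8.5)
   message(FATAL_ERROR "This compiler is too old to build this project.")
endif()
```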
      ded68b9a