- 17 Sep, 2020 1 commit

pfeatherstone authored
* [DLIB] Added seekpos and seekoff functions. These are necessary for functions in the iostream base class, such as seekg, to work properly. Note that in seekoff you do NOT want to check the validity of read_pos after it has been updated: dlib::vectorstream and std::iostream work together to set eofbit and/or badbit. Doing something like seekg(10000) should not throw even if the underlying buffer holds only 2 bytes; you should check whether EOF is set and possibly call clear(). We have removed seekg from dlib::vectorstream, as it added confusion. Now std::iostream::seekg is called, which somewhere down the call stack calls seekpos and/or seekoff, so there is no diverging behaviour between calling seekg on a dlib::vectorstream& and on a std::iostream& obtained via a cast.
* [DLIB] The vectorstream unit test now runs identical tests on dlib::vectorstream& and std::iostream&.
* [DLIB] Only support read pointers; delete copy and move semantics.
* [DLIB] Explicit tests for seekg() in different directions.
* [DLIB] No need to delete the move constructor and move assignment operator; this is done implicitly by deleting the copy constructor and copy assignment operator.
* [DLIB] Remove leftover comments; use more idiomatic notation.
Co-authored-by: pf <pf@pf-ubuntu-dev>
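A minimal sketch of the seek behaviour described above, assuming the usual construction of a dlib::vectorstream from a std::vector<char>:

```cpp
#include <dlib/vectorstream.h>
#include <cassert>
#include <vector>

int main()
{
    std::vector<char> buf;
    dlib::vectorstream stream(buf);
    stream << "ab";            // the underlying buffer holds 2 bytes

    stream.seekg(10000);       // seeking past the end must not throw
    char c;
    stream.get(c);             // the read fails and sets eofbit instead
    assert(stream.eof());

    stream.clear();            // recover from the EOF state...
    stream.seekg(1);           // ...then seek back into valid territory
    stream.get(c);
    assert(c == 'b');
    return 0;
}
```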
- 13 Sep, 2020 2 commits

Davis King authored

pfeatherstone authored
* [DLIB] Macro for generating default serialisation functions
* [DLIB] Refactoring
* [DLIB] Refactoring
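For context, this is the boilerplate such a macro can generate. The commit message does not show the macro's name or expansion, but dlib's documented serialisation idiom is a pair of free functions per type, hand-written here:

```cpp
#include <dlib/serialize.h>
#include <iostream>

struct point3
{
    float x, y, z;
};

// The hand-written pattern that a default-serialisation macro can generate
// for each listed member, following dlib's usual serialize/deserialize idiom.
void serialize(const point3& item, std::ostream& out)
{
    dlib::serialize(item.x, out);
    dlib::serialize(item.y, out);
    dlib::serialize(item.z, out);
}

void deserialize(point3& item, std::istream& in)
{
    dlib::deserialize(item.x, in);
    dlib::deserialize(item.y, in);
    dlib::deserialize(item.z, in);
}
```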
- 12 Sep, 2020 1 commit

Adrià Arrufat authored
* Add scale_prev layer
* Remove comment and fix gradient
* Add test for scale_ and scale_prev_ layers
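A sketch of where scale_prev can sit in a network definition, here a squeeze-and-excitation style gate. The semantics assumed below (scale_prev1 multiplies the previous layer's output with the tag1 branch, channel-wise, mirroring scale1 with operand roles swapped) are my reading, not spelled out in the commit message:

```cpp
#include <dlib/dnn.h>
using namespace dlib;

// Assumed semantics: scale_prev1 multiplies the output of the previous
// layer (here the sigmoid gate, shaped n x 32 x 1 x 1) with the output of
// the layer tagged tag1 (the 32-channel features), channel by channel.
using net_type = loss_binary_log<fc<1,
    scale_prev1<sig<fc<32,
    avg_pool_everything<tag1<relu<con<32,3,3,1,1,
    input<matrix<float>>>>>>>>>>>;
```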
- 08 Sep, 2020 1 commit

Adrià Arrufat authored
- 06 Sep, 2020 4 commits

Davis King authored
The const bug was introduced yesterday and caused some layer visiting to not work on const networks.
Adrià Arrufat authored

Davis King authored

Davis King authored

- 05 Sep, 2020 2 commits

Davis King authored
Now the user doesn't have to supply a visitor capable of visiting all layers, but instead just the ones they are interested in. Also added visit_computational_layers() and visit_computational_layers_range() since those capture a very common use case more concisely than visit_layers(). That is, users generally want to mess with the computational layers specifically as those are the stateful layers.
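A short sketch of the new visitor in use; with a generic lambda it touches every computational layer:

```cpp
#include <dlib/dnn.h>
#include <iostream>
using namespace dlib;

int main()
{
    // A toy network; any dlib network type works here.
    using net_type = loss_multiclass_log<fc<10, relu<fc<32, input<matrix<float>>>>>>;
    net_type net;

    // Visit only the computational (stateful) layers and report how many
    // learnable parameters each one currently holds.
    visit_computational_layers(net, [](auto& layer)
    {
        std::cout << layer.get_layer_params().size() << " parameters\n";
    });
    return 0;
}
```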
Davis King authored

- 03 Sep, 2020 4 commits

Adrià Arrufat authored
* Add visitor to remove bias from the inputs of bn_ layers (closes #2155)
* Remove unused parameter and make documentation clearer
* Remove bias from bn_ layers too and use a better name
* Let the batch norm layers keep their bias; use an even better name
* Be more consistent with impl naming
* Remove default constructor
* Do not use method to prevent some errors
* Add disable-bias method to pertinent layers
* Update dcgan example:
  - fix grammar
  - print the number of network parameters, to be able to check that the bias is not allocated
  - at the end, give feedback to the user about what the discriminator thinks of each generated sample
* Fix fc_ logic
* Add documentation
* Add bias_is_disabled methods and update to_xml
* Print use_bias=false when bias is disabled
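A hedged sketch of the visitor in action; the name disable_duplicative_biases matches my reading of the dlib API that came out of this change, so treat it as an assumption:

```cpp
#include <dlib/dnn.h>
using namespace dlib;

int main()
{
    // Convolutions followed by batch norm: the conv bias is redundant
    // because bn_ immediately re-centers the activations anyway.
    using net_type = loss_multiclass_log<fc<10,
        relu<bn_con<con<16,3,3,1,1,
        input<matrix<float>>>>>>>;
    net_type net;

    // Assumed visitor name from this change: walks the network and disables
    // the bias on layers whose output feeds a batch normalization layer.
    disable_duplicative_biases(net);
    return 0;
}
```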
Davis King authored
Make dnn_trainer use robust statistics to determine if the loss is exploding and it should backtrack. Previously we used only the non-robust version, and so would mistakenly fail to catch sequences of loss increases that begin with an extremely large value and then settle down to still large, but less extreme, values.
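The underlying idea, sketched with dlib's running-gradient utilities; the exact calls the trainer uses internally are my assumption:

```cpp
#include <dlib/statistics/running_gradient.h>
#include <iostream>
#include <vector>

int main()
{
    // A loss sequence that starts with one extreme spike and then settles
    // to large-but-rising values: still a divergence worth backtracking on.
    std::vector<double> losses = {1e9, 5.0, 5.1, 5.2, 5.3, 5.4};

    // The non-robust estimate is dominated by the initial outlier and can
    // read this trend as decreasing; the robust variant discounts outliers.
    std::cout << dlib::probability_values_are_increasing(losses) << '\n';
    std::cout << dlib::probability_values_are_increasing_robust(losses) << '\n';
    return 0;
}
```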
Davis King authored

Davis King authored

- 01 Sep, 2020 2 commits

Davis King authored

Davis King authored

- 29 Aug, 2020 2 commits

Davis King authored

Adrià Arrufat authored

- 24 Aug, 2020 2 commits

Davis King authored

Adrià Arrufat authored
* Add loss_multilabel_log
* Add alias template for loss_multilabel_log
* Add missing assert
* Increment truth iterator
* Rename loss to loss_multibinary_log
* Rename loss to loss_multibinary_log
* Explicitly capture dims in lambda
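A sketch of a network using the new loss; the label convention shown (one float per class, +1 for present, -1 for absent) is my assumption based on the multilabel/multibinary naming:

```cpp
#include <dlib/dnn.h>
#include <vector>
using namespace dlib;

// Three independent binary decisions per sample, trained jointly.
using net_type = loss_multibinary_log<fc<3,
    relu<fc<32, input<matrix<float>>>>>>;

int main()
{
    net_type net;
    std::vector<matrix<float>> samples;      // training inputs
    std::vector<std::vector<float>> labels;  // assumed: +1 / -1 per class
    // ... fill samples/labels, then train as usual with dnn_trainer ...
    return 0;
}
```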
- 20 Aug, 2020 1 commit

Juha Reunanen authored
Problem: with certain batch size / device count combinations, batches were generated with size = 1, causing problems when using batch normalization (#2152).
Solution: divide the mini-batch more uniformly across the different devices.
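The arithmetic, as a generic sketch (not dlib's actual code): with a mini-batch of 7 on 3 devices, a ceiling split yields sub-batches of 3, 3 and 1, and the size-1 sub-batch breaks batch normalization; the uniform split below yields 3, 2 and 2 instead:

```cpp
#include <cstddef>
#include <vector>

// Distribute `batch` samples across `devices` as evenly as possible:
// every device gets floor(batch/devices), and the remainder is spread
// one extra sample at a time, so sub-batch sizes differ by at most one.
std::vector<std::size_t> split_batch(std::size_t batch, std::size_t devices)
{
    std::vector<std::size_t> sizes(devices, batch / devices);
    for (std::size_t i = 0; i < batch % devices; ++i)
        ++sizes[i];
    return sizes;
}
```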
- 19 Aug, 2020 3 commits

Davis King authored

Juha Reunanen authored

samaldana authored
When consuming dlib headers and building with gcc/clang using the flags '-Werror -Wpedantic', any inclusion involving DLIB_CASSERT triggers a compilation error: ISO C++11 requires at least one argument for the "..." in a variadic macro.
Co-authored-by: Samuel Aldana <samuel.aldana@cognex.com>
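A generic illustration of the pedantic warning (not dlib's actual macro definition): a variadic macro invoked with nothing after its named parameters leaves the "..." empty, which ISO C++11 forbids; folding everything into the variadic list is one common fix:

```cpp
#include <cstdlib>

// Problematic shape: a named parameter followed by "...".
#define ASSERT_V1(cond, ...) do { if (!(cond)) std::abort(); } while (0)
// ASSERT_V1(x > 0);        // -Wpedantic: ISO C++11 requires at least one
//                          // argument for the "..." in a variadic macro
// ASSERT_V1(x > 0, "msg"); // fine

// One common fix: make the whole parameter list variadic so at least one
// argument always feeds the "...".
#define ASSERT_V2(...) do { if (!(__VA_ARGS__)) std::abort(); } while (0)
// ASSERT_V2(x > 0);        // fine under -Wpedantic
```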
- 17 Aug, 2020 1 commit

pfeatherstone authored
* Added a function for computing a Gaussian-distributed complex number. The real version is adapted to use the complex version.
* Missing header
* Missed std::, I was too quick
Co-authored-by: pf <pf@pf-ubuntu-dev>
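A sketch of the addition as I understand it, on dlib::rand; get_random_gaussian is the long-standing real API, and the complex function name here is an assumption based on the commit description:

```cpp
#include <dlib/rand.h>
#include <complex>
#include <iostream>

int main()
{
    dlib::rand rnd;

    // Long-standing API: a real, normally distributed sample.
    double x = rnd.get_random_gaussian();

    // Assumed name of the new function: a complex sample whose real and
    // imaginary parts are Gaussian, and which the real version now reuses.
    std::complex<double> z = rnd.get_random_complex_gaussian();

    std::cout << x << ' ' << z << '\n';
    return 0;
}
```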
- 13 Aug, 2020 2 commits

Davis King authored
The recent change to use a dlib/__init__.py file instead of the dlib.so file directly messed it up.
Davis King authored

- 08 Aug, 2020 3 commits

Davis King authored

Davis King authored

Davis King authored

- 07 Aug, 2020 3 commits

Davis King authored

Davis King authored

Davis King authored

- 05 Aug, 2020 2 commits

Davis King authored

yuriio authored
* Added the possibility to load PNG images from a data buffer.
* Fixed code not compiling with some versions of libpng that lack the const specifier.
* Used a FileInfo struct as a single parameter for the read_image method.
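A sketch of loading a PNG from memory; the exact overload shape (image, pointer, size) is my assumption from the commit description:

```cpp
#include <dlib/image_io.h>
#include <dlib/matrix.h>
#include <vector>
using namespace dlib;

int main()
{
    // PNG bytes already in memory (file contents, network payload, ...);
    // empty here, fill with real PNG data before calling load_png.
    std::vector<unsigned char> png_bytes = {};

    matrix<rgb_pixel> img;
    // Assumed overload added by this change: decode straight from the
    // buffer instead of going through a file name.
    load_png(img, png_bytes.data(), png_bytes.size());
    return 0;
}
```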
- 02 Aug, 2020 1 commit

Davis King authored
- 01 Aug, 2020 3 commits

Davis King authored
error: calling a constexpr host function("log1p") from a device function("cuda_log1pexp") is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this. The error only happens with some versions of CUDA.
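For context, log1pexp computes log(1 + exp(x)). A device-side version can avoid std::log1p entirely with a branch on x; a generic numerically stable sketch (thresholds are illustrative, not dlib's exact ones):

```cpp
#include <cmath>

// Numerically stable log(1 + exp(x)) without calling std::log1p, so the
// same code can compile as a CUDA __device__ function on compilers that
// reject constexpr host functions in device code.
inline float log1pexp(float x)
{
    if (x < -18) return std::exp(x);               // exp(x) tiny: log(1+e^x) ~ e^x
    if (x < 18)  return std::log(1 + std::exp(x)); // safe direct evaluation
    return x + std::exp(-x);                       // exp(x) huge: log(1+e^x) ~ x + e^-x
}
```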
Davis King authored

Davis King authored