- 21 Nov, 2020 2 commits

  Adrià Arrufat authored
    * Add support for matrix serialization to python API
    * add double to function names

  Frankie Robertson authored
- 18 Nov, 2020 1 commit

  Adrià Arrufat authored
- 17 Nov, 2020 1 commit

  Adrià Arrufat authored
- 15 Nov, 2020 1 commit

  Adrià Arrufat authored
- 13 Nov, 2020 1 commit

  Adrià Arrufat authored
    * Update to PyBind11 v2.2.4
    * re-add custom changes
    * fix indentation
    * remove blank line
- 08 Nov, 2020 2 commits

  Davis King authored

  Davis King authored
- 21 Oct, 2020 1 commit

  Adrià Arrufat authored
- 20 Oct, 2020 1 commit

  Adrià Arrufat authored
    * wip: layer normalization on cpu
    * wip: add cuda implementation, not working yet
    * wip: try to fix cuda implementation
    * swap grid_stride_range and grid_stride_range_y: does not work yet
    * fix CUDA implementation
    * implement cuda gradient
    * add documentation, move layer_norm, update bn_visitor
    * add tests
    * use stddev instead of variance in test (they are both 1, anyway)
    * add test for means and invstds on CPU and CUDA
    * rename visitor to disable_duplicative_bias
    * handle more cases in the visitor_disable_input_bias
    * Add tests for visitor_disable_input_bias
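The layer-normalization work above normalizes each sample across its features using the per-sample mean and inverse standard deviation (the "means and invstds" the commit's tests check). A minimal Python sketch of that forward computation, for illustration only; dlib's actual `layer_norm_` is C++/CUDA and also applies learned scale and shift parameters:

```python
import math

def layer_norm(x, eps=1e-5):
    """Normalize a feature vector to zero mean and unit variance,
    as a layer-normalization forward pass would (sketch only; the
    real layer also multiplies by gamma and adds beta)."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    invstd = 1.0 / math.sqrt(var + eps)
    return [(v - mean) * invstd for v in x]

y = layer_norm([1.0, 2.0, 3.0, 4.0])
```

After normalization the output has (approximately) zero mean and unit variance, which is why the test can check stddev instead of variance: both are 1.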
- 14 Oct, 2020 1 commit

  Adrià Arrufat authored
    * fix backtracking when losses stay at inf
    * always backtrack when there is an inf value
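The rule the fix above adopts is that an infinite loss always triggers backtracking, rather than being fed into the usual loss-growth test (where inf values can poison the statistics). A hypothetical helper sketching that decision logic in Python; the function name and the 10x-median threshold are illustrative, not dlib's actual trainer code:

```python
import math

def should_backtrack(loss_history, new_loss):
    """Decide whether a trainer should reload an earlier checkpoint.
    An infinite (or NaN) loss always triggers backtracking, regardless
    of recent history; otherwise backtrack only on a large jump."""
    if math.isinf(new_loss) or math.isnan(new_loss):
        return True
    if not loss_history:
        return False
    # illustrative threshold: well above the median of recent losses
    recent_median = sorted(loss_history)[len(loss_history) // 2]
    return new_loss > 10 * recent_median
```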
- 10 Oct, 2020 1 commit

  Adrià Arrufat authored
    * do not use sqrt_2 in device code
    * use CUDART_SQRT_2PI
    * better sort includes
- 09 Oct, 2020 2 commits

  Adrià Arrufat authored
    * Add GELU activation layer
    * fix some copy-paste leftovers
    * fix comment
    * use exact faster implementation
    * do not use cmath constants
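The "exact" GELU formulation referred to above is x·Φ(x), where Φ is the standard normal CDF, computed via erf rather than the common tanh approximation. A Python sketch of that formula (illustrative; the layer itself is implemented in C++/CUDA):

```python
import math

def gelu(x):
    """Exact GELU activation: x * Phi(x), with Phi the standard
    normal CDF expressed through the error function."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```

For large positive x this approaches the identity, for large negative x it approaches zero, and gelu(0) is exactly 0.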
  Davis King authored
- 06 Oct, 2020 1 commit

  Adrià Arrufat authored
    * add cuda test for loss_binary_log_per_pixel and some needed refactoring
    * add cuda test for loss_multiclass_log_per_pixel
    * forgot to add cpu version in loss
    * remove a line I added by mistake
    * fix typos
    * declare label_to_ignore as static
    * use tensor_index function instead of index method
    * test cuda and cpu gradient values
    * use DLIB_TEST instead of DLIB_CASSERT
- 30 Sep, 2020 1 commit

  Adrià Arrufat authored
    * add cuda implementation for loss_multiclass_log_per_pixel_weighted
    * add test for cuda and cpu implementations
    * fix comment
    * move weighted label to its own file
    * Update path in doc
    Co-authored-by: Davis E. King <davis685@gmail.com>
- 25 Sep, 2020 3 commits

  pfeatherstone authored
    * [DLIB] STL containers
    * [DLIB] STL containers
    * [DLIB] applied code corrections suggested by code review
    * [DLIB] applied code corrections suggested by code review
    * [DLIB] applied code corrections suggested by code review

  aviezab authored
    Check if the BLAS found by pkg-config is valid before using it.

  Davis King authored
- 24 Sep, 2020 1 commit

  Sajied Shah Yousuf authored
    Add copy of license file to root to make GitHub happy.
- 19 Sep, 2020 2 commits

  Davis King authored

  pfeatherstone authored
    Extended proxy_(de)serialize objects to work with stringstream, ostringstream, istringstream and vector<char> (#2181)
    * [DLIB] extended proxy objects to work with stringstream, istringstream, ostringstream and vector<char>
    * [DLIB] use std::istream and std::ostream instead of std::istringstream, std::ostringstream and std::stringstream; put back the filename member variable for better error messages
    * [DLIB] review requirement
    Co-authored-by: pf <pf@pf-ubuntu-dev>
- 18 Sep, 2020 1 commit

  Adrià Arrufat authored
- 17 Sep, 2020 1 commit

  pfeatherstone authored
    * [DLIB] added seekpos and seekoff functions. These are necessary for functions in the iostream base class, e.g. seekg, to work properly. Note that in seekoff you do NOT want to check the validity of read_pos after it has been updated: dlib::vectorstream and std::iostream work together to set EOF and/or badbit. Something like seekg(10000) should not throw even if the underlying buffer holds only 2 bytes; you should check whether EOF is set and possibly call clear(). We have removed seekg from dlib::vectorstream as it added confusion. Now std::iostream::seekg is called, which somewhere down the call stack calls seekpos and/or seekoff, so there is no diverging behaviour between calling seekg on a dlib::vectorstream& or on a std::iostream& after a cast.
    * [DLIB] vectorstream unit test is updated to run identical tests on dlib::vectorstream& and std::iostream&
    * [DLIB] only support read pointers and delete copy and move semantics
    * [DLIB] explicit tests for seekg() in different directions
    * [DLIB] no need to delete the move constructor and move assign operator; this is implicitly done by deleting the copy constructor and copy assign operator
    * [DLIB] remove leftover comments; use more idiomatic notation
    Co-authored-by: pf <pf@pf-ubuntu-dev>
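The seek semantics described above (seeking past the end of the underlying buffer is not itself an error; it is the subsequent read that signals end-of-data, after which the stream can be recovered) have a rough analogue in Python's in-memory streams. This is only an illustration of the behaviour, not dlib code:

```python
import io

buf = io.BytesIO(b"ab")       # underlying buffer holds 2 bytes
buf.seek(10000)               # seeking far past the end does not raise
assert buf.read(1) == b""     # the read past the end yields no data
buf.seek(0)                   # recover, like clear() + seekg(0) in C++
assert buf.read(2) == b"ab"   # normal reading works again
```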
- 13 Sep, 2020 2 commits

  Davis King authored

  pfeatherstone authored
    * [DLIB] macro for generating default serialisation functions
    * [DLIB] refactoring
    * [DLIB] refactoring
- 12 Sep, 2020 1 commit

  Adrià Arrufat authored
    * Add scale_prev layer
    * remove comment and fix gradient
    * add test for scale_ and scale_prev_ layers
- 08 Sep, 2020 1 commit

  Adrià Arrufat authored
- 06 Sep, 2020 4 commits

  Davis King authored
    The const bug was introduced yesterday and caused some layer visiting to not work on const networks.

  Adrià Arrufat authored

  Davis King authored

  Davis King authored
- 05 Sep, 2020 2 commits

  Davis King authored
    Now the user doesn't have to supply a visitor capable of visiting all layers, but instead just the ones they are interested in. Also added visit_computational_layers() and visit_computational_layers_range(), since those capture a very common use case more concisely than visit_layers(). That is, users generally want to modify the computational layers specifically, as those are the stateful layers.

  Davis King authored
- 03 Sep, 2020 4 commits

  Adrià Arrufat authored
    * add visitor to remove bias from bn_ inputs (closes #2155)
    * remove unused parameter and make documentation clearer
    * remove bias from bn_ layers too and use a better name
    * let the batch norm layers keep their bias, use an even better name
    * be more consistent with impl naming
    * remove default constructor
    * do not use method to prevent some errors
    * add disable bias method to pertinent layers
    * update dcgan example: fix grammar, print the number of network parameters to be able to check that bias is not allocated, and, at the end, give feedback to the user about what the discriminator thinks about each generated sample
    * fix fc_ logic
    * add documentation
    * add bias_is_disabled methods and update to_xml
    * print use_bias=false when bias is disabled
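The reason a bias can be disabled on layers feeding into batch norm, as the visitor above does, is that batch normalization subtracts the batch mean, so any constant bias added by the preceding layer cancels out exactly and its parameters are wasted. A small numeric demonstration of that cancellation (plain Python, not dlib; the scalar batch-norm helper here is illustrative):

```python
import math

def batch_norm(ys, eps=1e-5):
    """Batch norm over a batch of scalars: subtract the batch mean
    and divide by the batch standard deviation."""
    mean = sum(ys) / len(ys)
    var = sum((y - mean) ** 2 for y in ys) / len(ys)
    return [(y - mean) / math.sqrt(var + eps) for y in ys]

xs = [0.5, -1.0, 2.0, 3.5]
w, b = 1.7, 42.0
with_bias    = batch_norm([w * x + b for x in xs])
without_bias = batch_norm([w * x for x in xs])
# the two outputs agree: the bias is absorbed by the mean subtraction
```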
  Davis King authored
    Make dnn_trainer use a robust statistic to determine if the loss is exploding and if it should backtrack. Previously we used only the non-robust version, and so would mistakenly fail to catch sequences of loss increases that begin with an extremely large value and then settle down to still-large but less extreme values.
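The failure mode described above is exactly what robust statistics such as the median and median absolute deviation (MAD) avoid: one extreme early value inflates a mean/stddev-based threshold so much that later, still-elevated losses slip under it. A hypothetical sketch of a median/MAD-based check in that spirit (the function and the factor k are illustrative, not the dnn_trainer internals):

```python
import statistics

def loss_is_exploding(losses, new_loss, k=6.0):
    """Flag new_loss as exploding if it sits well above the median of
    recent losses, measured in units of the median absolute deviation.
    A single earlier extreme value barely moves the median or the MAD,
    unlike the mean and standard deviation."""
    med = statistics.median(losses)
    mad = statistics.median(abs(l - med) for l in losses)
    return new_loss > med + k * (mad + 1e-12)

# one extreme early value does not mask a later genuine jump
recent = [1.0, 1.1, 0.9, 1.05, 500.0]
```

Here the mean of `recent` is about 100, so a mean-based threshold would wave 10.0 through, while the median-based check still flags it.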
  Davis King authored

  Davis King authored
- 01 Sep, 2020 2 commits

  Davis King authored

  Davis King authored