Frequently Asked Questions

If you use dlib in your research then please use the following citation:

Davis E. King. Dlib-ml: A Machine Learning Toolkit. Journal of Machine Learning Research 10, pp. 1755-1758, 2009

@Article{dlib09,
  author = {Davis E. King},
  title = {Dlib-ml: A Machine Learning Toolkit},
  journal = {Journal of Machine Learning Research},
  year = {2009},
  volume = {10},
  pages = {1755-1758},
}
         
If serialization isn't working for you, here are the possibilities:
  • You are using a file stream and forgot to put it into binary mode. You need to do something like this:

        std::ifstream fin("myfile", std::ios::binary);
    or
        std::ofstream fout("myfile", std::ios::binary);

    If you don't give std::ios::binary then the iostream will mess with the binary data and serialization won't work correctly. (There is a complete sketch after this list.)

  • The iostream is in a bad state. You can check the state by calling mystream.good(). If it returns false then the stream is in an error state such as end-of-file or maybe it failed to do the I/O. Also note that if you close a file stream and reopen it you might have to call mystream.clear() to clear out the error flags.
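
For reference, here is a minimal sketch of round-tripping an object through a binary file stream. The file name and the matrix contents are just placeholders:

    #include <dlib/matrix.h>
    #include <dlib/serialize.h>
    #include <fstream>

    int main()
    {
        dlib::matrix<double> m(2,2);
        m = 1, 2,
            3, 4;

        // Write the matrix.  Note the std::ios::binary flag.
        std::ofstream fout("myfile.dat", std::ios::binary);
        dlib::serialize(m, fout);
        fout.close();

        // Read it back, again in binary mode.
        dlib::matrix<double> m2;
        std::ifstream fin("myfile.dat", std::ios::binary);
        dlib::deserialize(m2, fin);
    }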
Long answer: read the matrix example program.

Short answer, here are some examples of setting the size of a matrix:

    matrix<double> mat;
    mat.set_size(4,5);

    matrix<double,0,1> column_vect;
    column_vect.set_size(6);

    matrix<double,0,1> column_vect2(6);  // give size to constructor

    matrix<double,1> row_vect;
    row_vect.set_size(5);
If you can't find documentation for something then check the index.

Also, the bulk of the documentation can be found by following the Detailed Documentation links.
There should never be anything in dlib that prevents you from using or interacting with other libraries. Moreover, there are some additional tools in dlib to make some interactions easier:
  • BLAS and LAPACK libraries are used by the matrix automatically if you #define DLIB_USE_BLAS and/or DLIB_USE_LAPACK and link against the appropriate library files. Note that the CMakeLists.txt file that comes with dlib will do this for you automatically in many instances.

  • Armadillo and Eigen libraries have matrix objects which can be converted into dlib matrix objects by calling dlib::mat() on them.

  • OpenCV image objects can be converted into a form usable by dlib routines by using cv_image. You can also convert from a dlib matrix or image to an OpenCV Mat using dlib::toMat(). (There is a short sketch of both conversions after this list.)

  • Google Protocol Buffers can be serialized by the dlib serialization routines. This means that, for example, you can pass protocol buffer objects through a bridge.

  • libpng and libjpeg are used by load_image whenever DLIB_PNG_SUPPORT and DLIB_JPEG_SUPPORT are defined respectively. You must also tell your compiler to link against these libraries to use them. However, CMake will try to link against them automatically if they are installed.

  • FFTW is used by the fft() and ifft() routines if you #define DLIB_USE_FFTW and link to fftw3. Otherwise dlib uses its own slower default implementation.

  • SQLite is used by the database object. In fact, it is just a wrapper around SQLite's C interface which simplifies its use (e.g. makes resource management use RAII).
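
For example, here is a rough sketch of going back and forth between OpenCV and dlib. The image file name is just a placeholder:

    #include <dlib/opencv.h>
    #include <dlib/matrix.h>
    #include <opencv2/opencv.hpp>

    int main()
    {
        // Load an image with OpenCV (BGR pixel order).
        cv::Mat cvimg = cv::imread("some_image.png");

        // Wrap it so dlib routines can use it.  No pixels are copied; cimg
        // just refers to the memory owned by cvimg.
        dlib::cv_image<dlib::bgr_pixel> cimg(cvimg);

        // Go the other way: view a dlib matrix/image as a cv::Mat.
        dlib::matrix<unsigned char> dlibimg(100,100);
        dlibimg = 128;   // fill with mid gray
        cv::Mat cvview = dlib::toMat(dlibimg);
    }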
The RVM optimization algorithm is somewhat unpredictable: sometimes it is fast and sometimes it is slow. What usually makes it really slow is using a radial basis kernel with a gamma parameter that is too large. This causes the algorithm to use a whole lot of relevance vectors (i.e. basis vectors), which then makes it slow. The algorithm is only fast as long as the number of relevance vectors remains small, but it is hard to know beforehand whether that will be the case.

You should try kernel ridge regression instead since it also doesn't take any parameters but is always very fast.
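
For example, a minimal sketch of training with kernel ridge regression might look like this. The gamma value and the toy samples are just placeholders; krr_trainer picks its regularization parameter automatically via leave-one-out cross validation:

    #include <dlib/svm.h>
    #include <vector>

    int main()
    {
        using namespace dlib;
        typedef matrix<double,2,1> sample_type;
        typedef radial_basis_kernel<sample_type> kernel_type;

        std::vector<sample_type> samples;
        std::vector<double> labels;

        sample_type samp;
        samp = 1, 2;    samples.push_back(samp);  labels.push_back(+1);
        samp = 2, 1;    samples.push_back(samp);  labels.push_back(+1);
        samp = -1, -2;  samples.push_back(samp);  labels.push_back(-1);
        samp = -2, -1;  samples.push_back(samp);  labels.push_back(-1);

        krr_trainer<kernel_type> trainer;
        trainer.set_kernel(kernel_type(0.1));

        decision_function<kernel_type> df = trainer.train(samples, labels);
    }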

cross_validate_trainer_threaded() makes a copy of your training data for each thread, so you are probably running out of memory. To avoid this, use the randomly_subsample function to reduce the amount of data you are using, or use fewer threads.

For example, you could reduce the amount of data by saying this:

    // reduce to only 1000 samples
    cross_validate_trainer_threaded(trainer,
                                    randomly_subsample(samples, 1000),
                                    randomly_subsample(labels, 1000),
                                    4,   // num folds
                                    4);  // num threads

If you want to use your own custom kernel with the dlib machine learning tools, see the Using Custom Kernels example program.

Picking the right kernel all comes down to understanding your data, and obviously this is highly dependent on your problem.

One thing that's sometimes useful is to plot each feature against the target value. You can get an idea of what your overall feature space looks like and maybe tell if a linear kernel is the right solution. But this still hides important information from you. For example, imagine you have two diagonal lines which are very close together and are both the same length. Suppose one line is of the +1 class and the other is the -1 class. Each feature (the x or y coordinate values) by itself tells you almost nothing about which class a point belongs to but together they tell you everything you need to know.

On the other hand, if you know something about the data you are working with then you can also try to generate your own features. For example, if your data is a bunch of images and you know that one of your classes contains a lot of lines then you can make a feature that attempts to measure the number of lines in an image using a Hough transform or Sobel edge filter or whatever. Generally, try to think up features which should be highly correlated with your target value. A good way to do this is to try to actually hand code N solutions to the problem using whatever you know about your data or domain. If you do a good job then you will have N really great features and a linear or RBF kernel will probably do very well when using them.

Or you can just try a whole bunch of kernels, kernel parameters, and training algorithm options while using cross validation. I.e. when in doubt, use brute force :) There is an example of that kind of thing in the model selection example program.
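
As a rough sketch of that brute force approach, you could do something like the following. The parameter ranges and the sample type are just placeholders, and samples/labels are assumed to be set up as in the SVM example programs:

    #include <dlib/svm.h>
    #include <iostream>
    #include <vector>

    typedef dlib::matrix<double,2,1> sample_type;
    typedef dlib::radial_basis_kernel<sample_type> kernel_type;

    void grid_search(const std::vector<sample_type>& samples,
                     const std::vector<double>& labels)
    {
        using namespace dlib;
        for (double gamma = 0.001; gamma <= 100; gamma *= 10)
        {
            for (double C = 1; C <= 100000; C *= 10)
            {
                svm_c_trainer<kernel_type> trainer;
                trainer.set_kernel(kernel_type(gamma));
                trainer.set_c(C);

                // 3-fold cross validation.  The result holds the fraction of
                // +1 and -1 examples classified correctly.
                matrix<double> result = cross_validate_trainer(trainer, samples, labels, 3);
                std::cout << "gamma: " << gamma << "  C: " << C
                          << "  cv accuracy: " << result;
            }
        }
    }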

A decision_function that always gives the same output typically happens when you use the radial_basis_kernel and set the gamma value to something highly inappropriate. To understand what's happening, let's imagine your data has just one feature and its value ranges from 0 to 7. Then what you want is a gamma value that gives a nice Gaussian bump whose width is comparable to that 0 to 7 range.

However, if you make gamma really huge the kernel will be essentially zero everywhere except at a single point, and if you make gamma really small it will be essentially 1.0 everywhere. Either way, the kernel can no longer distinguish your samples from each other.

So you need to pick the gamma value so that it is scaled reasonably to your data. A good rule of thumb (i.e. not the optimal gamma, just a heuristic guess) is the following:

const double gamma = 1.0/compute_mean_squared_distance(randomly_subsample(samples, 2000));
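
Putting that together, here is a rough sketch of using the heuristic gamma with an RBF kernel. The svm_c_trainer and the helper function name are just illustrative; any kernel-based trainer is configured the same way:

    #include <dlib/svm.h>
    #include <dlib/statistics.h>
    #include <vector>

    typedef dlib::matrix<double,0,1> sample_type;
    typedef dlib::radial_basis_kernel<sample_type> kernel_type;

    dlib::svm_c_trainer<kernel_type> make_rbf_trainer(const std::vector<sample_type>& samples)
    {
        using namespace dlib;
        // Heuristic: gamma is the inverse of the mean squared distance
        // between a random subsample of the training points.
        const double gamma = 1.0/compute_mean_squared_distance(randomly_subsample(samples, 2000));

        svm_c_trainer<kernel_type> trainer;
        trainer.set_kernel(kernel_type(gamma));
        return trainer;
    }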