This page documents library components that attempt to find the minimum or maximum of a user-supplied function. An introduction to the general purpose non-linear optimizers in this section can be found here. For an example showing how to use the non-linear least squares routines look here.
This method uses an amount of memory that is linear in the number of variables to be optimized, so it is capable of handling problems with a very large number of variables. However, it is generally not as good as the L-BFGS algorithm (see the lbfgs_search_strategy class).
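To show how a search strategy plugs into this library's find_min() routine, here is a minimal, self-contained sketch that minimizes the Rosenbrock test function with the conjugate gradient strategy. The Rosenbrock function is purely illustrative; everything else is the documented find_min() call pattern.

    #include <dlib/optimization.h>
    #include <cmath>
    using namespace dlib;

    typedef matrix<double,0,1> column_vector;

    // Rosenbrock test function: minimum of 0 at (1,1).
    double rosen (const column_vector& m)
    {
        const double x = m(0);
        const double y = m(1);
        return 100.0*std::pow(y - x*x, 2) + std::pow(1 - x, 2);
    }

    // Analytic gradient of rosen().
    column_vector rosen_derivative (const column_vector& m)
    {
        const double x = m(0);
        const double y = m(1);
        column_vector res(2);
        res(0) = -400*x*(y - x*x) - 2*(1 - x);
        res(1) = 200*(y - x*x);
        return res;
    }

    int main()
    {
        column_vector starting_point(2);
        starting_point = 4, 8;
        // Iterate until successive objective values differ by less than
        // 1e-7.  The final argument is a lower bound on the objective.
        find_min(cg_search_strategy(),
                 objective_delta_stop_strategy(1e-7),
                 rosen, rosen_derivative, starting_point, -1);
        // starting_point now holds the located minimizer, near (1,1).
    }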
This method uses an amount of memory that is quadratic in the number of variables to be optimized. It is generally very effective, but if your problem has a very large number of variables then it isn't appropriate. Instead, you should try the lbfgs_search_strategy.
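The call pattern is the same as in the conjugate gradient sketch above. If writing out the gradient is inconvenient, BFGS can also be driven with numerically approximated derivatives, as in this sketch reusing rosen() from above:

    // BFGS with derivatives approximated by finite differences.
    find_min_using_approximate_derivatives(bfgs_search_strategy(),
                                           objective_delta_stop_strategy(1e-7),
                                           rosen, starting_point, -1);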
Note also that this is actually a helper function for creating newton_search_strategy_obj objects.
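A sketch of the helper in use, reusing rosen() and rosen_derivative() from the sketch above and supplying the Hessian that Newton's method requires:

    // Hessian of rosen().
    matrix<double> rosen_hessian (const column_vector& m)
    {
        const double x = m(0);
        const double y = m(1);
        matrix<double> res(2,2);
        res(0,0) = 1200*x*x - 400*y + 2;
        res(0,1) = -400*x;
        res(1,0) = -400*x;
        res(1,1) = 200;
        return res;
    }

    // The helper builds the newton_search_strategy_obj for us.
    find_min(newton_search_strategy(rosen_hessian),
             objective_delta_stop_strategy(1e-7),
             rosen, rosen_derivative, starting_point, -1);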
This method uses an amount of memory that is linear in the number of variables to be optimized. This makes it an excellent method to use when an optimization problem has a large number of variables.
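Again the call pattern is unchanged; only the search strategy object differs. The constructor argument sets how many recent gradients L-BFGS remembers, which is what keeps its memory use linear:

    // L-BFGS remembering the 10 most recent gradients.
    find_min(lbfgs_search_strategy(10),
             objective_delta_stop_strategy(1e-7),
             rosen, rosen_derivative, starting_point, -1);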
Minimize: f(p) == 0.5*trans(p)*B*p + trans(g)*p
subject to the following constraint:
    length(p) <= radius
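A small sketch of calling this solver on a 2-variable model, assuming the solve_trust_region_subproblem(B, g, radius, p, eps, max_iter) overload; the particular numbers are arbitrary illustration:

    matrix<double> B(2,2);
    B = 2, 0,
        0, 4;          // symmetric quadratic term
    column_vector g(2), p(2);
    g = -1, -2;
    // Find the p minimizing the quadratic model subject to length(p) <= 1.
    solve_trust_region_subproblem(B, g, 1.0, p, 1e-10, 30);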
Minimize: f(alpha) == 0.5*trans(alpha)*Q*alpha - trans(alpha)*b
subject to the following constraints:
sum(alpha) == C
min(alpha) >= 0
Where f is convex. This means that Q should be symmetric and positive-semidefinite.
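A small sketch, assuming the solve_qp_using_smo(Q, b, alpha, eps, max_iter) overload. Note that alpha must be initialized to a feasible point, and the sum of that starting alpha is what fixes the constant C:

    matrix<double> Q(3,3);
    Q = 2, 0, 0,
        0, 2, 0,
        0, 0, 2;       // symmetric positive semidefinite
    column_vector b(3), alpha(3);
    b = 1, 2, 3;
    alpha = 1, 1, 1;   // feasible start: min(alpha) >= 0, sum(alpha) == C == 3
    // The solver overwrites alpha with the solution.
    solve_qp_using_smo(Q, b, alpha, 1e-8, 10000);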
Minimize: f(alpha) == 0.5*trans(alpha)*Q*alpha
subject to the following constraints:
sum(alpha) == nu*y.size()
0 <= min(alpha) && max(alpha) <= 1
trans(y)*alpha == 0
Where all elements of y must be equal to +1 or -1 and f is convex.
This means that Q should be symmetric and positive-semidefinite.
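Because the two equality constraints interact, it helps to see a concrete feasible point. The following sketch only checks feasibility using this library's matrix expressions; it does not call the solver:

    column_vector y(4), alpha(4);
    y = +1, +1, -1, -1;
    const double nu = 0.5;
    alpha = 0.5, 0.5, 0.5, 0.5;   // a balanced candidate point

    // Test each constraint from the definition above.
    bool feasible = std::abs(sum(alpha) - nu*y.size()) < 1e-12
                 && min(alpha) >= 0 && max(alpha) <= 1
                 && std::abs(dot(y, alpha)) < 1e-12;   // trans(y)*alpha == 0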
Minimize: f(alpha) == 0.5*trans(alpha)*Q*alpha + trans(p)*alpha
subject to the following constraints:
for all i such that y(i) == +1: 0 <= alpha(i) <= Cp
for all i such that y(i) == -1: 0 <= alpha(i) <= Cn
trans(y)*alpha == B
Where all elements of y must be equal to +1 or -1 and f is convex.
This means that Q should be symmetric and positive-semidefinite.
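As an illustration of the pieces involved, this sketch evaluates the objective with matrix expressions and checks the label-dependent box constraints, assuming Q, p, y, alpha, Cp, Cn, and B have already been filled in as described above; it does not call the solver itself:

    // f(alpha) evaluated directly from the definition above.
    const double f = 0.5*dot(alpha, Q*alpha) + dot(p, alpha);

    // The box constraint on alpha(i) depends on the label y(i).
    bool feasible = std::abs(dot(y, alpha) - B) < 1e-12;
    for (long i = 0; i < alpha.size(); ++i)
    {
        const double cap = (y(i) == +1) ? Cp : Cn;
        feasible = feasible && 0 <= alpha(i) && alpha(i) <= cap;
    }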
Minimize: f(w) == 0.5*dot(w,w) + C*R(w)
where R(w) is a user-supplied convex function and C > 0.
Optimized Cutting Plane Algorithm for Large-Scale Risk Minimization by Vojtech Franc, Soren Sonnenburg; Journal of Machine Learning Research, 10(Oct):2157-2192, 2009.
Bundle Methods for Regularized Risk Minimization by Choon Hui Teo, S.V.N. Vishwanathan, Alex J. Smola, Quoc V. Le; Journal of Machine Learning Research, 11(Jan):311-365, 2010.
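For intuition, the common idea in both papers is to never minimize R(w) directly. Since R is convex, a subgradient g_i of R taken at any point w_i yields a global lower bound:

    R(w) >= R(w_i) + trans(g_i)*(w - w_i)

After k iterations the piecewise-linear function max over i of the right hand side replaces R(w), the resulting much simpler problem is solved to obtain the next w, and a new cutting plane is added there. This is only a sketch of the shared idea; see the papers above for the exact updates this object performs.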
The following paper, published in 2009 by Powell, describes the detailed workings of the BOBYQA algorithm.
The BOBYQA algorithm for bound constrained optimization without derivatives by M.J.D. Powell
Note that BOBYQA only works on functions of two or more variables. So if you need to perform derivative-free optimization on a function of a single variable then you should use the find_min_single_variable function.
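A minimal sketch of calling find_min_bobyqa on the 2-variable rosen() function from earlier. The huge bounds make the problem effectively unconstrained, and the function throws if it cannot converge within the allowed number of objective evaluations:

    column_vector starting_point(2);
    starting_point = 0.5, 0.5;
    find_min_bobyqa(rosen,
                    starting_point,
                    5,                                    // interpolation points (2n+1 is a reasonable default)
                    uniform_matrix<double>(2,1, -1e100),  // lower bound constraint
                    uniform_matrix<double>(2,1, 1e100),   // upper bound constraint
                    10,                                   // initial trust region radius
                    1e-6,                                 // stopping trust region radius
                    200                                   // max number of objective function evaluations
                    );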
Note that BOBYQA only works on functions of two or more variables. So if you need to perform derivative-free optimization on a function of a single variable then you should use the find_max_single_variable function.
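Both single-variable routines share the same call pattern; here is a minimal sketch with find_min_single_variable (find_max_single_variable is called identically). The quadratic is purely illustrative:

    auto f = [](double x) { return (x - 3.0)*(x - 3.0) + 1.0; };
    double starting_point = 0;
    // Searches within [-10, 10].  starting_point is updated in place and
    // the located minimum value of f is returned.
    const double min_value = find_min_single_variable(f, starting_point, -10, 10);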
T. Joachims, T. Finley, Chun-Nam Yu, Cutting-Plane Training of Structural SVMs, Machine Learning, 77(1):27-59, 2009.
Note that this object is essentially a tool for solving the 1-Slack structural SVM with margin-rescaling. Specifically, see Algorithm 3 in the above referenced paper.