OpenDAS / dlib · Commits

Commit 93b83677, authored Jul 03, 2018 by Davis King

    clarified docs

Parent: 0f169ed7
Changes: 1 changed file with 63 additions and 68 deletions

tools/python/src/global_optimization.cpp (+63, -68)
@@ -245,6 +245,57 @@ function_evaluation py_function_evaluation(
 void bind_global_optimization(py::module& m)
 {
+    const char* docstring =
+"requires \n\
+    - len(bound1) == len(bound2) == len(is_integer_variable) \n\
+    - for all valid i: bound1[i] != bound2[i] \n\
+    - solver_epsilon >= 0 \n\
+    - f() is a real valued multi-variate function.  It must take scalar real \n\
+      numbers as its arguments and the number of arguments must be len(bound1). \n\
+ensures \n\
+    - This function performs global optimization on the given f() function. \n\
+      The goal is to maximize the following objective function: \n\
+         f(x) \n\
+      subject to the constraints: \n\
+        min(bound1[i],bound2[i]) <= x[i] <= max(bound1[i],bound2[i]) \n\
+        if (is_integer_variable[i]) then x[i] is an integer value (but still \n\
+        represented with float type). \n\
+    - find_max_global() runs until it has called f() num_function_calls times. \n\
+      Then it returns the best x it has found along with the corresponding output \n\
+      of f().  That is, it returns (best_x_seen,f(best_x_seen)).  Here best_x_seen \n\
+      is a list containing the best arguments to f() this function has found. \n\
+    - find_max_global() uses a global optimization method based on a combination of \n\
+      non-parametric global function modeling and quadratic trust region modeling \n\
+      to efficiently find a global maximizer.  It usually does a good job with a \n\
+      relatively small number of calls to f().  For more information on how it \n\
+      works read the documentation for dlib's global_function_search object. \n\
+      However, one notable element is the solver epsilon, which you can adjust. \n\
+ \n\
+      The search procedure will only attempt to find a global maximizer to at most \n\
+      solver_epsilon accuracy.  Once a local maximizer is found to that accuracy \n\
+      the search will focus entirely on finding other maxima elsewhere rather than \n\
+      on further improving the current local optimum found so far.  That is, once a \n\
+      local maximum is identified to about solver_epsilon accuracy, the algorithm \n\
+      will spend all its time exploring the function to find other local maxima to \n\
+      investigate.  An epsilon of 0 means it will keep solving until it reaches \n\
+      full floating point precision.  Larger values will cause it to switch to pure \n\
+      global exploration sooner and therefore might be more effective if your \n\
+      objective function has many local maxima and you don't care about a super \n\
+      high precision solution. \n\
+    - Any variables that satisfy the following conditions are optimized on a log-scale: \n\
+        - The lower bound on the variable is > 0 \n\
+        - The ratio of the upper bound to lower bound is > 1000 \n\
+        - The variable is not an integer variable \n\
+      We do this because it's common to optimize machine learning models that have \n\
+      parameters with bounds in a range such as [1e-5 to 1e10] (e.g. the SVM C \n\
+      parameter) and it's much more appropriate to optimize these kinds of \n\
+      variables on a log scale.  So we transform them by applying log() to \n\
+      them and then undo the transform via exp() before invoking the function \n\
+      being optimized.  Therefore, this transformation is invisible to the user \n\
+      supplied functions.  In most cases, it improves the efficiency of the \n\
+      optimizer.";
         /*!
             requires
                 - len(bound1) == len(bound2) == len(is_integer_variable)
@@ -258,7 +309,8 @@ void bind_global_optimization(py::module& m)
                     f(x)
                 subject to the constraints:
                     min(bound1[i],bound2[i]) <= x[i] <= max(bound1[i],bound2[i])
-                    if (is_integer_variable[i]) then x[i] is an integer.
+                    if (is_integer_variable[i]) then x[i] is an integer value (but still
+                    represented with float type).
                 - find_max_global() runs until it has called f() num_function_calls times.
                   Then it returns the best x it has found along with the corresponding output
                   of f().  That is, it returns (best_x_seen,f(best_x_seen)).  Here best_x_seen
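The clarified wording above ("an integer value, but still represented with float type") can be demonstrated directly in Python. `snap_to_integer` is a hypothetical helper for illustration, not part of dlib:

```python
def snap_to_integer(v):
    """Constrain a value to a whole number while keeping it float-typed,
    mirroring how integer variables are handed to f() as Python floats."""
    return float(round(v))

x = snap_to_integer(2.7)
assert x == 3.0
assert isinstance(x, float)   # integer-valued, yet float-typed
```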
@@ -294,83 +346,26 @@ void bind_global_optimization(py::module& m)
               supplied functions.  In most cases, it improves the efficiency of the
               optimizer.
         !*/
     {
-        m.def("find_max_global", &py_find_max_global,
-"requires \n\
-    - len(bound1) == len(bound2) == len(is_integer_variable) \n\
...
-      optimizer.",
+        m.def("find_max_global", &py_find_max_global, docstring,
             py::arg("f"),
             py::arg("bound1"),
             py::arg("bound2"),
             py::arg("is_integer_variable"),
             py::arg("num_function_calls"),
             py::arg("solver_epsilon") = 0
         );
     }
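To make the documented contract concrete, here is a deliberately naive pure-Python stand-in for `find_max_global`. It uses plain random search rather than dlib's model-based method, but it honors the documented interface: bounds may be given in either order, integer variables are snapped to whole numbers while staying floats, and the return value is `(best_x_seen, f(best_x_seen))`. All names are hypothetical:

```python
import random

def toy_find_max_global(f, bound1, bound2, is_integer_variable,
                        num_function_calls, seed=0):
    """Toy random-search stand-in for the interface documented above.
    NOT dlib's algorithm -- only a sketch of the contract."""
    rng = random.Random(seed)
    # Bounds may be given in either order: min/max them per coordinate.
    lo = [min(a, b) for a, b in zip(bound1, bound2)]
    hi = [max(a, b) for a, b in zip(bound1, bound2)]
    best_x, best_y = None, float("-inf")
    for _ in range(num_function_calls):     # calls f() exactly this many times
        x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
        # Integer variables: whole numbers, still represented as floats.
        x = [float(round(v)) if is_int else v
             for v, is_int in zip(x, is_integer_variable)]
        y = f(*x)                           # f takes scalar arguments
        if y > best_y:
            best_x, best_y = x, y
    return best_x, best_y                   # (best_x_seen, f(best_x_seen))

best_x, best_y = toy_find_max_global(
    lambda a, b: -(a - 2) ** 2 - (b - 3) ** 2,  # peak at (2, 3)
    [0, 0], [5, 5], [True, False], 200)
```

With the first variable flagged as an integer, `best_x[0]` comes back as a whole-numbered float, while `best_x[1]` is free to be any value in the bounds.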
     {
         m.def("find_max_global", &py_find_max_global2,
             "This function simply calls the other version of find_max_global() with is_integer_variable set to False for all variables.",
             py::arg("f"),
             py::arg("bound1"),
             py::arg("bound2"),
             py::arg("num_function_calls"),
             py::arg("solver_epsilon") = 0
         );
     }
     {
         m.def("find_min_global", &py_find_min_global,
             "This function is just like find_max_global(), except it performs minimization rather than maximization.",
             py::arg("f"),
             py::arg("bound1"),
             py::arg("bound2"),
             py::arg("is_integer_variable"),
             py::arg("num_function_calls"),
             py::arg("solver_epsilon") = 0
         );
     }
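The `find_min_global` binding above is described as find_max_global with minimization instead of maximization. The relationship is the usual duality min f = -max(-f), which a toy solver over a fixed candidate set can illustrate (hypothetical names, not dlib code):

```python
def toy_find_min(f, candidates):
    """Exhaustive toy 'solver': minimize f by maximizing -f over a fixed
    candidate set, then report the winning point and its f value."""
    best_x = max(candidates, key=lambda x: -f(x))
    return best_x, f(best_x)

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
x, y = toy_find_min(lambda x: (x - 1.0) ** 2, xs)
assert (x, y) == (1.0, 0.0)
```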
     {
         m.def("find_min_global", &py_find_min_global2,
             "This function simply calls the other version of find_min_global() with is_integer_variable set to False for all variables.",
             py::arg("f"),
             py::arg("bound1"),
             py::arg("bound2"),
             py::arg("num_function_calls"),
             py::arg("solver_epsilon") = 0
         );
     }
     // -------------------------------------------------

...