Commit a5876489 authored by Guoxin's avatar Guoxin Committed by Yan Ni

Add GP Tuner and related doc (#1191)

* fix link err in docs

* add spaces

* re-organise links for detailed descriptions of the tuners and accessors; fix link err in HpoComparision.md

* add in-page link by change .md to .html

* delete #section from cross-file links to make links work in both readthedocs and github docs

* gp_tuner init from fmfn's repo

* fix params bug by adding float>int transition

* add optimal choices; support randint&quniform type; add doc

* refine doc and code

* change mnist yml comments

* typo fix

* fix val err

* fix minimize mode err

* add config test and Hpo result

* support quniform type; update doc; update test config

* update doc

* un-commit changed in yarn.lock

* fix optimize mode bug

* optimize mode

* optimize mode

* reset pylint, gitignore

* revert .gitignore yarn.lock
parent c2179921
......@@ -19,7 +19,7 @@ Currently we support the following algorithms:
|[__Network Morphism__](#NetworkMorphism)|Network Morphism provides functions to automatically search for architecture of deep learning models. Every child network inherits the knowledge from its parent network and morphs into diverse types of networks, including changes of depth, width, and skip-connection. Next, it estimates the value of a child network using the historic architecture and metric pairs. Then it selects the most promising one to train. [Reference Paper](https://arxiv.org/abs/1806.10282)|
|[__Metis Tuner__](#MetisTuner)|Metis offers the following benefits when it comes to tuning parameters: While most tools only predict the optimal configuration, Metis gives you two outputs: (a) current prediction of optimal configuration, and (b) suggestion for the next trial. No more guesswork. While most tools assume training datasets do not have noisy data, Metis actually tells you if you need to re-sample a particular hyper-parameter. [Reference Paper](https://www.microsoft.com/en-us/research/publication/metis-robustly-tuning-tail-latencies-cloud-systems/)|
|[__BOHB__](#BOHB)|BOHB is a follow-up work of Hyperband. It targets the weakness of Hyperband that new configurations are generated randomly without leveraging finished trials. For the name BOHB, HB means Hyperband, BO means Bayesian Optimization. BOHB leverages finished trials by building multiple TPE models; a proportion of new configurations are generated through these models. [Reference Paper](https://arxiv.org/abs/1807.01774)|
|[__GP Tuner__](#GPTuner)|Gaussian Process Tuner is a sequential model-based optimization (SMBO) approach with Gaussian Process as the surrogate. [Reference Paper](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf), [Github Repo](https://github.com/fmfn/BayesianOptimization)|
<br>
## Usage of Builtin Tuners
......@@ -366,3 +366,45 @@ advisor:
max_budget: 27
eta: 3
```
<br>
<a name="GPTuner"></a>
![](https://placehold.it/15/1589F0/000000?text=+) `GP Tuner`
> Builtin Tuner Name: **GPTuner**

Note that the only acceptable types of search space are `choice`, `randint`, `uniform`, `quniform`, `loguniform` and `qloguniform`.
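
A search space restricted to these types might look like the following; the parameter names here are purely illustrative:

```json
{
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
    "learning_rate": {"_type": "loguniform", "_value": [0.0001, 0.1]},
    "batch_size": {"_type": "choice", "_value": [16, 32, 64, 128]},
    "hidden_size": {"_type": "quniform", "_value": [64, 512, 64]},
    "num_epochs": {"_type": "randint", "_value": [10, 30]}
}
```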
**Suggested scenario**

As a strategy of the Sequential Model-Based Global Optimization (SMBO) family, GP Tuner uses a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is cheaper in the computational sense, and for which common tools can be employed. GP Tuner is therefore most suitable when evaluating the function to be optimized is a very expensive endeavor and computation resources are limited. Note, however, that GP Tuner's computational cost grows as *O(N^3)* with the number of trials, because of the need to invert the Gram matrix, so it is not suitable when a large number of trials is required. [Detailed Description](./GPTuner.md)
**Requirement of classArg**

* **optimize_mode** (*'maximize' or 'minimize', optional, default = 'maximize'*) - If 'maximize', the tuner will target to maximize metrics. If 'minimize', the tuner will target to minimize metrics.
* **utility** (*'ei', 'ucb' or 'poi', optional, default = 'ei'*) - The kind of utility function (acquisition function); the formulas are given after this list. 'ei', 'ucb' and 'poi' correspond to 'Expected Improvement', 'Upper Confidence Bound' and 'Probability of Improvement', respectively.
* **kappa** (*float, optional, default = 5*) - Used by the 'ucb' utility function. The bigger `kappa` is, the more exploratory the tuner will be.
* **xi** (*float, optional, default = 0*) - Used by the 'ei' and 'poi' utility functions. The bigger `xi` is, the more exploratory the tuner will be.
* **nu** (*float, optional, default = 2.5*) - Used to specify the Matern kernel. The smaller `nu`, the less smooth the approximated function is.
* **alpha** (*float, optional, default = 1e-6*) - Used to specify the Gaussian Process Regressor. Larger values correspond to an increased noise level in the observations.
* **cold_start_num** (*int, optional, default = 10*) - Number of random explorations to perform before the Gaussian Process is fitted. Random exploration can help by diversifying the exploration space.
* **selection_num_warm_up** (*int, optional, default = 1e5*) - Number of random points to evaluate when searching for the point that maximizes the acquisition function.
* **selection_num_starting_points** (*int, optional, default = 250*) - Number of times to run L-BFGS-B from a random starting point after the warm-up.
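
For reference, the three utility options correspond to the following acquisition functions, as implemented in the tuner's `util.py`. Here $\mu(x)$ and $\sigma(x)$ are the posterior mean and standard deviation of the Gaussian Process at $x$, $y_{max}$ is the best observation so far, and $\Phi$, $\phi$ are the standard normal CDF and PDF:

$$ z(x) = \frac{\mu(x) - y_{max} - \xi}{\sigma(x)} $$

$$ \mathrm{UCB}(x) = \mu(x) + \kappa\,\sigma(x), \qquad \mathrm{EI}(x) = \bigl(\mu(x) - y_{max} - \xi\bigr)\,\Phi\bigl(z(x)\bigr) + \sigma(x)\,\phi\bigl(z(x)\bigr), \qquad \mathrm{POI}(x) = \Phi\bigl(z(x)\bigr) $$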
**Usage example**

```yaml
# config.yml
tuner:
  builtinTunerName: GPTuner
  classArgs:
    optimize_mode: maximize
    kappa: 5
    xi: 0
    nu: 2.5
    alpha: 1e-6
    cold_start_num: 10
    selection_num_warm_up: 100000
    selection_num_starting_points: 250
```
......@@ -98,8 +98,11 @@ The total search space is 1,204,224, we set the number of maximum trial to 1000.
| HyperBand |0.414065|0.415222|0.417628|
| HyperBand |0.416807|0.417549|0.418828|
| HyperBand |0.415550|0.415977|0.417186|
| GP |0.414353|0.418563|0.420263|
| GP |0.414395|0.418006|0.420431|
| GP |0.412943|0.416566|0.418443|
For Metis, there are about 300 trials because it runs slowly due to its high time complexity O(n^3) in Gaussian Process.
In this example, all the algorithms are used with their default parameters. For Metis, there are only about 300 trials because it runs slowly due to the high time complexity O(n^3) of its Gaussian Process.
## RocksDB Benchmark 'fillrandom' and 'readrandom'
......
GP Tuner on NNI
===
## GP Tuner
Bayesian optimization works by constructing a posterior distribution over functions (a Gaussian Process here) that best describes the function you want to optimize. As the number of observations grows, the posterior distribution improves, and the algorithm becomes more certain of which regions in parameter space are worth exploring and which are not.

GP Tuner is designed to minimize the number of steps required to find a combination of parameters that is close to the optimal one. To do so, it uses a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is cheaper in the computational sense, and for which common tools can be employed. Bayesian Optimization is therefore best suited for situations where sampling the function to be optimized is a very expensive endeavor.

This optimization approach is described in Section 3 of [Algorithms for Hyper-Parameter Optimization](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf).
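
To make the loop concrete, here is a minimal, self-contained sketch of GP-based optimization using scikit-learn directly. It is an illustration rather than the tuner's actual implementation: it searches the acquisition function with random candidates only (the tuner additionally refines with L-BFGS-B in `util.py`), and the toy objective and parameter names are made up.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expected_improvement(candidates, gp, y_max, xi=0.0):
    # Expected improvement over the best observation at each candidate point.
    mean, std = gp.predict(candidates, return_std=True)
    std = np.maximum(std, 1e-9)  # avoid division by zero
    z = (mean - y_max - xi) / std
    return (mean - y_max - xi) * norm.cdf(z) + std * norm.pdf(z)


def bayesian_optimize(objective, bounds, cold_start_num=10, n_iter=40, seed=0):
    # Maximize `objective` over a box `bounds` of shape (dim, 2) with a GP surrogate.
    rng = np.random.RandomState(seed)
    dim = bounds.shape[0]
    # Cold start: pure random exploration before fitting the surrogate.
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(cold_start_num, dim))
    y = np.array([objective(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        # Proxy problem: maximize the cheap acquisition instead of the expensive objective.
        candidates = rng.uniform(bounds[:, 0], bounds[:, 1], size=(10000, dim))
        x_next = candidates[np.argmax(expected_improvement(candidates, gp, y.max()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    best = np.argmax(y)
    return X[best], y[best]


if __name__ == '__main__':
    # Toy stand-in for an expensive trial: a 1-D function with its maximum near x = 2.
    best_x, best_y = bayesian_optimize(
        lambda x: -(x[0] - 2.0) ** 2 + np.sin(5.0 * x[0]),
        bounds=np.array([[0.0, 10.0]]))
    print(best_x, best_y)
```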
......@@ -85,6 +85,7 @@ All types of sampling strategies and their parameter are listed here:
| Grid Search Tuner | &#10003; | | | &#10003; | | &#10003; | | | | |
| Hyperband Advisor | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; |
| Metis Tuner | &#10003; | &#10003; | &#10003; | &#10003; | | | | | | |
| GP Tuner | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | | | | |
Known Limitations:
......
......@@ -9,7 +9,7 @@ searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
#SMAC (SMAC should be installed through nnictl)
builtinTunerName: TPE
classArgs:
......
......@@ -9,7 +9,7 @@ searchSpacePath: search_space_metis.json
#choice: true, false
useAnnotation: false
tuner:
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
#SMAC (SMAC should be installed through nnictl)
builtinTunerName: MetisTuner
classArgs:
......
......@@ -9,7 +9,7 @@ searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
#SMAC (SMAC should be installed through nnictl)
builtinTunerName: TPE
classArgs:
......
......@@ -9,7 +9,7 @@ searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
#SMAC (SMAC should be installed through nnictl)
builtinTunerName: TPE
classArgs:
......
......@@ -9,7 +9,7 @@ searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
#SMAC (SMAC should be installed through nnictl)
builtinTunerName: TPE
classArgs:
......
......@@ -9,7 +9,7 @@ searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
builtinTunerName: TPE
classArgs:
#choice: maximize, minimize
......
......@@ -9,7 +9,7 @@ searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
builtinTunerName: TPE
classArgs:
#choice: maximize, minimize
......
......@@ -9,7 +9,7 @@ searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
#SMAC (SMAC should be installed through nnictl)
builtinTunerName: TPE
classArgs:
......
......@@ -9,7 +9,7 @@ searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
#choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
#SMAC (SMAC should be installed through nnictl)
builtinTunerName: TPE
classArgs:
......
......@@ -162,7 +162,7 @@ export namespace ValidationSchemas {
checkpointDir: joi.string().allow('')
}),
tuner: joi.object({
builtinTunerName: joi.string().valid('TPE', 'Random', 'Anneal', 'Evolution', 'SMAC', 'BatchTuner', 'GridSearch', 'NetworkMorphism', 'MetisTuner'),
builtinTunerName: joi.string().valid('TPE', 'Random', 'Anneal', 'Evolution', 'SMAC', 'BatchTuner', 'GridSearch', 'NetworkMorphism', 'MetisTuner', 'GPTuner'),
codeDir: joi.string(),
classFileName: joi.string(),
className: joi.string(),
......
......@@ -29,7 +29,8 @@ ModuleName = {
'GridSearch': 'nni.gridsearch_tuner.gridsearch_tuner',
'NetworkMorphism': 'nni.networkmorphism_tuner.networkmorphism_tuner',
'Curvefitting': 'nni.curvefitting_assessor.curvefitting_assessor',
'MetisTuner': 'nni.metis_tuner.metis_tuner'
'MetisTuner': 'nni.metis_tuner.metis_tuner',
'GPTuner': 'nni.gp_tuner.gp_tuner'
}
ClassName = {
......@@ -42,6 +43,7 @@ ClassName = {
'GridSearch': 'GridSearchTuner',
'NetworkMorphism':'NetworkMorphismTuner',
'MetisTuner':'MetisTuner',
'GPTuner':'GPTuner',
'Medianstop': 'MedianstopAssessor',
'Curvefitting': 'CurvefittingAssessor'
......
# Copyright (c) Microsoft Corporation
# All rights reserved.
#
# MIT License
#
# Permission is hereby granted, free of charge,
# to any person obtaining a copy of this software and associated
# documentation files (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and
# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
'''
gp_tuner.py
'''
import warnings
import logging
import numpy as np
from sklearn.gaussian_process.kernels import Matern
from sklearn.gaussian_process import GaussianProcessRegressor
from nni.tuner import Tuner
from nni.utils import OptimizeMode, extract_scalar_reward
from .target_space import TargetSpace
from .util import UtilityFunction, acq_max
logger = logging.getLogger("GP_Tuner_AutoML")
class GPTuner(Tuner):
'''
GPTuner: Bayesian optimization tuner with a Gaussian Process surrogate
'''
def __init__(self, optimize_mode="maximize", utility='ei', kappa=5, xi=0, nu=2.5, alpha=1e-6, cold_start_num=10,
selection_num_warm_up=100000, selection_num_starting_points=250):
self.optimize_mode = OptimizeMode(optimize_mode)
# utility function related
self.utility = utility
self.kappa = kappa
self.xi = xi
# target space
self._space = None
self._random_state = np.random.RandomState()
# nu, alpha are GPR related params
self._gp = GaussianProcessRegressor(
kernel=Matern(nu=nu),
alpha=alpha,
normalize_y=True,
n_restarts_optimizer=25,
random_state=self._random_state
)
# num of random evaluations before GPR
self._cold_start_num = cold_start_num
# params for acq_max
self._selection_num_warm_up = selection_num_warm_up
self._selection_num_starting_points = selection_num_starting_points
# num of imported data
self.supplement_data_num = 0
def update_search_space(self, search_space):
"""Update the self.bounds and self.types by the search_space.json
Parameters
----------
search_space : dict
"""
self._space = TargetSpace(search_space, self._random_state)
def generate_parameters(self, parameter_id):
"""Generate next parameter for trial
If the number of trial result is lower than cold start number,
gp will first randomly generate some parameters.
Otherwise, choose the parameters by the Gussian Process Model
Parameters
----------
parameter_id : int
Returns
-------
result : dict
"""
if self._space.len() < self._cold_start_num:
results = self._space.random_sample()
else:
# Sklearn's GP throws a large number of warnings at times, but
# we don't really need to see them here.
with warnings.catch_warnings():
warnings.simplefilter("ignore")
self._gp.fit(self._space.params, self._space.target)
util = UtilityFunction(
kind=self.utility, kappa=self.kappa, xi=self.xi)
results = acq_max(
f_acq=util.utility,
gp=self._gp,
y_max=self._space.target.max(),
bounds=self._space.bounds,
space=self._space,
num_warmup=self._selection_num_warm_up,
num_starting_points=self._selection_num_starting_points
)
results = self._space.array_to_params(results)
logger.info("Generate paramageters:\n %s", results)
return results
def receive_trial_result(self, parameter_id, parameters, value):
"""Tuner receive result from trial.
Parameters
----------
parameter_id : int
parameters : dict
value : dict/float
if value is dict, it should have "default" key.
"""
value = extract_scalar_reward(value)
if self.optimize_mode == OptimizeMode.Minimize:
value = -value
logger.info("Received trial result.")
logger.info("value :%s", value)
logger.info("parameter : %s", parameters)
self._space.register(parameters, value)
def import_data(self, data):
"""Import additional data for tuning
Parameters
----------
data:
a list of dictionaries, each of which has at least two keys: 'parameter' and 'value'
"""
_completed_num = 0
for trial_info in data:
logger.info("Importing data, current processing progress %s / %s" %
(_completed_num, len(data)))
_completed_num += 1
assert "parameter" in trial_info
_params = trial_info["parameter"]
assert "value" in trial_info
_value = trial_info['value']
if not _value:
logger.info(
"Useless trial data, value is %s, skip this trial data.", _value)
continue
self.supplement_data_num += 1
_parameter_id = '_'.join(
["ImportData", str(self.supplement_data_num)])
self.receive_trial_result(
parameter_id=_parameter_id, parameters=_params, value=_value)
logger.info("Successfully import data to GP tuner.")
# Copyright (c) Microsoft Corporation
# All rights reserved.
#
# MIT License
#
# Permission is hereby granted, free of charge,
# to any person obtaining a copy of this software and associated
# documentation files (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and
# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
'''
target_space.py
'''
import numpy as np
import nni.parameter_expressions as parameter_expressions
def _hashable(params):
""" ensure that an point is hashable by a python dict """
return tuple(map(float, params))
class TargetSpace():
"""
Holds the param-space coordinates (X) and target values (Y)
"""
def __init__(self, pbounds, random_state=None):
"""
Parameters
----------
pbounds : dict
Dictionary with parameter names as keys and search space descriptors
(dicts with '_type' and '_value') as values.
random_state : int, RandomState, or None
optionally specify a seed for a random number generator
"""
self.random_state = random_state
# Get the name of the parameters
self._keys = sorted(pbounds)
# Create an array with parameters bounds
self._bounds = np.array(
[item[1] for item in sorted(pbounds.items(), key=lambda x: x[0])]
)
# preallocated memory for X and Y points
self._params = np.empty(shape=(0, self.dim))
self._target = np.empty(shape=(0))
# keep track of unique points we have seen so far
self._cache = {}
def __contains__(self, params):
'''
check if a parameter is already registered
'''
return _hashable(params) in self._cache
def len(self):
'''
length of registered params and targets
'''
assert len(self._params) == len(self._target)
return len(self._target)
@property
def params(self):
'''
params: numpy array
'''
return self._params
@property
def target(self):
'''
target: numpy array
'''
return self._target
@property
def dim(self):
'''
dim: int
length of keys
'''
return len(self._keys)
@property
def keys(self):
'''
keys: numpy array
'''
return self._keys
@property
def bounds(self):
'''bounds'''
return self._bounds
def params_to_array(self, params):
''' dict to array '''
try:
assert set(params) == set(self.keys)
except AssertionError:
raise ValueError(
"Parameters' keys ({}) do ".format(sorted(params)) +
"not match the expected set of keys ({}).".format(self.keys)
)
return np.asarray([params[key] for key in self.keys])
def array_to_params(self, x):
'''
array to dict
maintain int type if the parameter is defined as int in search_space.json
'''
try:
assert len(x) == len(self.keys)
except AssertionError:
raise ValueError(
"Size of array ({}) is different than the ".format(len(x)) +
"expected number of parameters ({}).".format(self.dim())
)
params = {}
for i, _bound in enumerate(self._bounds):
if _bound['_type'] == 'choice' and all(isinstance(val, int) for val in _bound['_value']):
params.update({self.keys[i]: int(x[i])})
elif _bound['_type'] in ['randint']:
params.update({self.keys[i]: int(x[i])})
else:
params.update({self.keys[i]: x[i]})
return params
def register(self, params, target):
"""
Append a point and its target value to the known data.
Parameters
----------
params : dict
target : float
target function value
"""
x = self.params_to_array(params)
if x in self:
#raise KeyError('Data point {} is not unique'.format(x))
print('Data point {} is not unique'.format(x))
# Insert data into unique dictionary
self._cache[_hashable(x.ravel())] = target
self._params = np.concatenate([self._params, x.reshape(1, -1)])
self._target = np.concatenate([self._target, [target]])
def random_sample(self):
"""
Creates a random point within the bounds of the space.
"""
params = np.empty(self.dim)
for col, _bound in enumerate(self._bounds):
if _bound['_type'] == 'choice':
params[col] = parameter_expressions.choice(
_bound['_value'], self.random_state)
elif _bound['_type'] == 'randint':
params[col] = self.random_state.randint(
_bound['_value'][0], _bound['_value'][1], size=1)
elif _bound['_type'] == 'uniform':
params[col] = parameter_expressions.uniform(
_bound['_value'][0], _bound['_value'][1], self.random_state)
elif _bound['_type'] == 'quniform':
params[col] = parameter_expressions.quniform(
_bound['_value'][0], _bound['_value'][1], _bound['_value'][2], self.random_state)
elif _bound['_type'] == 'loguniform':
params[col] = parameter_expressions.loguniform(
_bound['_value'][0], _bound['_value'][1], self.random_state)
elif _bound['_type'] == 'qloguniform':
params[col] = parameter_expressions.qloguniform(
_bound['_value'][0], _bound['_value'][1], _bound['_value'][2], self.random_state)
return params
def max(self):
"""Get maximum target value found and corresponding parametes."""
try:
res = {
'target': self.target.max(),
'params': dict(
zip(self.keys, self.params[self.target.argmax()])
)
}
except ValueError:
res = {}
return res
def res(self):
"""Get all target values found and corresponding parametes."""
params = [dict(zip(self.keys, p)) for p in self.params]
return [
{"target": target, "params": param}
for target, param in zip(self.target, params)
]
# Copyright (c) Microsoft Corporation
# All rights reserved.
#
# MIT License
#
# Permission is hereby granted, free of charge,
# to any person obtaining a copy of this software and associated
# documentation files (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and
# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
'''
util.py
'''
import warnings
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize
def _match_val_type(vals, bounds):
'''
Update values in the array, to match their corresponding type
'''
vals_new = []
for i, bound in enumerate(bounds):
_type = bound['_type']
if _type == "choice":
# Find the closest allowed value in the choice list
vals_new.append(
min(bound['_value'], key=lambda x: abs(x - vals[i])))
elif _type in ['quniform', 'randint']:
vals_new.append(np.around(vals[i]))
else:
vals_new.append(vals[i])
return vals_new
def acq_max(f_acq, gp, y_max, bounds, space, num_warmup, num_starting_points):
"""
A function to find the maximum of the acquisition function
It uses a combination of random sampling (cheap) and the 'L-BFGS-B'
optimization method. First by sampling `n_warmup` (1e5) points at random,
and then running L-BFGS-B from `n_iter` (250) random starting points.
Parameters
----------
:param f_acq:
The acquisition function object that return its point-wise value.
:param gp:
A gaussian process fitted to the relevant data.
:param y_max:
The current maximum known value of the target function.
:param bounds:
The variables bounds to limit the search of the acq max.
:param num_warmup:
number of times to randomly sample the aquisition function
:param num_starting_points:
number of times to run scipy.minimize
Returns
-------
:return: x_max, The arg max of the acquisition function.
"""
# Warm up with random points
x_tries = [space.random_sample()
for _ in range(int(num_warmup))]
ys = f_acq(x_tries, gp=gp, y_max=y_max)
x_max = x_tries[ys.argmax()]
max_acq = ys.max()
# Explore the parameter space more thoroughly
x_seeds = [space.random_sample() for _ in range(int(num_starting_points))]
bounds_minmax = np.array(
[[bound['_value'][0], bound['_value'][-1]] for bound in bounds])
for x_try in x_seeds:
# Find the minimum of minus the acquisition function
res = minimize(lambda x: -f_acq(x.reshape(1, -1), gp=gp, y_max=y_max),
x_try.reshape(1, -1),
bounds=bounds_minmax,
method="L-BFGS-B")
# See if success
if not res.success:
continue
# Store it if better than the previous maximum.
if max_acq is None or -res.fun[0] >= max_acq:
x_max = _match_val_type(res.x, bounds)
max_acq = -res.fun[0]
# Clip output to make sure it lies within the bounds. Due to floating
# point technicalities this is not always the case.
return np.clip(x_max, bounds_minmax[:, 0], bounds_minmax[:, 1])
class UtilityFunction():
"""
An object to compute the acquisition functions.
"""
def __init__(self, kind, kappa, xi):
"""
If UCB is to be used, a constant kappa is needed.
"""
self.kappa = kappa
self.xi = xi
if kind not in ['ucb', 'ei', 'poi']:
err = "The utility function " \
"{} has not been implemented, " \
"please choose one of ucb, ei, or poi.".format(kind)
raise NotImplementedError(err)
self.kind = kind
def utility(self, x, gp, y_max):
'''Return the value of the selected acquisition function at x'''
if self.kind == 'ucb':
return self._ucb(x, gp, self.kappa)
if self.kind == 'ei':
return self._ei(x, gp, y_max, self.xi)
if self.kind == 'poi':
return self._poi(x, gp, y_max, self.xi)
return None
@staticmethod
def _ucb(x, gp, kappa):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
mean, std = gp.predict(x, return_std=True)
return mean + kappa * std
@staticmethod
def _ei(x, gp, y_max, xi):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
mean, std = gp.predict(x, return_std=True)
z = (mean - y_max - xi)/std
return (mean - y_max - xi) * norm.cdf(z) + std * norm.pdf(z)
@staticmethod
def _poi(x, gp, y_max, xi):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
mean, std = gp.predict(x, return_std=True)
z = (mean - y_max - xi)/std
return norm.cdf(z)
authorName: nni
experimentName: default_test
maxExecDuration: 5m
maxTrialNum: 2
trialConcurrency: 1
searchSpacePath: search_space.json
tuner:
  builtinTunerName: GPTuner
  classArgs:
    optimize_mode: maximize
    kappa: 5
    xi: 0
    nu: 2.5
    alpha: 1e-6
    cold_start_num: 10
    selection_num_warm_up: 100000
    selection_num_starting_points: 250
assessor:
  builtinAssessorName: Medianstop
  classArgs:
    optimize_mode: maximize
trial:
  codeDir: ../../../examples/trials/mnist
  command: python3 mnist.py --batch_num 100
  gpuNum: 0
useAnnotation: false
multiPhase: false
multiThread: false
trainingServicePlatform: local