Commit 3906b341 authored by suiguoxin

refine doc and code

parent 17be7967
...@@ -19,7 +19,7 @@ Currently we support the following algorithms:
|[__Network Morphism__](#NetworkMorphism)|Network Morphism provides functions to automatically search for the architecture of deep learning models. Every child network inherits the knowledge from its parent network and morphs into diverse types of networks, including changes in depth, width, and skip-connections. Next, it estimates the value of a child network using historic architecture and metric pairs. Then it selects the most promising one to train. [Reference Paper](https://arxiv.org/abs/1806.10282)|
|[__Metis Tuner__](#MetisTuner)|Metis offers the following benefits when it comes to tuning parameters: While most tools only predict the optimal configuration, Metis gives you two outputs: (a) a current prediction of the optimal configuration, and (b) a suggestion for the next trial. No more guesswork. While most tools assume training datasets do not have noisy data, Metis actually tells you if you need to re-sample a particular hyper-parameter. [Reference Paper](https://www.microsoft.com/en-us/research/publication/metis-robustly-tuning-tail-latencies-cloud-systems/)|
|[__BOHB__](#BOHB)|BOHB is a follow-up work of Hyperband. It targets the weakness of Hyperband that new configurations are generated randomly without leveraging finished trials. In the name BOHB, HB means Hyperband and BO means Bayesian Optimization. BOHB leverages finished trials by building multiple TPE models; a proportion of new configurations is generated through these models. [Reference Paper](https://arxiv.org/abs/1807.01774)|
|[__GP Tuner__](#GPTuner)|Gaussian Process Tuner is a sequential model-based optimization (SMBO) approach with Gaussian Process as the surrogate. [Reference Paper](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf), [Github Repo](https://github.com/fmfn/BayesianOptimization)|
<br>
## Usage of Builtin Tuners
...@@ -378,19 +378,19 @@ Note that the only acceptable types of search space are `choice`, `quniform`, `u
**Suggested scenario**
GP Tuner uses a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is cheaper in the computational sense, and common tools can be employed for it. GP Tuner is therefore best suited for situations where sampling the function to be optimized is a very expensive endeavor. Its computational cost grows as *O(N^3)* due to the need to invert the Gram matrix, where *N* is the number of finished trials. [Detailed Description](./GPTuner.md)
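As a rough, standalone illustration of that cubic cost (not NNI code; the kernel and sizes below are arbitrary assumptions), exact GP inference has to factorize an *N x N* Gram matrix built from the *N* observed trials:

```python
import numpy as np

# Illustrative sketch only: the dominant cost of exact GP inference is
# factorizing the N x N Gram (kernel) matrix over the N observed points.
N = 2000
X = np.random.rand(N, 1)
K = np.exp(-0.5 * (X - X.T) ** 2)    # an RBF-style Gram matrix (assumed kernel)
K[np.diag_indices_from(K)] += 1e-6   # jitter, cf. the `alpha` argument below
L = np.linalg.cholesky(K)            # this factorization scales as O(N^3)
```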
**Requirement of classArg**
* **optimize_mode** (*'maximize' or 'minimize', optional, default = 'maximize'*) - If 'maximize', the tuner will try to maximize metrics. If 'minimize', the tuner will try to minimize metrics.
* **utility** (*'ei', 'ucb' or 'poi', optional, default = 'ei'*) - The kind of utility function. 'ei', 'ucb' and 'poi' correspond to 'Expected Improvement', 'Upper Confidence Bound' and 'Probability of Improvement', respectively.
* **kappa** (*float, optional, default = 5*) - Used by the 'ucb' utility function. The larger `kappa` is, the more exploratory the tuner will be.
* **xi** (*float, optional, default = 0*) - Used by the 'ei' and 'poi' utility functions. The larger `xi` is, the more exploratory the tuner will be.
* **nu** (*float, optional, default = 2.5*) - Used to specify the Matern kernel. The smaller `nu` is, the less smooth the approximated function.
* **alpha** (*float, optional, default = 1e-6*) - Used to specify the Gaussian Process regressor. Larger values correspond to an increased noise level in the observations.
* **cold_start_num** (*int, optional, default = 10*) - Number of random explorations to perform before fitting the Gaussian Process. Random exploration can help by diversifying the exploration space.
* **selection_num_warm_up** (*int, optional, default = 1e5*) - Number of random points to evaluate when searching for the point that maximizes the acquisition function.
* **selection_num_starting_points** (*int, optional, default = 250*) - Number of times to run L-BFGS-B from a random starting point after the warm-up.
**Usage example**
...@@ -401,10 +401,10 @@ tuner:
  classArgs:
    optimize_mode: maximize
    kappa: 5
    xi: 0
    nu: 2.5
    alpha: 1e-6
    cold_start_num: 10
    selection_num_warm_up: 1e5
    selection_num_starting_points: 250
```
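As a further, purely illustrative variant (assumed values, not tuned recommendations, and assuming the builtin tuner name `GPTuner`), the search can be tilted toward exploration by switching the utility to 'ucb' and raising `kappa`:

```yaml
# Illustrative only: a more exploration-heavy GP Tuner configuration.
tuner:
  builtinTunerName: GPTuner
  classArgs:
    optimize_mode: maximize
    utility: ucb      # Upper Confidence Bound acquisition
    kappa: 10         # larger kappa -> more exploration
```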
...@@ -5,6 +5,6 @@ GP Tuner on NNI
Bayesian optimization works by constructing a posterior distribution of functions (a Gaussian Process here) that best describes the function you want to optimize. As the number of observations grows, the posterior distribution improves, and the algorithm becomes more certain of which regions in parameter space are worth exploring and which are not.
GP Tuner is designed to minimize the number of steps required to find a combination of parameters that is close to the optimal combination. To do so, this method uses a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is cheaper in the computational sense, and common tools can be employed. Therefore Bayesian Optimization is best suited for situations where sampling the function to be optimized is a very expensive endeavor.
Note that the only acceptable types of search space are `choice`, `quniform`, `uniform` and `randint`.
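The loop described above can be sketched in a few lines. The following is a minimal, self-contained illustration (not NNI's implementation; the toy objective, kernel choice, and constants are assumptions): fit a Gaussian Process surrogate, maximize an Expected Improvement acquisition over random candidates, evaluate the suggested point, and repeat.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):                                    # toy stand-in for an expensive black-box function
    return -(x - 0.7) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))           # cold start: a few random samples
y = f(X).ravel()
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6)

for _ in range(20):
    gp.fit(X, y)                             # posterior over functions given observations
    cand = rng.uniform(0, 1, size=(1000, 1)) # random warm-up candidates
    mean, std = gp.predict(cand, return_std=True)
    z = (mean - y.max()) / (std + 1e-12)
    ei = (mean - y.max()) * norm.cdf(z) + std * norm.pdf(z)  # Expected Improvement, xi = 0
    x_next = cand[np.argmax(ei)]             # candidate maximizing the acquisition
    X = np.vstack([X, x_next.reshape(1, -1)])
    y = np.append(y, f(x_next))

print("best x:", X[np.argmax(y)].item(), "best y:", y.max())
```

NNI's GP Tuner additionally refines the warm-up winner with L-BFGS-B starts (`selection_num_starting_points`) and supports the 'ucb' and 'poi' utilities, as the code changes below show.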
...@@ -2,7 +2,7 @@ authorName: default
experimentName: example_mnist
trialConcurrency: 1
maxExecDuration: 1h
maxTrialNum: 20
#choice: local, remote, pai
trainingServicePlatform: local
searchSpacePath: search_space.json
...@@ -15,7 +15,7 @@ tuner:
  classArgs:
    #choice: maximize, minimize
    optimize_mode: maximize
    cold_start_num: 3
trial:
  command: python3 mnist.py
  codeDir: .
...
...@@ -97,7 +97,7 @@ class GPTuner(Tuner):
-------
result : dict
"""
if self._space.len() < self._cold_start_num:
results = self._space.random_sample()
else:
# Sklearn's GP throws a large number of warnings at times, but
...@@ -110,13 +110,13 @@ class GPTuner(Tuner):
kind=self.utility_kind, kappa=self.kappa, xi=self.xi)
results = acq_max(
f_acq=util.utility,
gp=self._gp,
y_max=self._space.target.max(),
bounds=self._space.bounds,
space=self._space,
num_warmup=self._selection_num_warm_up,
num_starting_points=self._selection_num_starting_points
)
results = self._space.array_to_params(results)
...
...@@ -17,17 +17,20 @@
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
'''
target_space.py
'''
import numpy as np
import nni.parameter_expressions as parameter_expressions
def _hashable(params):
""" ensure that a point is hashable by a python dict """
return tuple(map(float, params))
class TargetSpace():
"""
Holds the param-space coordinates (X) and target values (Y)
"""
...@@ -59,38 +62,55 @@ class TargetSpace(object):
# keep track of unique points we have seen so far
self._cache = {}
def __contains__(self, params):
'''
check if a parameter is already registered
'''
return _hashable(params) in self._cache
def len(self):
'''
length of registered params and targets
'''
assert len(self._params) == len(self._target)
return len(self._target)
@property
def empty(self):
return len(self) == 0
@property
def params(self):
'''
params: numpy array
'''
return self._params
@property
def target(self):
'''
target: numpy array
'''
return self._target
@property
def dim(self):
'''
dim: int
length of keys
'''
return len(self._keys)
@property
def keys(self):
'''
keys: numpy array
'''
return self._keys
@property
def bounds(self):
'''bounds'''
return self._bounds
def params_to_array(self, params):
''' dict to array '''
try:
assert set(params) == set(self.keys)
except AssertionError:
...@@ -101,15 +121,19 @@ class TargetSpace(object):
return np.asarray([params[key] for key in self.keys])
def array_to_params(self, x):
'''
array to dict
maintain int type if the parameter is defined as an int in search_space.json
'''
try:
assert len(x) == len(self.keys)
except AssertionError:
raise ValueError(
"Size of array ({}) is different than the ".format(len(x)) +
"expected number of parameters ({}).".format(self.dim)
)
# maintain int type if the choices are int
params = {}
for i, _bound in enumerate(self._bounds):
if _bound['_type'] == 'choice' and all(isinstance(val, int) for val in _bound['_value']):
...@@ -131,16 +155,12 @@ class TargetSpace(object):
y : float
target function value
Raises
------
KeyError:
if the point is not unique
""" """
x = self.params_to_array(params) x = self.params_to_array(params)
if x in self: if x in self:
#raise KeyError('Data point {} is not unique'.format(x))
print('Data point {} is not unique'.format(x))
# Insert data into unique dictionary
self._cache[_hashable(x.ravel())] = target
...@@ -150,12 +170,8 @@ class TargetSpace(object):
def random_sample(self):
"""
Creates a random point within the bounds of the space.
Returns
----------
params: ndarray
[num x dim] array points with dimensions corresponding to `self._keys`
""" """
params = np.empty(self.dim) params = np.empty(self.dim)
for col, _bound in enumerate(self._bounds): for col, _bound in enumerate(self._bounds):
...@@ -194,16 +210,3 @@ class TargetSpace(object):
{"target": target, "params": param}
for target, param in zip(self.target, params)
]
def set_bounds(self, new_bounds):
"""
A method that allows changing the lower and upper searching bounds
Parameters
----------
new_bounds : dict
A dictionary with the parameter name and its new bounds
"""
for row, key in enumerate(self.keys):
if key in new_bounds:
self._bounds[row] = new_bounds[key]
...@@ -17,6 +17,9 @@
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
'''
gp_tuner.py
'''
import warnings
import numpy as np
...@@ -44,7 +47,7 @@ def _match_val_type(vals, bounds):
return vals_new
def acq_max(f_acq, gp, y_max, bounds, space, num_warmup, num_starting_points):
"""
A function to find the maximum of the acquisition function
...@@ -54,7 +57,7 @@ def acq_max(ac, gp, y_max, bounds, space, n_warmup=1e5, n_iter=250):
Parameters
----------
:param f_acq:
The acquisition function object that returns its point-wise value.
:param gp:
...@@ -66,10 +69,10 @@ def acq_max(ac, gp, y_max, bounds, space, n_warmup=1e5, n_iter=250):
:param bounds:
The variable bounds to limit the search of the acq max.
:param num_warmup:
number of times to randomly sample the acquisition function
:param num_starting_points:
number of times to run scipy.minimize (L-BFGS-B) from random starting points
Returns
...@@ -78,20 +81,21 @@ def acq_max(ac, gp, y_max, bounds, space, n_warmup=1e5, n_iter=250):
"""
# Warm up with random points
x_tries = [space.random_sample() for _ in range(int(num_warmup))]
ys = f_acq(x_tries, gp=gp, y_max=y_max)
x_max = x_tries[ys.argmax()]
max_acq = ys.max()
# Explore the parameter space more thoroughly
x_seeds = [space.random_sample() for _ in range(num_starting_points)]
bounds_minmax = np.array(
[[bound['_value'][0], bound['_value'][-1]] for bound in bounds])
for x_try in x_seeds:
# Find the minimum of minus the acquisition function
res = minimize(lambda x: -f_acq(x.reshape(1, -1), gp=gp, y_max=y_max),
x_try.reshape(1, -1),
bounds=bounds_minmax,
method="L-BFGS-B")
...@@ -102,15 +106,15 @@ def acq_max(ac, gp, y_max, bounds, space, n_warmup=1e5, n_iter=250):
# Store it if better than previous minimum(maximum).
if max_acq is None or -res.fun[0] >= max_acq:
x_max = _match_val_type(res.x, bounds)
max_acq = -res.fun[0]
# Clip output to make sure it lies within the bounds. Due to floating
# point technicalities this is not always the case.
return np.clip(x_max, bounds_minmax[:, 0], bounds_minmax[:, 1])
class UtilityFunction():
"""
An object to compute the acquisition functions.
"""
...@@ -119,31 +123,32 @@ class UtilityFunction(object):
"""
If UCB is to be used, a constant kappa is needed.
"""
self.kappa = kappa
self.xi = xi
if kind not in ['ucb', 'ei', 'poi']:
err = "The utility function " \
"{} has not been implemented, " \
"please choose one of ucb, ei, or poi.".format(kind)
raise NotImplementedError(err)
self.kind = kind
def utility(self, x, gp, y_max):
'''return utility function'''
if self.kind == 'ucb':
return self._ucb(x, gp, self.kappa)
if self.kind == 'ei':
return self._ei(x, gp, y_max, self.xi)
if self.kind == 'poi':
return self._poi(x, gp, y_max, self.xi)
return None
@staticmethod
def _ucb(x, gp, kappa):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
mean, std = gp.predict(x, return_std=True)
return mean + kappa * std
...@@ -151,16 +156,16 @@ class UtilityFunction(object):
def _ei(x, gp, y_max, xi):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
mean, std = gp.predict(x, return_std=True)
z = (mean - y_max - xi)/std
return (mean - y_max - xi) * norm.cdf(z) + std * norm.pdf(z)
@staticmethod
def _poi(x, gp, y_max, xi):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
mean, std = gp.predict(x, return_std=True)
z = (mean - y_max - xi)/std
return norm.cdf(z)
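For reference, the three utilities implemented above correspond to the standard acquisition functions below, where $\mu(x)$ and $\sigma(x)$ are the GP posterior mean and standard deviation, $\Phi$ and $\phi$ are the standard normal CDF and PDF, $y_{\max}$ is the best observed target, and $z = \frac{\mu(x) - y_{\max} - \xi}{\sigma(x)}$:

$$\mathrm{UCB}(x) = \mu(x) + \kappa\,\sigma(x)$$

$$\mathrm{EI}(x) = \bigl(\mu(x) - y_{\max} - \xi\bigr)\,\Phi(z) + \sigma(x)\,\phi(z)$$

$$\mathrm{POI}(x) = \Phi(z)$$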