Commit 17be7967 authored by suiguoxin

add optimal choices; support randint&quniform type; add doc

parent e35fa2b4
...@@ -19,7 +19,7 @@ Currently we support the following algorithms:
|[__Network Morphism__](#NetworkMorphism)|Network Morphism provides functions to automatically search for architectures of deep learning models. Every child network inherits the knowledge from its parent network and morphs into diverse types of networks, including changes of depth, width, and skip-connection. Next, it estimates the value of a child network using the historic architecture and metric pairs. Then it selects the most promising one to train. [Reference Paper](https://arxiv.org/abs/1806.10282)|
|[__Metis Tuner__](#MetisTuner)|Metis offers the following benefits when it comes to tuning parameters: While most tools only predict the optimal configuration, Metis gives you two outputs: (a) a current prediction of the optimal configuration, and (b) a suggestion for the next trial. No more guesswork. While most tools assume training datasets do not have noisy data, Metis actually tells you if you need to re-sample a particular hyper-parameter. [Reference Paper](https://www.microsoft.com/en-us/research/publication/metis-robustly-tuning-tail-latencies-cloud-systems/)|
|[__BOHB__](#BOHB)|BOHB is a follow-up work of Hyperband. It targets the weakness of Hyperband that new configurations are generated randomly without leveraging finished trials. In the name BOHB, HB means Hyperband and BO means Bayesian Optimization. BOHB leverages finished trials by building multiple TPE models; a proportion of new configurations are generated through these models. [Reference Paper](https://arxiv.org/abs/1807.01774)|
|[__GP Tuner__](#GPTuner)|GP Tuner is a Bayesian Optimization tuner using a Gaussian Process with a Matern 5/2 kernel. [Reference Paper](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf)|
<br>

## Usage of Builtin Tuners
...@@ -366,3 +366,45 @@ advisor:
  max_budget: 27
  eta: 3
```
<br>
<a name="GPTuner"></a>
![](https://placehold.it/15/1589F0/000000?text=+) `GP Tuner`
> Builtin Tuner Name: **GPTuner**
Note that the only acceptable types of search space are `choice`, `quniform`, `uniform` and `randint`.
**Suggested scenario**
GP Tuner is a black-box tuner. [Detailed Description](./GPTuner.md)
**Requirement of classArg**
* **optimize_mode** (*'maximize' or 'minimize', optional, default = 'maximize'*) - If 'maximize', the tuner will target to maximize metrics. If 'minimize', the tuner will target to minimize metrics.
* **utility** (*'ei', 'ucb' or 'poi', optional, default = 'ei'*) - The kind of utility function.
* **kappa** (*float, optional, default = 5*) - Used by the 'ucb' utility function. The bigger `kappa` is, the more the tuner favors exploration.
* **xi** (*float, optional, default = 0*) - Used by the 'ei' and 'poi' utility functions. The bigger `xi` is, the more the tuner favors exploration.
* **nu** (*float, optional, default = 2.5*) - Specifies the Matern kernel; the default 2.5 gives the Matern 5/2 kernel.
* **alpha** (*float, optional, default = 1e-6*) - Noise parameter of the Gaussian Process Regressor.
* **cold_start_num** (*int, optional, default = 10*) - Number of random explorations to perform before fitting the Gaussian Process. Random exploration can help by diversifying the exploration space.
* **selection_num_warm_up** (*int, optional, default = 1e5*) - Number of random points sampled when maximizing the acquisition function. Maximization combines random sampling (cheap) with the 'L-BFGS-B' optimization method: first `selection_num_warm_up` points are sampled at random, then L-BFGS-B is run from `selection_num_starting_points` random starting points.
* **selection_num_starting_points** (*int, optional, default = 250*) - Number of starting points from which to run L-BFGS-B (scipy.optimize.minimize).
**Usage example**
```yaml
# config.yml
tuner:
  builtinTunerName: GPTuner
  classArgs:
    optimize_mode: maximize
    utility: ei
    kappa: 5
    xi: 0
    nu: 2.5
    alpha: 1e-6
    cold_start_num: 10
    selection_num_warm_up: 100000
    selection_num_starting_points: 250
```
GP Tuner on NNI
===
## GP Tuner
Bayesian optimization works by constructing a posterior distribution of functions (Gaussian Process here) that best describes the function you want to optimize. As the number of observations grows, the posterior distribution improves, and the algorithm becomes more certain of which regions in parameter space are worth exploring and which are not.
This process is designed to minimize the number of steps required to find a combination of parameters close to the optimal combination. To do so, it uses a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is cheaper in the computational sense, and common tools can be employed. Therefore Bayesian Optimization is best suited for situations where sampling the function to be optimized is a very expensive endeavor.
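The loop described above can be sketched end to end. The following is a minimal, self-contained illustration, not NNI's implementation: a toy 1-D objective stands in for an expensive trial, a hand-rolled Gaussian Process with the Matern 5/2 kernel (the `nu = 2.5` default) models it, and Expected Improvement picks the next point to evaluate. The objective, kernel length-scale, jitter, and grid-based acquisition maximization are all illustrative assumptions.

```python
import math
import numpy as np

def matern52(a, b, length=0.2):
    """Matern 5/2 kernel (nu = 2.5), the covariance the GP Tuner's GPR uses."""
    r = np.abs(a[:, None] - b[None, :])
    s = math.sqrt(5) * r / length
    return (1 + s + s ** 2 / 3) * np.exp(-s)

def gp_posterior(X, y, Xs, alpha=1e-3):
    """Posterior mean/std of a zero-mean GP at candidate points Xs."""
    K = matern52(X, X) + alpha * np.eye(len(X))   # jitter keeps K positive-definite
    L = np.linalg.cholesky(K)
    Ks = matern52(X, Xs)
    w = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mean = Ks.T @ w
    var = np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-12, None)  # k(x, x) = 1 here
    return mean, np.sqrt(var)

def norm_cdf(z):
    return 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2)))

def norm_pdf(z):
    return np.exp(-z ** 2 / 2) / math.sqrt(2 * math.pi)

def objective(x):          # toy stand-in for an expensive trial
    return -(x - 0.3) ** 2

rng = np.random.RandomState(0)
X = rng.uniform(0, 1, 5)   # cold start: random exploration
y = objective(X)
for _ in range(10):
    cand = np.linspace(0, 1, 201)      # cheap proxy problem: grid search on EI
    mean, std = gp_posterior(X, y, cand)
    z = (mean - y.max()) / std
    ei = (mean - y.max()) * norm_cdf(z) + std * norm_pdf(z)  # expected improvement
    x_next = cand[int(np.argmax(ei))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best_x = float(X[int(np.argmax(y))])   # best configuration found so far
```

As observations accumulate, the posterior tightens around promising regions and the acquisition maximum moves toward the optimum near 0.3.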
Note that the only acceptable types of search space are `choice`, `quniform`, `uniform` and `randint`.
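For instance, a search space restricted to these four types might look like the following (a hypothetical `search_space.json`, shown here as a Python dict; the parameter names and ranges are illustrative):

```python
# Hypothetical search space using only the four types the GP Tuner accepts.
search_space = {
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.9]},
    "hidden_size": {"_type": "quniform", "_value": [64, 512, 64]},
    "batch_size": {"_type": "randint", "_value": [16, 128]},
    "conv_size": {"_type": "choice", "_value": [2, 3, 5, 7]},
}

supported = {"choice", "quniform", "uniform", "randint"}
```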
...@@ -85,6 +85,7 @@ All types of sampling strategies and their parameter are listed here:
| Grid Search Tuner | &#10003; | | | &#10003; | | &#10003; | | | | |
| Hyperband Advisor | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; |
| Metis Tuner | &#10003; | &#10003; | &#10003; | &#10003; | | | | | | |
| GP Tuner | &#10003; | &#10003; | &#10003; | &#10003; | | | | | | |
Known Limitations:
......
...@@ -11,10 +11,11 @@ useAnnotation: false
tuner:
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
  #SMAC (SMAC should be installed through nnictl)
  builtinTunerName: GPTuner
  classArgs:
    #choice: maximize, minimize
    optimize_mode: maximize
    cold_start_num: 1
trial:
  command: python3 mnist.py
  codeDir: .
......
...@@ -23,6 +23,7 @@ gp_tuner.py
import warnings
import logging

import numpy as np
from sklearn.gaussian_process.kernels import Matern
from sklearn.gaussian_process import GaussianProcessRegressor
...@@ -31,7 +32,7 @@ from nni.tuner import Tuner
from nni.utils import OptimizeMode, extract_scalar_reward

from .target_space import TargetSpace
from .util import UtilityFunction, acq_max

logger = logging.getLogger("GP_Tuner_AutoML")
...@@ -41,20 +42,36 @@ class GPTuner(Tuner):
GPTuner
'''

def __init__(self, optimize_mode="maximize", utility_kind='ei', kappa=5, xi=0, nu=2.5, alpha=1e-6, cold_start_num=10,
selection_num_warm_up=1e5, selection_num_starting_points=250):
self.optimize_mode = optimize_mode
self._random_state = ensure_rng(random_state)
# utility function related
self.utility_kind = utility_kind
self.kappa = kappa
self.xi = xi
# target space
self._space = None
self._random_state = np.random.RandomState()
# nu, alpha are GPR related params
self._gp = GaussianProcessRegressor(
kernel=Matern(nu=nu),
alpha=alpha,
normalize_y=True,
n_restarts_optimizer=25,
random_state=self._random_state
)
# num of random evaluations before GPR
self._cold_start_num = cold_start_num
# params for acq_max
self._selection_num_warm_up = selection_num_warm_up
self._selection_num_starting_points = selection_num_starting_points
# num of imported data
self.supplement_data_num = 0

def update_search_space(self, search_space):
...@@ -80,7 +97,7 @@ class GPTuner(Tuner):
-------
result : dict
"""
if len(self._space) == 0 or len(self._space._target) < self._cold_start_num:
results = self._space.random_sample()
else:
# Sklearn's GP throws a large number of warnings at times, but
...@@ -89,18 +106,21 @@ class GPTuner(Tuner):
warnings.simplefilter("ignore")
self._gp.fit(self._space.params, self._space.target)
util = UtilityFunction(
kind=self.utility_kind, kappa=self.kappa, xi=self.xi)
results = acq_max(
ac=util.utility,
gp=self._gp,
y_max=self._space.target.max(),
bounds=self._space.bounds,
space=self._space,
n_warmup=self._selection_num_warm_up,
n_iter=self._selection_num_starting_points
)
results = self._space.array_to_params(results)
logger.info("Generate parameters:\n %s", results)
return results

def receive_trial_result(self, parameter_id, parameters, value):
...@@ -113,9 +133,13 @@ class GPTuner(Tuner):
value : dict/float
if value is dict, it should have "default" key.
"""
value = extract_scalar_reward(value)
if self.optimize_mode == OptimizeMode.Minimize:
value = -value
logger.info("Received trial result.")
logger.info("value: %s", value)
logger.info("parameter: %s", parameters)
self._space.register(parameters, value)

def import_data(self, data):
......
...@@ -19,7 +19,7 @@
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

import numpy as np
import nni.parameter_expressions as parameter_expressions

def _hashable(x):
...@@ -30,17 +30,6 @@ def _hashable(x):

class TargetSpace(object):
"""
Holds the param-space coordinates (X) and target values (Y)
Allows for constant-time appends while ensuring no duplicates are added
Example
-------
>>> def target_func(p1, p2):
>>> return p1 + p2
>>> pbounds = {'p1': (0, 1), 'p2': (1, 100)}
>>> space = TargetSpace(target_func, pbounds, random_state=0)
>>> x = space.random_points(1)[0]
>>> y = space.register_point(x)
>>> assert self.max_point()['max_val'] == y
"""

def __init__(self, pbounds, random_state=None):
...@@ -54,7 +43,7 @@ class TargetSpace(object):
random_state : int, RandomState, or None
optionally specify a seed for a random number generator
"""
self.random_state = random_state

# Get the name of the parameters
self._keys = sorted(pbounds)
...@@ -121,10 +110,11 @@ class TargetSpace(object):
)

# maintain int type if the choices are int
# TODO: better implementation
params = {}
for i, _bound in enumerate(self._bounds):
if _bound['_type'] == 'choice' and all(isinstance(val, int) for val in _bound['_value']):
params.update({self.keys[i]: int(x[i])})
elif _bound['_type'] in ['randint', 'quniform']:
params.update({self.keys[i]: int(x[i])})
else:
params.update({self.keys[i]: x[i]})
...@@ -164,26 +154,24 @@ class TargetSpace(object):

Returns
-------
params: ndarray
[num x dim] array points with dimensions corresponding to `self._keys`
Example
-------
>>> target_func = lambda p1, p2: p1 + p2
>>> pbounds = { "dropout_rate":{"_type":"uniform","_value":[0.5, 0.9]}, "conv_size":{"_type":"choice","_value":[2,3,5,7]}}
>>> space = TargetSpace(pbounds, random_state=0)
>>> space.random_points()
array([[ 55.33253689, 0.54488318]])
"""
params = np.empty(self.dim)
data = np.empty((1, self.dim))
for col, _bound in enumerate(self._bounds):
if _bound['_type'] == 'uniform':
params[col] = parameter_expressions.uniform(
_bound['_value'][0], _bound['_value'][1], self.random_state)
elif _bound['_type'] == 'quniform':
params[col] = parameter_expressions.quniform(
_bound['_value'][0], _bound['_value'][1], _bound['_value'][2], self.random_state)
elif _bound['_type'] == 'randint':
params[col] = self.random_state.randint(
_bound['_value'][0], _bound['_value'][1], size=1)
elif _bound['_type'] == 'choice':
params[col] = parameter_expressions.choice(
_bound['_value'], self.random_state)
return params
def max(self):
"""Get the maximum target value found and the corresponding parameters."""
......
...@@ -36,13 +36,15 @@ def _match_val_type(vals, bounds):
# Find the closest integer in the array, vals_bounds
vals_new.append(
min(bounds[i]['_value'], key=lambda x: abs(x - vals[i])))
elif _type in ['quniform', 'randint']:
vals_new.append(np.around(vals[i]))
else:
vals_new.append(vals[i])

return vals_new
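As a sketch of the snapping step above: a continuous value produced by the optimizer is matched back onto the nearest legal `choice` entry with the same nearest-value rule used in `_match_val_type` (the values here are illustrative):

```python
# Snap a continuous optimizer output back onto a discrete `choice` list,
# mirroring the nearest-value rule in _match_val_type.
choices = [2, 3, 5, 7]
x = 4.4
snapped = min(choices, key=lambda v: abs(v - x))  # 5 is the closest entry to 4.4
```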
def acq_max(ac, gp, y_max, bounds, space, n_warmup=1e5, n_iter=250):
"""
A function to find the maximum of the acquisition function
...@@ -64,9 +66,6 @@ def acq_max(ac, gp, y_max, bounds, random_state, space, n_warmup=1000, n_iter=25
:param bounds:
The variables bounds to limit the search of the acq max.
:param random_state:
instance of np.RandomState random number generator
:param n_warmup:
number of times to randomly sample the acquisition function
...@@ -79,20 +78,20 @@ def acq_max(ac, gp, y_max, bounds, random_state, space, n_warmup=1000, n_iter=25
"""
# Warm up with random points
x_tries = [space.random_sample() for _ in range(int(n_warmup))]  # int() since n_warmup may be given as a float such as 1e5
ys = ac(x_tries, gp=gp, y_max=y_max)
x_max = x_tries[ys.argmax()]
max_acq = ys.max()

# Explore the parameter space more thoroughly
x_seeds = [space.random_sample() for _ in range(n_iter)]
bounds_minmax = np.array(
[[bound['_value'][0], bound['_value'][-1]] for bound in bounds])
for x_try in x_seeds:
# Find the minimum of minus the acquisition function
res = minimize(lambda x: -ac(x.reshape(1, -1), gp=gp, y_max=y_max),
x_try.reshape(1, -1),
bounds=bounds_minmax,
method="L-BFGS-B")
...@@ -103,13 +102,12 @@ def acq_max(ac, gp, y_max, bounds, random_state, space, n_warmup=1000, n_iter=25
# Store it if better than previous minimum(maximum).
if max_acq is None or -res.fun[0] >= max_acq:
x_max = _match_val_type(res.x, bounds)
max_acq = -res.fun[0]

# Clip output to make sure it lies within the bounds. Due to floating
# point technicalities this is not always the case.
return np.clip(x_max, bounds_minmax[:, 0], bounds_minmax[:, 1])
return x_max
class UtilityFunction(object):
...@@ -121,17 +119,17 @@ class UtilityFunction(object):
"""
If UCB is to be used, a constant kappa is needed.
"""
self.kappa = kappa
self.xi = xi

if kind not in ['ucb', 'ei', 'poi']:
err = "The utility function " \
"{} has not been implemented, " \
"please choose one of ucb, ei, or poi.".format(kind)
raise NotImplementedError(err)
else:
self.kind = kind
def utility(self, x, gp, y_max):
if self.kind == 'ucb':
...@@ -145,7 +143,7 @@ class UtilityFunction(object):
def _ucb(x, gp, kappa):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
mean, std = gp.predict(x, return_std=True)
return mean + kappa * std
...@@ -153,31 +151,16 @@ class UtilityFunction(object):
def _ei(x, gp, y_max, xi):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
mean, std = gp.predict(x, return_std=True)
z = (mean - y_max - xi) / std
return (mean - y_max - xi) * norm.cdf(z) + std * norm.pdf(z)
@staticmethod
def _poi(x, gp, y_max, xi):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
mean, std = gp.predict(x, return_std=True)
z = (mean - y_max - xi) / std
return norm.cdf(z)
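In the notation of the utility functions above, with posterior mean $\mu(x)$, posterior standard deviation $\sigma(x)$, and best observed target $y_{\max}$, the three acquisition criteria implemented here are:

```latex
z(x) = \frac{\mu(x) - y_{\max} - \xi}{\sigma(x)}, \qquad
\mathrm{UCB}(x) = \mu(x) + \kappa\,\sigma(x), \qquad
\mathrm{EI}(x) = \bigl(\mu(x) - y_{\max} - \xi\bigr)\,\Phi\bigl(z(x)\bigr) + \sigma(x)\,\varphi\bigl(z(x)\bigr), \qquad
\mathrm{POI}(x) = \Phi\bigl(z(x)\bigr)
```

where $\Phi$ and $\varphi$ are the standard normal CDF and PDF (`norm.cdf` and `norm.pdf` in the code).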
def ensure_rng(random_state=None):
"""
Creates a random number generator based on an optional seed. This can be
an integer or another random state for a seeded rng, or None for an
unseeded rng.
"""
if random_state is None:
random_state = np.random.RandomState()
elif isinstance(random_state, int):
random_state = np.random.RandomState(random_state)
else:
assert isinstance(random_state, np.random.RandomState)
return random_state
...@@ -107,8 +107,14 @@ tuner_schema_dict = {
'builtinTunerName': 'GPTuner',
'classArgs': {
Optional('optimize_mode'): setChoice('optimize_mode', 'maximize', 'minimize'),
Optional('utility'): setChoice('utility', 'ei', 'ucb', 'poi'),
Optional('kappa'): setType('kappa', float),
Optional('xi'): setType('xi', float),
Optional('nu'): setType('nu', float),
Optional('alpha'): setType('alpha', float),
Optional('cold_start_num'): setType('cold_start_num', int),
Optional('selection_num_warm_up'): setType('selection_num_warm_up', int),
Optional('selection_num_starting_points'): setType('selection_num_starting_points', int),
},
Optional('gpuNum'): setNumberRange('gpuNum', int, 0, 99999),
},
......