Commit 485245a0 authored by Yuge Zhang, committed by Guoxin

Implement lower bound and clarify docs for randint (#1435)

parent 0f22aaaf
@@ -10,11 +10,11 @@ To define a search space, users should define the name of variable, the type of
```yaml
{
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
    "conv_size": {"_type": "choice", "_value": [2, 3, 5, 7]},
    "hidden_size": {"_type": "choice", "_value": [124, 512, 1024]},
    "batch_size": {"_type": "choice", "_value": [50, 250, 500]},
    "learning_rate": {"_type": "uniform", "_value": [0.0001, 0.1]}
}
```
@@ -25,55 +25,54 @@ Take the first line as an example. `dropout_rate` is defined as a variable whose
All types of sampling strategies and their parameters are listed here:
* {"_type":"choice","_value":options} * `{"_type": "choice", "_value": options}`
* Which means the variable's value is one of the options. Here 'options' should be a list. Each element of options is a number of string. It could also be a nested sub-search-space, this sub-search-space takes effect only when the corresponding element is chosen. The variables in this sub-search-space could be seen as conditional variables. * Which means the variable's value is one of the options. Here 'options' should be a list. Each element of options is a number of string. It could also be a nested sub-search-space, this sub-search-space takes effect only when the corresponding element is chosen. The variables in this sub-search-space could be seen as conditional variables.
* An simple [example](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-nested-search-space/search_space.json) of [nested] search space definition. If an element in the options list is a dict, it is a sub-search-space, and for our built-in tuners you have to add a key '_name' in this dict, which helps you to identify which element is chosen. Accordingly, here is a [sample](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-nested-search-space/sample.json) which users can get from nni with nested search space definition. Tuners which support nested search space is as follows: * An simple [example](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-nested-search-space/search_space.json) of [nested] search space definition. If an element in the options list is a dict, it is a sub-search-space, and for our built-in tuners you have to add a key `_name` in this dict, which helps you to identify which element is chosen. Accordingly, here is a [sample](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-nested-search-space/sample.json) which users can get from nni with nested search space definition. Tuners which support nested search space is as follows:
    - Random Search
    - TPE
    - Anneal
    - Evolution
* {"_type":"randint","_value":[lower, upper]} * `{"_type": "randint", "_value": [lower, upper]}`
* Choosing a random integer from `lower` (inclusive) to `upper` (exclusive).
* Note: Different tuners may interpret `randint` differently. Some (e.g., TPE, GridSearch) treat integers from lower
to upper as unordered ones, while others respect the ordering (e.g., SMAC). If you want all the tuners to respect
the ordering, please use `quniform` with `q=1`.
* `{"_type": "uniform", "_value": [low, high]}`
  * Which means the variable value is a value uniformly between low and high.
  * When optimizing, this variable is constrained to a two-sided interval.
* {"_type":"quniform","_value":[low, high, q]} * `{"_type": "quniform", "_value": [low, high, q]}`
* Which means the variable value is a value like clip(round(uniform(low, high) / q) * q, low, high), where the clip operation is used to constraint the generated value in the bound. For example, for _value specified as [0, 10, 2.5], possible values are [0, 2.5, 5.0, 7.5, 10.0]; For _value specified as [2, 10, 5], possible values are [2, 5, 10]. * Which means the variable value is a value like `clip(round(uniform(low, high) / q) * q, low, high)`, where the clip operation is used to constraint the generated value in the bound. For example, for `_value` specified as [0, 10, 2.5], possible values are [0, 2.5, 5.0, 7.5, 10.0]; For `_value` specified as [2, 10, 5], possible values are [2, 5, 10].
* Suitable for a discrete value with respect to which the objective is still somewhat "smooth", but which should be bounded both above and below. If you want to uniformly choose integer from a range [low, high], you can write `_value` like this: `[low, high, 1]`. * Suitable for a discrete value with respect to which the objective is still somewhat "smooth", but which should be bounded both above and below. If you want to uniformly choose integer from a range [low, high], you can write `_value` like this: `[low, high, 1]`.
* {"_type":"loguniform","_value":[low, high]} * `{"_type": "loguniform", "_value": [low, high]}`
* Which means the variable value is a value drawn from a range [low, high] according to a loguniform distribution like exp(uniform(log(low), log(high))), so that the logarithm of the return value is uniformly distributed. * Which means the variable value is a value drawn from a range [low, high] according to a loguniform distribution like exp(uniform(log(low), log(high))), so that the logarithm of the return value is uniformly distributed.
* When optimizing, this variable is constrained to be positive. * When optimizing, this variable is constrained to be positive.
* {"_type":"qloguniform","_value":[low, high, q]} * `{"_type": "qloguniform", "_value": [low, high, q]}`
* Which means the variable value is a value like clip(round(loguniform(low, high) / q) * q, low, high), where the clip operation is used to constraint the generated value in the bound. * Which means the variable value is a value like `clip(round(loguniform(low, high) / q) * q, low, high)`, where the clip operation is used to constraint the generated value in the bound.
* Suitable for a discrete variable with respect to which the objective is "smooth" and gets smoother with the size of the value, but which should be bounded both above and below. * Suitable for a discrete variable with respect to which the objective is "smooth" and gets smoother with the size of the value, but which should be bounded both above and below.
* {"_type":"normal","_value":[mu, sigma]} * `{"_type": "normal", "_value": [mu, sigma]}`
* Which means the variable value is a real value that's normally-distributed with mean mu and standard deviation sigma. When optimizing, this is an unconstrained variable. * Which means the variable value is a real value that's normally-distributed with mean mu and standard deviation sigma. When optimizing, this is an unconstrained variable.
* {"_type":"qnormal","_value":[mu, sigma, q]} * `{"_type": "qnormal", "_value": [mu, sigma, q]}`
* Which means the variable value is a value like round(normal(mu, sigma) / q) * q * Which means the variable value is a value like `round(normal(mu, sigma) / q) * q`
* Suitable for a discrete variable that probably takes a value around mu, but is fundamentally unbounded. * Suitable for a discrete variable that probably takes a value around mu, but is fundamentally unbounded.
* {"_type":"lognormal","_value":[mu, sigma]} * `{"_type": "lognormal", "_value": [mu, sigma]}`
* Which means the variable value is a value drawn according to `exp(normal(mu, sigma))` so that the logarithm of the return value is normally distributed. When optimizing, this variable is constrained to be positive.
* Which means the variable value is a value drawn according to exp(normal(mu, sigma)) so that the logarithm of the return value is normally distributed. When optimizing, this variable is constrained to be positive.
* {"_type":"qlognormal","_value":[mu, sigma, q]} * `{"_type": "qlognormal", "_value": [mu, sigma, q]}`
* Which means the variable value is a value like round(exp(normal(mu, sigma)) / q) * q * Which means the variable value is a value like `round(exp(normal(mu, sigma)) / q) * q`
* Suitable for a discrete variable with respect to which the objective is smooth and gets smoother with the size of the variable, which is bounded from one side. * Suitable for a discrete variable with respect to which the objective is smooth and gets smoother with the size of the variable, which is bounded from one side.
* {"_type":"mutable_layer","_value":{mutable_layer_infomation}} * `{"_type": "mutable_layer", "_value": {mutable_layer_infomation}}`
* Type for [Neural Architecture Search Space][1]. Value is also a dictionary, which contains key-value pairs representing respectively name and search space of each mutable_layer. * Type for [Neural Architecture Search Space][1]. Value is also a dictionary, which contains key-value pairs representing respectively name and search space of each mutable_layer.
* For now, users can only use this type of search space with annotation, which means that there is no need to define a json file for search space since it will be automatically generated according to the annotation in trial code. * For now, users can only use this type of search space with annotation, which means that there is no need to define a json file for search space since it will be automatically generated according to the annotation in trial code.
* For detailed usage, please refer to [General NAS Interfaces][1]. * For detailed usage, please refer to [General NAS Interfaces][1].
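For readers who want to verify these semantics locally, here is a minimal sketch (assuming only numpy) mirroring how `randint` and `quniform` are sampled in `parameter_expressions`; it illustrates the semantics above and is not the tuner implementation itself:

```python
import numpy as np

rng = np.random.RandomState(42)

def randint(lower, upper, random_state):
    # Integer drawn from [lower, upper): lower inclusive, upper exclusive.
    return random_state.randint(lower, upper)

def quniform(low, high, q, random_state):
    # Round a uniform draw to the nearest multiple of q, then clip so that
    # rounding cannot push the value outside [low, high].
    return np.clip(round(random_state.uniform(low, high) / q) * q, low, high)

print(randint(1, 3, rng))         # 1 or 2, never 3
print(quniform(0, 10, 2.5, rng))  # one of 0.0, 2.5, 5.0, 7.5, 10.0
print(quniform(2, 10, 5, rng))    # one of 2, 5, 10 (clipped at the bounds)
```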
...
@@ -31,7 +31,7 @@ import ConfigSpace.hyperparameters as CSH
from nni.protocol import CommandType, send
from nni.msg_dispatcher_base import MsgDispatcherBase
from nni.utils import OptimizeMode, MetricType, extract_scalar_reward
from nni.common import multi_phase_enabled
from .config_generator import CG_BOHB
@@ -467,7 +467,6 @@ class BOHB(MsgDispatcherBase):
        search space of this experiment
        """
        search_space = data
        cs = CS.ConfigurationSpace()
        for var in search_space:
            _type = str(search_space[var]["_type"])
@@ -476,7 +475,7 @@ class BOHB(MsgDispatcherBase):
                    var, choices=search_space[var]["_value"]))
            elif _type == 'randint':
                cs.add_hyperparameter(CSH.UniformIntegerHyperparameter(
                    var, lower=search_space[var]["_value"][0], upper=search_space[var]["_value"][1] - 1))
            elif _type == 'uniform':
                cs.add_hyperparameter(CSH.UniformFloatHyperparameter(
                    var, lower=search_space[var]["_value"][0], upper=search_space[var]["_value"][1]))
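The `- 1` on the upper bound is needed because NNI's `randint` range `[lower, upper)` excludes `upper`, while ConfigSpace integer hyperparameters include both endpoints. A minimal sketch of the mapping (assuming the `ConfigSpace` package is installed):

```python
import ConfigSpace as CS
import ConfigSpace.hyperparameters as CSH

# NNI randint _value [1, 3] means {1, 2}; ConfigSpace bounds are inclusive,
# so the equivalent hyperparameter spans [1, 2].
cs = CS.ConfigurationSpace()
cs.add_hyperparameter(CSH.UniformIntegerHyperparameter('a', lower=1, upper=2))
print(cs.sample_configuration()['a'])  # always 1 or 2
```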
...
@@ -26,7 +26,7 @@ import random
import numpy as np
from nni.tuner import Tuner
from nni.utils import NodeType, OptimizeMode, extract_scalar_reward, split_index
import nni.parameter_expressions as parameter_expressions
@@ -175,7 +175,6 @@ class EvolutionTuner(Tuner):
        search_space : dict
        """
        self.searchspace_json = search_space
        self.space = json2space(self.searchspace_json)
        self.random_state = np.random.RandomState()
...
@@ -31,7 +31,7 @@ import json_tricks
from nni.protocol import CommandType, send
from nni.msg_dispatcher_base import MsgDispatcherBase
from nni.common import init_logger, multi_phase_enabled
from nni.utils import NodeType, OptimizeMode, MetricType, extract_scalar_reward
import nni.parameter_expressions as parameter_expressions

_logger = logging.getLogger(__name__)
@@ -358,7 +358,6 @@ class Hyperband(MsgDispatcherBase):
        number of trial jobs
        """
        self.searchspace_json = data
        self.random_state = np.random.RandomState()

    def _handle_trial_end(self, parameter_id):
...
@@ -27,7 +27,7 @@ import logging
import hyperopt as hp
import numpy as np
from nni.tuner import Tuner
from nni.utils import NodeType, OptimizeMode, extract_scalar_reward, split_index

logger = logging.getLogger('hyperopt_AutoML')
@@ -51,6 +51,8 @@ def json2space(in_x, name=NodeType.ROOT):
            _value = json2space(in_x[NodeType.VALUE], name=name)
            if _type == 'choice':
                out_y = eval('hp.hp.choice')(name, _value)
            elif _type == 'randint':
                out_y = hp.hp.randint(name, _value[1] - _value[0])
            else:
                if _type in ['loguniform', 'qloguniform']:
                    _value[:2] = np.log(_value[:2])
@@ -93,6 +95,8 @@ def json2parameter(in_x, parameter, name=NodeType.ROOT):
            else:
                if _type in ['quniform', 'qloguniform']:
                    out_y = np.clip(parameter[name], in_x[NodeType.VALUE][0], in_x[NodeType.VALUE][1])
                elif _type == 'randint':
                    out_y = parameter[name] + in_x[NodeType.VALUE][0]
                else:
                    out_y = parameter[name]
        else:
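Together these two hunks encode and decode the lower bound: `hp.hp.randint(name, n)` draws from `[0, n)`, so the space is built with width `upper - lower` in `json2space` and the raw sample is shifted back up by `lower` in `json2parameter`. A hedged sketch of the round trip (assuming the `hyperopt` package):

```python
import numpy as np
import hyperopt as hp

lower, upper = 1, 3
# Encode: hp.randint(label, n) samples an integer from [0, n).
space = hp.hp.randint('a', upper - lower)
# Decode: shift the raw sample back into [lower, upper).
raw = hp.pyll.stochastic.sample(space, rng=np.random.RandomState(0))
print(raw + lower)  # 1 or 2
```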
@@ -247,7 +251,6 @@ class HyperoptTuner(Tuner):
        search_space : dict
        """
        self.json = search_space
        search_space_instance = json2space(self.json)
        rstate = np.random.RandomState()
@@ -279,7 +282,7 @@ class HyperoptTuner(Tuner):
        total_params = self.get_suggestion(random_search=False)
        # avoid generating the same parameters for concurrent trials because hyperopt doesn't support parallel mode
        if total_params in self.total_data.values():
            # but duplicate parameters can still occur, though rarely
            total_params = self.get_suggestion(random_search=True)
        self.total_data[parameter_id] = total_params
...
@@ -25,7 +25,7 @@ from unittest import TestCase, main
import hyperopt as hp
from nni.hyperopt_tuner.hyperopt_tuner import json2space, json2parameter, json2vals, HyperoptTuner

class HyperoptTunerTestCase(TestCase):
@@ -99,6 +99,29 @@ class HyperoptTunerTestCase(TestCase):
        self.assertEqual(out_y["root[optimizer]-choice"], 0)
        self.assertEqual(out_y["root[learning_rate]-choice"], 1)
    def test_tuner_generate(self):
        for algorithm in ["tpe", "random_search", "anneal"]:
            tuner = HyperoptTuner(algorithm)
            choice_list = ["a", "b", 1, 2]
            tuner.update_search_space({
                "a": {
                    "_type": "randint",
                    "_value": [1, 3]
                },
                "b": {
                    "_type": "choice",
                    "_value": choice_list
                }
            })
            for k in range(30):
                # sample multiple times
                param = tuner.generate_parameters(k)
                print(param)
                self.assertIsInstance(param["a"], int)
                self.assertGreaterEqual(param["a"], 1)
                self.assertLessEqual(param["a"], 2)
                self.assertIn(param["b"], choice_list)

if __name__ == '__main__':
    main()
@@ -32,12 +32,14 @@ def choice(options, random_state):
    return random_state.choice(options)

def randint(lower, upper, random_state):
    '''
    Generate a random integer from `lower` (inclusive) to `upper` (exclusive).
    lower: an int that represents the lower bound
    upper: an int that represents the upper bound
    random_state: an object of numpy.random.RandomState
    '''
    return random_state.randint(lower, upper)

def uniform(low, high, random_state):
...
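The change to `randint` above can be summarized with plain numpy, since the function simply delegates to `numpy.random.RandomState.randint` (a quick sketch):

```python
import numpy as np

rng = np.random.RandomState(0)

# Old signature randint(upper): the lower bound was implicitly 0.
old_style = rng.randint(3)     # integer in [0, 3)
# New signature randint(lower, upper): the lower bound is explicit.
new_style = rng.randint(1, 3)  # integer in [1, 3)
print(old_style, new_style)
```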
@@ -88,10 +88,10 @@ def generate_pcs(nni_search_space_content):
                        raise RuntimeError('%s has already existed, please make sure search space has no duplicate key.' % key)
                    categorical_dict[key] = search_space[key]['_value']
                elif search_space[key]['_type'] == 'randint':
                    pcs_fd.write('%s integer [%d, %d] [%d]\n' % (
                        key,
                        search_space[key]['_value'][0],
                        search_space[key]['_value'][1] - 1,
                        search_space[key]['_value'][0]))
                elif search_space[key]['_type'] == 'uniform':
                    pcs_fd.write('%s real %s [%s]\n' % (
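To see what this emits, the format string can be evaluated by hand. For a hypothetical key `a` with `_value` `[1, 3]`, the pcs line declares the inclusive integer range [1, 2] with the lower bound as the default:

```python
key, lower, upper = 'a', 1, 3  # hypothetical randint search space entry
line = '%s integer [%d, %d] [%d]\n' % (key, lower, upper - 1, lower)
print(line, end='')  # a integer [1, 2] [1]
```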
...
@@ -38,7 +38,7 @@ from ConfigSpaceNNI import Configuration
from .convert_ss_to_scenario import generate_scenario
from nni.tuner import Tuner
from nni.utils import OptimizeMode, extract_scalar_reward

class SMACTuner(Tuner):
@@ -139,7 +139,6 @@ class SMACTuner(Tuner):
        search_space:
            search space
        """
        if not self.update_ss_done:
            self.categorical_dict = generate_scenario(search_space)
            if self.categorical_dict is None:
...
@@ -19,11 +19,11 @@
# ==================================================================================================

import numpy as np

from .env_vars import trial_env_vars
from . import trial
from . import parameter_expressions as param_exp
from .nas_utils import classic_mode, enas_mode, oneshot_mode, darts_mode
@@ -47,39 +47,39 @@ __all__ = [
if trial_env_vars.NNI_PLATFORM is None:
    def choice(*options, name=None):
        return param_exp.choice(options, np.random.RandomState())

    def randint(lower, upper, name=None):
        return param_exp.randint(lower, upper, np.random.RandomState())

    def uniform(low, high, name=None):
        return param_exp.uniform(low, high, np.random.RandomState())

    def quniform(low, high, q, name=None):
        assert high > low, 'Upper bound must be larger than lower bound'
        return param_exp.quniform(low, high, q, np.random.RandomState())

    def loguniform(low, high, name=None):
        assert low > 0, 'Lower bound must be positive'
        return param_exp.loguniform(low, high, np.random.RandomState())

    def qloguniform(low, high, q, name=None):
        return param_exp.qloguniform(low, high, q, np.random.RandomState())

    def normal(mu, sigma, name=None):
        return param_exp.normal(mu, sigma, np.random.RandomState())

    def qnormal(mu, sigma, q, name=None):
        return param_exp.qnormal(mu, sigma, q, np.random.RandomState())

    def lognormal(mu, sigma, name=None):
        return param_exp.lognormal(mu, sigma, np.random.RandomState())

    def qlognormal(mu, sigma, q, name=None):
        return param_exp.qlognormal(mu, sigma, q, np.random.RandomState())

    def function_choice(*funcs, name=None):
        return param_exp.choice(funcs, np.random.RandomState())()

    def mutable_layer():
        raise RuntimeError('Cannot call nni.mutable_layer in this mode')
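Because the local-mode helpers now delegate to `nni.parameter_expressions` instead of Python's `random` module, trial code can be smoke-tested outside an experiment. A hedged usage sketch, assuming the package re-exports these helpers via the `__all__` shown above and that `NNI_PLATFORM` is unset so the branch above is active:

```python
import nni

# With NNI_PLATFORM unset, these sample locally instead of asking a tuner.
lr = nni.uniform(0.0001, 0.1)
conv = nni.choice(2, 3, 5, 7)
hidden = nni.randint(1, 3)  # 1 or 2: the upper bound stays exclusive
print(lr, conv, hidden)
```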
@@ -89,7 +89,7 @@ else:
    def choice(options, name=None, key=None):
        return options[_get_param(key)]

    def randint(lower, upper, name=None, key=None):
        return _get_param(key)

    def uniform(low, high, name=None, key=None):
...
@@ -111,23 +111,3 @@ def init_dispatcher_logger():
    if dispatcher_env_vars.NNI_LOG_DIRECTORY is not None:
        logger_file_path = os.path.join(dispatcher_env_vars.NNI_LOG_DIRECTORY, logger_file_path)
    init_logger(logger_file_path, dispatcher_env_vars.NNI_LOG_LEVEL)