# Search Space

## Overview

In NNI, the tuner samples parameters/architectures according to the search space, which is defined as a JSON file.

To define a search space, users should specify the name of each variable, the type of its sampling strategy, and the strategy's parameters.

* An example of a search space definition is as follows:

```json
{
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
    "conv_size": {"_type": "choice", "_value": [2, 3, 5, 7]},
    "hidden_size": {"_type": "choice", "_value": [124, 512, 1024]},
    "batch_size": {"_type": "choice", "_value": [50, 250, 500]},
    "learning_rate": {"_type": "uniform", "_value": [0.0001, 0.1]}
}

```

Take the first line as an example: `dropout_rate` is defined as a variable whose prior distribution is a uniform distribution over the range from `0.1` to `0.5`.

Note that what a search space can express depends heavily on the tuner you use. We list the supported types for each built-in tuner below. For a customized tuner, you do not have to follow our convention and have the flexibility to define any type you want.

## Types

All types of sampling strategies and their parameters are listed here:

* `{"_type": "choice", "_value": options}`
  * Which means the variable's value is one of the options. Here `options` should be a list of numbers or a list of strings. Using arbitrary objects as members of this list (such as sublists, a mixture of numbers and strings, or null values) should work in most cases, but may trigger undefined behavior.
  * `options` can also contain nested sub-search-spaces. A sub-search-space takes effect only when the corresponding element is chosen, so the variables in it can be seen as conditional variables. If an element in the options list is a dict, it is a sub-search-space, and for our built-in tuners you have to add a key `_name` to this dict, which helps you identify which element was chosen (a minimal sketch follows the list of tuners below). Here is a simple [example of a nested search space definition](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-nested-search-space/search_space.json), and here is a [sample](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-nested-search-space/sample.json) that users can get from NNI with a nested search space definition. The tuners that support nested search spaces are as follows:

    - Random Search 
    - TPE
    - Anneal
    - Evolution
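
For illustration, here is a minimal sketch of a nested search space; the variable and element names (`layer0`, `conv`, `avg_pool`, `kernel_size`, `pooling_size`) are made up for this sketch rather than taken from the linked example:

```json
{
    "layer0": {"_type": "choice", "_value": [
        {"_name": "empty"},
        {"_name": "conv",
         "kernel_size": {"_type": "choice", "_value": [1, 2, 3, 5]}},
        {"_name": "avg_pool",
         "pooling_size": {"_type": "choice", "_value": [2, 3, 5]}}
    ]},
    "learning_rate": {"_type": "uniform", "_value": [0.0001, 0.1]}
}
```

Here `kernel_size` is sampled only when the tuner picks the `conv` element for `layer0`; in the parameters delivered to a trial, the chosen element can be identified by its `_name` value.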
* `{"_type": "randint", "_value": [lower, upper]}`
  * Choosing a random integer from `lower` (inclusive) to `upper` (exclusive).
  * Note: Different tuners may interpret `randint` differently. Some (e.g., TPE, GridSearch) treat the integers from `lower` to `upper` as unordered categories, while others respect the ordering (e.g., SMAC). If you want all tuners to respect the ordering, use `quniform` with `q=1` instead, as sketched below.
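
For example, the following sketch (the variable names are hypothetical) defines the same set of integers, 0 through 9, in both ways; only the `quniform` form guarantees that every tuner treats the values as ordered:

```json
{
    "num_layers_unordered": {"_type": "randint", "_value": [0, 10]},
    "num_layers_ordered": {"_type": "quniform", "_value": [0, 9, 1]}
}
```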

* `{"_type": "uniform", "_value": [low, high]}`
  * Which means the variable value is sampled uniformly between `low` and `high`.
  * When optimizing, this variable is constrained to a two-sided interval.

* `{"_type": "quniform", "_value": [low, high, q]}`
  * Which means the variable value is a value like `clip(round(uniform(low, high) / q) * q, low, high)`, where the clip operation is used to constrain the generated value within the bounds. For example, for `_value` specified as `[0, 10, 2.5]`, possible values are `[0, 2.5, 5.0, 7.5, 10.0]`; for `_value` specified as `[2, 10, 5]`, possible values are `[2, 5, 10]`.
  * Suitable for a discrete value with respect to which the objective is still somewhat "smooth", but which should be bounded both above and below. If you want to uniformly choose an integer from a range `[low, high]`, you can write `_value` like this: `[low, high, 1]`.

* `{"_type": "loguniform", "_value": [low, high]}`
  * Which means the variable value is drawn from the range `[low, high]` according to a log-uniform distribution, i.e. `exp(uniform(log(low), log(high)))`, so that the logarithm of the return value is uniformly distributed.
  * When optimizing, this variable is constrained to be positive. A common use is sketched below.
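
A common use (sketched here with a hypothetical range) is searching a learning rate across several orders of magnitude, so that each decade in the range is explored with equal probability:

```json
{
    "learning_rate": {"_type": "loguniform", "_value": [0.00001, 0.1]}
}
```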

* `{"_type": "qloguniform", "_value": [low, high, q]}`
  * Which means the variable value is a value like `clip(round(loguniform(low, high) / q) * q, low, high)`, where the clip operation is used to constrain the generated value within the bounds.
  * Suitable for a discrete variable with respect to which the objective is "smooth" and gets smoother with the size of the value, but which should be bounded both above and below.

* `{"_type": "normal", "_value": [mu, sigma]}`
  * Which means the variable value is a real value that's normally-distributed with mean mu and standard deviation sigma. When optimizing, this is an unconstrained variable.

* `{"_type": "qnormal", "_value": [mu, sigma, q]}`
  * Which means the variable value is a value like `round(normal(mu, sigma) / q) * q`
  * Suitable for a discrete variable that probably takes a value around mu, but is fundamentally unbounded.

* `{"_type": "lognormal", "_value": [mu, sigma]}`
  * Which means the variable value is a value drawn according to `exp(normal(mu, sigma))` so that the logarithm of the return value is normally distributed. When optimizing, this variable is constrained to be positive.

* `{"_type": "qlognormal", "_value": [mu, sigma, q]}`
  * Which means the variable value is a value like `round(exp(normal(mu, sigma)) / q) * q`
  * Suitable for a discrete variable with respect to which the objective is smooth and gets smoother with the size of the variable, which is bounded from one side.

## Search Space Types Supported by Each Tuner

|                   | choice  | randint | uniform | quniform | loguniform | qloguniform | normal  | qnormal | lognormal | qlognormal |
|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| TPE Tuner         | ✓ | ✓ | ✓ | ✓  | ✓    | ✓     | ✓ | ✓ | ✓   | ✓    |
| Random Search Tuner| ✓ | ✓ | ✓ | ✓  | ✓    | ✓     | ✓ | ✓ | ✓   | ✓    |
| Anneal Tuner   | ✓ | ✓ | ✓ | ✓  | ✓    | ✓     | ✓ | ✓ | ✓   | ✓    |
| Evolution Tuner   | ✓ | ✓ | ✓ | ✓  | ✓    | ✓     | ✓ | ✓ | ✓   | ✓    |
| SMAC Tuner        | ✓ | ✓ | ✓ | ✓  | ✓    |      |  |  |    |     |
| Batch Tuner       | ✓ |  |  |   |     |      |  |  |    |     |
| Grid Search Tuner | ✓ | ✓ |  | ✓ |     | |  |  |    |     |
| Hyperband Advisor | ✓ | ✓ | ✓ | ✓  | ✓    | ✓     | ✓ | ✓ | ✓   | ✓    |
| Metis Tuner   | ✓ | ✓ | ✓ | ✓  |     |      |  |  |    |     |
| GP Tuner   | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |  |  |  |  |

Known Limitations:

* GP Tuner and Metis Tuner support only **numerical values** in the search space (`choice`-type values can be non-numerical, e.g. string values, with other tuners). Both GP Tuner and Metis Tuner use a Gaussian Process Regressor (GPR). GPR makes predictions based on a kernel function and the "distance" between different points, and it is hard to define a meaningful distance between non-numerical values (see the sketch below).
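
For example, in the sketch below (the variable names are hypothetical), the numerical `hidden_size` entry works with GP Tuner and Metis Tuner, while the string-valued `activation` entry should be reserved for tuners that accept non-numerical `choice` values:

```json
{
    "hidden_size": {"_type": "choice", "_value": [128, 256, 512]},
    "activation": {"_type": "choice", "_value": ["relu", "tanh", "sigmoid"]}
}
```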

* Note that for nested search spaces:

    * Only the Random Search, TPE, Anneal, and Evolution tuners support nested search spaces.

    * We do not support nested search spaces on the "Hyper Parameter" visualization page yet; this enhancement is being considered in [#1110](https://github.com/microsoft/nni/issues/1110). Any suggestions, discussions, or contributions are warmly welcomed.