Unverified commit 9fb25ccc authored by SparkSnail, committed by GitHub

Merge pull request #189 from microsoft/master

merge master
parents 1500458a 7c4bc33b
@@ -12,7 +12,7 @@ e.g. Three machines and you login in with account `bob` (Note: the account is no
## Setup NNI environment
Install NNI on each of your machines following the install guide [here](../Tutorial/QuickStart.md).
## Run an experiment
...
@@ -98,7 +98,7 @@ If you like to tune `num_leaves`, `learning_rate`, `bagging_fraction` and `baggi
}
```
More supported variable types can be found [here](../Tutorial/SearchSpaceSpec.md).
### 3.3 Add the NNI SDK into your code.
...
# Scikit-learn in NNI
[Scikit-learn](https://github.com/scikit-learn/scikit-learn) is a popular machine learning tool for data mining and data analysis. It supports many kinds of machine learning models like LinearRegression, LogisticRegression, DecisionTree, SVM etc. Making the use of scikit-learn more efficient is a valuable topic.
NNI supports many kinds of tuning algorithms to search for the best models and/or hyper-parameters for scikit-learn, and supports many kinds of environments like local machine, remote servers and cloud.
## 1. How to run the example
To start using NNI, you should install the NNI package and use the command line tool `nnictl` to start an experiment. For more information about installation and preparing the environment, please refer [here](../Tutorial/QuickStart.md).
After you have installed NNI, you can enter the corresponding folder and start the experiment using the following commands:
```bash
@@ -17,16 +19,18 @@ nnictl create --config ./config.yml
### 2.1 classification
This example uses the digits dataset, which is made up of 1797 8x8 images. Each image is a hand-written digit, and the goal is to classify these images into 10 classes.
In this example, we use SVC as the model and choose some parameters of this model, including `"C"`, `"kernel"`, `"degree"`, `"gamma"` and `"coef0"`. For more information about these parameters, please refer [here](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).
### 2.2 regression
This example uses the Boston Housing Dataset, which consists of the prices of houses in various places in Boston, together with information such as the crime rate (CRIM), the amount of non-retail business in the town (INDUS), the age of the houses (AGE), etc., used to predict Boston house prices.
In this example, we tune different kinds of regression models, including `"LinearRegression"`, `"SVR"`, `"KNeighborsRegressor"` and `"DecisionTreeRegressor"`, and some parameters like `"svr_kernel"` and `"knr_weights"`. You can get more details about these models from [here](https://scikit-learn.org/stable/supervised_learning.html#supervised-learning).
## 3. How to write scikit-learn code using NNI
It is easy to use NNI in your scikit-learn code; there are only a few steps.
* __step 1__
@@ -51,8 +55,10 @@ It is easy to use nni in your sklearn code, there are only a few steps.
Then you can read these values as a dict in your Python code; see step 2.
* __step 2__
At the beginning of your Python code, you should `import nni` to ensure the package works normally.
First, use the `nni.get_next_parameter()` function to get the parameters given by NNI. Then you can use these parameters to update your code.
For example, you might define your search_space.json in a format like the following:
```json
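{
    "C": {"_type": "uniform", "_value": [0.1, 1.0]},
    "kernel": {"_type": "choice", "_value": ["linear", "rbf", "poly", "sigmoid"]},
    "degree": {"_type": "choice", "_value": [1, 2, 3, 4]},
    "gamma": {"_type": "uniform", "_value": [0.01, 0.1]},
    "coef0": {"_type": "uniform", "_value": [0.01, 0.1]}
}
```
(The parameter names above follow the SVC classification example; the ranges are an illustrative sketch rather than the exact values shipped with the example.)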
@@ -79,5 +85,7 @@ It is easy to use nni in your sklearn code, there are only a few steps.
Then you could use these variables to write your scikit-learn code.
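A minimal sketch of this step for the classification example might look like the following; the parameter names match the illustrative search space above, and the default values are only fallbacks for running the script without NNI.

```python
import nni
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Defaults, overridden by whatever parameters NNI sends for this trial.
params = {"C": 1.0, "kernel": "rbf", "degree": 3, "gamma": 0.01, "coef0": 0.01}
params.update(nni.get_next_parameter())

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=99)

model = SVC(C=params["C"], kernel=params["kernel"], degree=params["degree"],
            gamma=params["gamma"], coef0=params["coef0"])
model.fit(X_train, y_train)
```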
* __step 3__
After you finish your training, you can compute a score for your model, such as precision, recall or MSE. NNI needs this score for the tuner algorithm to generate the next group of parameters, so please report the score back to NNI and start the next trial job.
You just need to use `nni.report_final_result(score)` to communicate with NNI after your scikit-learn code finishes. Or, if you have multiple scores during the steps of training, you can also report them back to NNI using `nni.report_intermediate_result(score)`. Note that you do not have to report intermediate results of your job, but you must report back your final result.
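Continuing the sketch above, reporting the final score could be as simple as:

```python
score = model.score(X_test, y_test)  # accuracy on the held-out test set
nni.report_final_result(score)       # hand the final metric back to NNI
```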
@@ -12,7 +12,7 @@ Since attention and RNN have been proven effective in Reading Comprehension, we
6. ADD-SKIP (Identity between random layers).
7. REMOVE-SKIP (Removes random skip).
![](../../../examples/trials/ga_squad/ga_squad.png)
### New version
We also have another version whose time cost is lower and performance is better. We will release it soon.
...
@@ -20,7 +20,7 @@ An example is shown below:
}
```
Refer to [SearchSpaceSpec.md](../Tutorial/SearchSpaceSpec.md) to learn more about search space. Tuner will generate configurations from this search space, that is, choosing a value for each hyperparameter from the range.
### Step 2 - Update model codes
@@ -33,7 +33,9 @@ Refer to [SearchSpaceSpec.md](./SearchSpaceSpec.md) to learn more about search s
```python
RECEIVED_PARAMS = nni.get_next_parameter()
```
`RECEIVED_PARAMS` is an object, for example:
`{"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}`.
- Report metric data periodically (optional)
@@ -41,14 +43,15 @@ RECEIVED_PARAMS = nni.get_next_parameter()
```python
nni.report_intermediate_result(metrics)
```
`metrics` can be any Python object. If users use an NNI built-in tuner/assessor, `metrics` can only have two formats: 1) a number (e.g., float, int), 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to the [assessor](../Assessor/BuiltinAssessor.md). Usually, `metrics` is the periodically evaluated loss or accuracy.
- Report performance of the configuration
```python
nni.report_final_result(metrics)
```
`metrics` can also be any Python object. If users use an NNI built-in tuner/assessor, `metrics` follows the same format rule as in `report_intermediate_result`; the number indicates the model's performance, for example, the model's accuracy, loss, etc. This `metrics` is reported to the [tuner](../Tuner/BuiltinTuner.md). A short sketch combining these calls is shown after this list.
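Putting the three calls above together, a minimal trial might be structured like the sketch below; `build_model`, `train_one_epoch` and `evaluate` are placeholders for your own training code, not NNI APIs.

```python
import nni

params = nni.get_next_parameter()   # e.g. {"conv_size": 2, "hidden_size": 124, ...}

model = build_model(params)         # placeholder: build the model from the tuned hyperparameters
for epoch in range(10):
    train_one_epoch(model)          # placeholder: one pass over the training data
    accuracy = evaluate(model)      # placeholder: compute the metric to report
    nni.report_intermediate_result(accuracy)  # periodic metric, consumed by the assessor

nni.report_final_result(accuracy)   # final metric, consumed by the tuner
```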
### Step 3 - Enable NNI API
@@ -59,11 +62,10 @@ useAnnotation: false
searchSpacePath: /path/to/your/search_space.json
```
You can refer to [here](../Tutorial/ExperimentConfig.md) for more information about how to set up experiment configurations.
*Please refer to [here](https://nni.readthedocs.io/en/latest/sdk_reference.html) for more APIs (e.g., `nni.get_sequence_id()`) provided by NNI.*
<a name="nni-annotation"></a>
## NNI Python Annotation
@@ -115,7 +117,7 @@ with tf.Session() as sess:
- `@nni.variable` will take effect on its following line, which is an assignment statement whose left-hand value must be specified by the keyword `name` in `@nni.variable`.
- `@nni.report_intermediate_result`/`@nni.report_final_result` will send the data to the assessor/tuner at that line (a short sketch follows below).
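As a rough sketch (the variable names are illustrative, and the syntax follows the annotation spec linked below), annotated code looks like this:

```python
"""@nni.variable(nni.choice(0.01, 0.001), name=learning_rate)"""
learning_rate = 0.01  # the annotation on the previous line overrides this assignment per trial

# ... build and train the model using learning_rate, computing test_acc ...

"""@nni.report_intermediate_result(test_acc)"""  # sent to the assessor at this line
"""@nni.report_final_result(test_acc)"""         # sent to the tuner at this line
```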
For more information about annotation syntax and its usage, please refer to [Annotation](../Tutorial/AnnotationSpec.md).
### Step 2 - Enable NNI Annotation
@@ -125,7 +127,6 @@ In the YAML configure file, you need to set *useAnnotation* to true to enable NN
useAnnotation: true
```
## Where are my trials?
### Local Mode
@@ -133,7 +134,8 @@ useAnnotation: true
In NNI, every trial has a dedicated directory to output its own data. In each trial, an environment variable called `NNI_OUTPUT_DIR` is exported. Under this directory, you can find each trial's code, data and other possible logs. In addition, each trial's log (including stdout) will be redirected to a file named `trial.log` under that directory.
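As a small illustration (standard library only), a trial can locate this directory from the environment and write extra output next to `trial.log`:

```python
import os

# NNI exports NNI_OUTPUT_DIR for every trial; fall back to the current directory when run standalone.
output_dir = os.environ.get("NNI_OUTPUT_DIR", ".")
with open(os.path.join(output_dir, "extra_output.txt"), "w") as f:
    f.write("anything the trial wants to keep alongside trial.log\n")
```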
If NNI Annotation is used, the trial's converted code is in another temporary directory. You can check that in a file named `run.sh` under the directory indicated by `NNI_OUTPUT_DIR`. The second line (i.e., the `cd` command) of this file changes to the actual directory where the code is located. Below is an example of `run.sh`:
```bash
#!/bin/bash
cd /tmp/user_name/nni/annotation/tmpzj0h72x6 #This is the actual directory
export NNI_PLATFORM=local
@@ -149,9 +151,9 @@ echo $? `date +%s%3N` >/home/user_name/nni/experiments/$experiment_id$/trials/$t
### Other Modes
When running trials on other platforms like remote machine or PAI, the environment variable `NNI_OUTPUT_DIR` only refers to the output directory of the trial, while the trial code and `run.sh` might not be there. However, `trial.log` will be transmitted back to the local machine into the trial's directory, which defaults to `~/nni/experiments/$experiment_id$/trials/$trial_id$/`.
For more information, please refer to [HowToDebug](../Tutorial/HowToDebug.md).
<a name="more-examples"></a>
## More Trial Examples
...
@@ -3,6 +3,6 @@ Batch Tuner on NNI
## Batch Tuner
Batch tuner allows users to simply provide several configurations (i.e., choices of hyper-parameters) for their trial code. After finishing all the configurations, the experiment is done. Batch tuner only supports the type `choice` in the [search space spec](../Tutorial/SearchSpaceSpec.md).
Suggested scenario: If the configurations you want to try have already been decided, you can list them in the SearchSpace file (using `choice`) and run them using the batch tuner.
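For reference, the batch tuner's SearchSpace file uses a high-level `combine_params` key whose type is `choice`; each listed element is one complete configuration to run. A hedged sketch (the hyper-parameter names are illustrative) could look like this:

```json
{
    "combine_params": {
        "_type": "choice",
        "_value": [
            {"optimizer": "Adam", "learning_rate": 0.001},
            {"optimizer": "Adam", "learning_rate": 0.0001},
            {"optimizer": "SGD", "learning_rate": 0.01}
        ]
    }
}
```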
@@ -12,7 +12,7 @@ Below we divide introduction of the BOHB process into two parts:
We follow Hyperband’s way of choosing the budgets and continue to use SuccessiveHalving; for more details, you can refer to [Hyperband in NNI](HyperbandAdvisor.md) and the [reference paper of Hyperband](https://arxiv.org/abs/1603.06560). This procedure is summarized by the pseudocode below.
![](../../img/bohb_1.png)
### BO (Bayesian Optimization)
@@ -20,11 +20,11 @@ The BO part of BOHB closely resembles TPE, with one major difference: we opted f
Tree Parzen Estimator (TPE): uses a KDE (kernel density estimator) to model the densities.
![](../../img/bohb_2.png)
To fit useful KDEs, we require a minimum number of data points Nmin; this is set to d + 1 for our experiments, where d is the number of hyperparameters. To build a model as early as possible, we do not wait until Nb = |Db|, the number of observations for budget b, is large enough to satisfy q · Nb ≥ Nmin. Instead, after initializing with Nmin + 2 random configurations, we choose the
![](../../img/bohb_3.png)
best and worst configurations, respectively, to model the two densities.
@@ -32,14 +32,14 @@ Note that we also sample a constant fraction named **random fraction** of the co
## 2. Workflow
![](../../img/bohb_6.jpg)
This image shows the workflow of BOHB. Here we set max_budget = 9, min_budget = 1, eta = 3, and others as default. In this case, s_max = 2, so we continuously run the {s=2, s=1, s=0, s=2, s=1, s=0, ...} cycle. In each stage of SuccessiveHalving (the orange box), we pick the top 1/eta configurations and run them again with more budget, repeating the SuccessiveHalving stages until the end of this iteration. At the same time, we collect the configurations, budgets and final metrics of each trial, and use these to build a multidimensional KDE model with the key "budget".
Multidimensional KDE is used to guide the selection of configurations for the next iteration.
The sampling procedure (using the multidimensional KDE to guide the selection) is summarized by the pseudocode below.
![](../../img/bohb_4.png)
## 3. Usage
@@ -51,7 +51,7 @@ nnictl package install --name=BOHB
To use BOHB, you should add the following spec in your experiment's YAML config file:
```yaml
advisor:
  builtinAdvisorName: BOHB
  classArgs:
@@ -96,6 +96,6 @@ code implementation: [examples/trials/mnist-advisor](https://github.com/Microsof
We chose BOHB to build a CNN on the MNIST dataset. The following are our final experimental results:
![](../../img/bohb_5.png)
More experimental results can be found in the [reference paper](https://arxiv.org/abs/1807.01774); we can see that BOHB makes good use of previous results and achieves a balanced trade-off between exploration and exploitation.
# Built-in Tuners
NNI provides state-of-the-art tuning algorithms as our built-in tuners and makes them easy to use. Below is a brief summary of NNI's current built-in tuners:
Note: Click the **Tuner's name** to get each tuner's installation requirements, suggested scenario and usage example. A link to the detailed description of each algorithm is at the end of the suggested scenario for each tuner. Here is an [article](../CommunitySharings/HpoComparision.md) about the comparison of different tuners on several problems.
Currently we support the following algorithms:
@@ -11,28 +11,27 @@ Currently we support the following algorithms:
|[__TPE__](#TPE)|The Tree-structured Parzen Estimator (TPE) is a sequential model-based optimization (SMBO) approach. SMBO methods sequentially construct models to approximate the performance of hyperparameters based on historical measurements, and then subsequently choose new hyperparameters to test based on this model. [Reference Paper](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf)|
|[__Random Search__](#Random)|*Random Search for Hyper-Parameter Optimization* shows that Random Search might be surprisingly simple and effective. We suggest using Random Search as the baseline when there is no knowledge about the prior distribution of hyper-parameters. [Reference Paper](http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf)|
|[__Anneal__](#Anneal)|This simple annealing algorithm begins by sampling from the prior, but tends over time to sample from points closer and closer to the best ones observed. This algorithm is a simple variation on random search that leverages smoothness in the response surface. The annealing rate is not adaptive.|
|[__Naïve Evolution__](#Evolution)|Naïve Evolution comes from Large-Scale Evolution of Image Classifiers. It randomly initializes a population based on the search space. For each generation, it chooses better ones and does some mutation (e.g., change a hyperparameter, add/remove one layer) on them to get the next generation. Naïve Evolution requires many trials to work, but it's very simple and easy to extend with new features. [Reference paper](https://arxiv.org/pdf/1703.01041.pdf)|
|[__SMAC__](#SMAC)|SMAC is based on Sequential Model-Based Optimization (SMBO). It adapts the most prominent previously used model class (Gaussian stochastic process models) and introduces the model class of random forests to SMBO, in order to handle categorical parameters. The SMAC supported by NNI is a wrapper on the SMAC3 GitHub repo. Notice that SMAC needs to be installed by the `nnictl package` command. [Reference Paper,](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) [GitHub Repo](https://github.com/automl/SMAC3)|
|[__Batch tuner__](#Batch)|Batch tuner allows users to simply provide several configurations (i.e., choices of hyper-parameters) for their trial code. After finishing all the configurations, the experiment is done. Batch tuner only supports the type choice in the search space spec.|
|[__Grid Search__](#GridSearch)|Grid Search performs an exhaustive search through a manually specified subset of the hyperparameter space defined in the searchspace file. Note that the only acceptable types of search space are choice, quniform, qloguniform. The number q in quniform and qloguniform has a special meaning (different from the spec in the search space spec): it means the number of values that will be sampled evenly from the range low and high.|
|[__Hyperband__](#Hyperband)|Hyperband tries to use limited resources to explore as many configurations as possible, and finds the promising ones to get the final result. The basic idea is to generate many configurations and run them with a small trial budget to find the promising ones, then further train those promising ones to select several that are even more promising. [Reference Paper](https://arxiv.org/pdf/1603.06560.pdf)|
|[__Network Morphism__](#NetworkMorphism)|Network Morphism provides functions to automatically search for architectures of deep learning models. Every child network inherits the knowledge from its parent network and morphs into diverse types of networks, including changes of depth, width, and skip-connection. Next, it estimates the value of a child network using the historic architecture and metric pairs. Then it selects the most promising one to train. [Reference Paper](https://arxiv.org/abs/1806.10282)|
|[__Metis Tuner__](#MetisTuner)|Metis offers the following benefits when it comes to tuning parameters: While most tools only predict the optimal configuration, Metis gives you two outputs: (a) a current prediction of the optimal configuration, and (b) a suggestion for the next trial. No more guesswork. While most tools assume training datasets do not have noisy data, Metis actually tells you if you need to re-sample a particular hyper-parameter. [Reference Paper](https://www.microsoft.com/en-us/research/publication/metis-robustly-tuning-tail-latencies-cloud-systems/)|
|[__BOHB__](#BOHB)|BOHB is a follow-up work of Hyperband. It targets the weakness of Hyperband that new configurations are generated randomly without leveraging finished trials. For the name BOHB, HB means Hyperband, BO means Bayesian Optimization. BOHB leverages finished trials by building multiple TPE models; a proportion of new configurations are generated through these models. [Reference Paper](https://arxiv.org/abs/1807.01774)|
|[__GP Tuner__](#GPTuner)|Gaussian Process Tuner is a sequential model-based optimization (SMBO) approach with a Gaussian Process as the surrogate. [Reference Paper](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf), [GitHub Repo](https://github.com/fmfn/BayesianOptimization)|
## Usage of Built-in Tuners
Using a built-in tuner provided by the NNI SDK requires declaring the **builtinTunerName** and **classArgs** in the `config.yml` file. In this part, we introduce the detailed usage, with suggested scenarios, classArgs requirements and an example for each tuner.
Note: Please follow the format when you write your `config.yml` file. Some built-in tuners need to be installed by `nnictl package`, like SMAC.
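For example, declaring the TPE tuner in `config.yml` looks like the snippet below; the exact `classArgs` vary per tuner and are described in each tuner's section.

```yaml
# config.yml (illustrative snippet)
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize
```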
<a name="TPE"></a>
![](https://placehold.it/15/1589F0/000000?text=+) `TPE`
> Built-in Tuner Name: **TPE**
**Suggested scenario**
@@ -59,7 +58,7 @@ tuner:
![](https://placehold.it/15/1589F0/000000?text=+) `Random Search`
> Built-in Tuner Name: **Random**
**Suggested scenario**
@@ -83,7 +82,7 @@ tuner:
![](https://placehold.it/15/1589F0/000000?text=+) `Anneal`
> Built-in Tuner Name: **Anneal**
**Suggested scenario**
@@ -108,9 +107,9 @@ tuner:
<a name="Evolution"></a>
![](https://placehold.it/15/1589F0/000000?text=+) `Naïve Evolution`
> Built-in Tuner Name: **Evolution**
**Suggested scenario**
@@ -133,9 +132,9 @@ tuner:
![](https://placehold.it/15/1589F0/000000?text=+) `SMAC`
> Built-in Tuner Name: **SMAC**
**Please note that SMAC doesn't support running on Windows currently. The specific reason can be found in this [GitHub issue](https://github.com/automl/SMAC3/issues/483).**
**Installation**
@@ -169,7 +168,7 @@ tuner:
![](https://placehold.it/15/1589F0/000000?text=+) `Batch Tuner`
> Built-in Tuner Name: BatchTuner
**Suggested scenario**
@@ -208,11 +207,11 @@ The search space file including the high-level key `combine_params`. The type of
![](https://placehold.it/15/1589F0/000000?text=+) `Grid Search`
> Built-in Tuner Name: **Grid Search**
**Suggested scenario**
Note that the only acceptable types of search space are `choice`, `quniform`, `qloguniform`. **The number `q` in `quniform` and `qloguniform` has special meaning (different from the spec in [search space spec](../Tutorial/SearchSpaceSpec.md)). It means the number of values that will be sampled evenly from the range `low` and `high`.**
It is suggested when the search space is small enough that it is feasible to exhaustively sweep the whole search space. [Detailed Description](./GridsearchTuner.md)
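As an illustration of that special meaning of `q` (the parameter name below is hypothetical): with the following search space, Grid Search would try 4 evenly spaced values of `dropout_rate` between 0.1 and 0.7.

```json
{
    "dropout_rate": {"_type": "quniform", "_value": [0.1, 0.7, 4]}
}
```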
@@ -230,7 +229,7 @@ tuner:
![](https://placehold.it/15/1589F0/000000?text=+) `Hyperband`
> Built-in Advisor Name: **Hyperband**
**Suggested scenario**
@@ -260,11 +259,11 @@ advisor:
![](https://placehold.it/15/1589F0/000000?text=+) `Network Morphism`
> Built-in Tuner Name: **NetworkMorphism**
**Installation**
NetworkMorphism requires [PyTorch](https://pytorch.org/get-started/locally) and [Keras](https://keras.io/#installation), so users should install them first. The corresponding requirements file is [here](https://github.com/microsoft/nni/blob/master/examples/trials/network_morphism/requirements.txt).
**Suggested scenario**
@@ -298,13 +297,13 @@ tuner:
![](https://placehold.it/15/1589F0/000000?text=+) `Metis Tuner`
> Built-in Tuner Name: **MetisTuner**
Note that the only acceptable types of search space are `choice`, `quniform`, `uniform` and `randint`.
**Suggested scenario**
Similar to TPE and SMAC, Metis is a black-box tuner. If your system takes a long time to finish each trial, Metis is more favorable than other approaches such as random search. Furthermore, Metis provides guidance on the subsequent trial. Here is an [example](https://github.com/Microsoft/nni/tree/master/examples/trials/auto-gbdt/search_space_metis.json) about the use of Metis. Users only need to send the final result, such as `accuracy`, to the tuner by calling the NNI SDK. [Detailed Description](./MetisTuner.md)
**Requirement of classArg**
@@ -326,7 +325,7 @@ tuner:
![](https://placehold.it/15/1589F0/000000?text=+) `BOHB Advisor`
> Built-in Tuner Name: **BOHB**
**Installation**
@@ -338,7 +337,7 @@ nnictl package install --name=BOHB
**Suggested scenario**
Similar to Hyperband, it is suggested when you have limited computation resources but a relatively large search space. It performs well in scenarios where the intermediate result (e.g., accuracy) can reflect the quality of the final result (e.g., accuracy) to some extent. In this case, it may converge to a better configuration thanks to the use of Bayesian optimization. [Detailed Description](./BohbAdvisor.md)
**Requirement of classArg**
@@ -357,7 +356,7 @@ Similar to Hyperband, it is suggested when you have limited computation resource
**Usage example**
```yaml
advisor:
  builtinAdvisorName: BOHB
  classArgs:
@@ -366,3 +365,45 @@ advisor:
    max_budget: 27
    eta: 3
```
<a name="GPTuner"></a>
![](https://placehold.it/15/1589F0/000000?text=+) `GP Tuner`
> Built-in Tuner Name: **GPTuner**
Note that the only acceptable types of search space are `choice`, `randint`, `uniform`, `quniform`, `loguniform`, `qloguniform`.
**Suggested scenario**
As a strategy in the Sequential Model-Based Global Optimization (SMBO) family, GP Tuner uses a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is cheaper (in the computational sense), and common tools can be employed. Therefore GP Tuner is most adequate for situations where the function to be optimized is very expensive to evaluate. GP can be used when the computation resource is limited. However, GP Tuner has a computational cost that grows as *O(N^3)* due to the requirement of inverting the Gram matrix, so it is not suitable when many trials are needed. [Detailed Description](./GPTuner.md)
**Requirement of classArg**
* **optimize_mode** (*'maximize' or 'minimize', optional, default = 'maximize'*) - If 'maximize', the tuner will target to maximize metrics. If 'minimize', the tuner will target to minimize metrics.
* **utility** (*'ei', 'ucb' or 'poi', optional, default = 'ei'*) - The kind of utility function (acquisition function). 'ei', 'ucb' and 'poi' correspond to 'Expected Improvement', 'Upper Confidence Bound' and 'Probability of Improvement' respectively.
* **kappa** (*float, optional, default = 5*) - Used by the 'ucb' utility function. The bigger `kappa` is, the more exploratory the tuner will be.
* **xi** (*float, optional, default = 0*) - Used by the 'ei' and 'poi' utility functions. The bigger `xi` is, the more exploratory the tuner will be.
* **nu** (*float, optional, default = 2.5*) - Used to specify the Matérn kernel. The smaller `nu` is, the less smooth the approximated function is.
* **alpha** (*float, optional, default = 1e-6*) - Used to specify Gaussian Process Regressor. Larger values correspond to increased noise level in the observations.
* **cold_start_num** (*int, optional, default = 10*) - Number of random exploration to perform before Gaussian Process. Random exploration can help by diversifying the exploration space.
* **selection_num_warm_up** (*int, optional, default = 1e5*) - Number of random points to evaluate for getting the point which maximizes the acquisition function.
* **selection_num_starting_points** (*int, optional, default = 250*) - Number of times to run L-BFGS-B from a random starting point after the warmup.
**Usage example**
```yaml
# config.yml
tuner:
  builtinTunerName: GPTuner
  classArgs:
    optimize_mode: maximize
    utility: 'ei'
    kappa: 5.0
    xi: 0.0
    nu: 2.5
    alpha: 1e-6
    cold_start_num: 10
    selection_num_warm_up: 100000
    selection_num_starting_points: 250
```
GP Tuner on NNI
===
## GP Tuner
Bayesian optimization works by constructing a posterior distribution of functions (Gaussian Process here) that best describes the function you want to optimize. As the number of observations grows, the posterior distribution improves, and the algorithm becomes more certain of which regions in parameter space are worth exploring and which are not.
GP Tuner is designed to minimize/maximize the number of steps required to find a combination of parameters that are close to the optimal combination. To do so, this method uses a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is cheaper (in the computational sense) and common tools can be employed. Therefore Bayesian Optimization is most adequate for situations where sampling the function to be optimized is a very expensive endeavor.
This optimization approach is described in Section 3 of [Algorithms for Hyper-Parameter Optimization](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf).
@@ -3,4 +3,4 @@ Grid Search on NNI
## Grid Search
Grid Search performs an exhaustive search through a manually specified subset of the hyperparameter space defined in the searchspace file. Note that the only acceptable types of search space are `choice`, `quniform`, `qloguniform`. **The number `q` in `quniform` and `qloguniform` has special meaning (different from the spec in [search space spec](../Tutorial/SearchSpaceSpec.md)). It means the number of values that will be sampled evenly from the range `low` and `high`.**
@@ -5,4 +5,4 @@ SMAC Tuner on NNI
[SMAC](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) is based on Sequential Model-Based Optimization (SMBO). It adapts the most prominent previously used model class (Gaussian stochastic process models) and introduces the model class of random forests to SMBO, in order to handle categorical parameters. The SMAC supported by NNI is a wrapper on [the SMAC3 GitHub repo](https://github.com/automl/SMAC3).
Note that SMAC on NNI only supports a subset of the types in the [search space spec](../Tutorial/SearchSpaceSpec.md), including `choice`, `randint`, `uniform`, `loguniform`, `quniform(q=1)`.