* Required key. Should be a positive number based on your trial program's memory requirement
* image
* Required key. In pai mode, your trial program will be scheduled by OpenPAI to run in a [Docker container](https://www.docker.com/). This key is used to specify the Docker image used to create the container in which your trial will run.
* We have already built a docker image [msranni/nni](https://hub.docker.com/r/msranni/nni/) on [Docker Hub](https://hub.docker.com/). It contains the NNI python packages, the Node modules and JavaScript artifact files required to start an experiment, and all of NNI's dependencies. The docker file used to build this image can be found [here](https://github.com/Microsoft/nni/tree/master/deployment/docker/Dockerfile). You can either use this image directly in your config file, or build your own image based on it.
* dataDir
* Optional key. It specifies the HDFS data directory for the trial to download data from. The format should be something like hdfs://{your HDFS host}:9000/{your data directory}
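Putting these keys together, a pai mode `trial` section might look like the following sketch (the host, paths, and resource numbers are placeholders, not values from this document):

```yaml
trial:
  command: python3 mnist.py
  codeDir: ~/nni/examples/trials/mnist
  gpuNum: 0
  cpuNum: 1
  memoryMB: 8196
  #The docker image used to create the trial's container
  image: msranni/nni:latest
  #Optional: the HDFS directory for the trial to download data from
  dataDir: hdfs://10.1.1.1:9000/nni/data
```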
to start the experiment in pai mode. NNI will create an OpenPAI job for each trial, and the job name format is something like `nni_exp_{experiment_id}_trial_{trial_id}`.
You can see jobs created by NNI in the OpenPAI cluster's web portal, like:


Notice: In pai mode, NNIManager will start a rest server and listen on a port which is your NNI WebUI's port plus 1. For example, if your WebUI port is `8080`, the rest server will listen on `8081` to receive metrics from trial jobs running on OpenPAI. So you should enable port `8081` in your firewall rules to allow incoming TCP traffic.
Once a trial job is completed, you can go to the NNI WebUI's overview page (like http://localhost:8080/oview) to check the trial's information.
Expand a trial's information in the trial list view and click the logPath link:


And you will be redirected to the HDFS web portal to browse the output files of that trial in HDFS:


You can see there are three files in the output folder: stderr, stdout, and trial.log.
If you also want to save the trial's other output, like model files, into HDFS, you can use the environment variable `NNI_OUTPUT_DIR` in your trial code to save your own output files, and the NNI SDK will copy all the files in `NNI_OUTPUT_DIR` from the trial's container to HDFS.
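For example, here is a minimal sketch of trial code saving an extra artifact under `NNI_OUTPUT_DIR` (the local fallback directory is an assumption so the sketch also runs outside of NNI):

```python
import os

# NNI sets NNI_OUTPUT_DIR inside the trial container; fall back to a
# local directory when the variable is absent (i.e. outside of NNI).
output_dir = os.environ.get("NNI_OUTPUT_DIR", "./output")
os.makedirs(output_dir, exist_ok=True)

# Anything written here is copied from the trial's container to HDFS
# by the NNI SDK once the trial finishes.
model_path = os.path.join(output_dir, "model.bin")
with open(model_path, "wb") as f:
    f.write(b"serialized model bytes")
```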
If you run into any problems when using NNI in pai mode, please create an issue on the [NNI GitHub repo](https://github.com/Microsoft/nni).
Information about this experiment will be shown in the WebUI, including the experiment trial profile and search space information. NNI also supports downloading this information and the parameters through the **Download** button. You can download the experiment results at any time while the experiment is running, or after it has finished.


The top 10 trials will be listed on the Overview page; you can browse all trials on the "Trials Detail" page.


#### View trials detail page
Click the "Default Metric" tab to see the point graph of all trials. Hover over a point to see its specific default metric and search space information.


Click the "Hyper Parameter" tab to see the parallel coordinates graph.
* You can select a percentage to see only the top trials.
* Choose two axes to swap their positions.


Click the "Trial Duration" tab to see the bar graph.


Below is the status of all the trials. Specifically:
* Kill: you can kill a job whose status is running.
Modify `nni/examples/trials/ga_squad/config.yml`. Here is the default configuration:
```yaml
authorName: default
experimentName: example_ga_squad
trialConcurrency: 1
maxExecDuration: 1h
maxTrialNum: 1
#choice: local, remote
trainingServicePlatform: local
#choice: true, false
useAnnotation: false
tuner:
  codeDir: ~/nni/examples/tuners/ga_customer_tuner
  classFileName: customer_tuner.py
  className: CustomerTuner
  classArgs:
    optimize_mode: maximize
trial:
  command: python3 trial.py
  codeDir: ~/nni/examples/trials/ga_squad
  gpuNum: 0
```
In the "trial" part, if you want to use GPU to perform the architecture search, change `gpuNum` from `0` to `1`. You need to increase the `maxTrialNum` and `maxExecDuration`, according to how long you want to wait for the search result.
Due to the memory limitation of upload, we only upload the source code and complete the data download and training on OpenPAI. This experiment requires sufficient memory (`memoryMB >= 32G`), and the training may last for several hours.
### 3.1 Update configuration
Modify `nni/examples/trials/ga_squad/config_pai.yml`. Here is the default configuration:
```yaml
#The hdfs directory to store data on OpenPAI, format 'hdfs://host:port/directory'
dataDir: hdfs://10.10.10.10:9000/username/nni
#The hdfs directory to store output data generated by nni, format 'hdfs://host:port/directory'
outputDir: hdfs://10.10.10.10:9000/username/nni
paiConfig:
  #The username to login OpenPAI
  userName: username
  #The password to login OpenPAI
  passWord: password
  #The host of restful server of OpenPAI
  host: 10.10.10.10
```
Please change the default values to your personal account and machine information, including `nniManagerIp`, `dataDir`, `outputDir`, `userName`, `passWord` and `host`.
In the "trial" part, if you want to use GPU to perform the architecture search, change `gpuNum` from `0` to `1`. You need to increase the `maxTrialNum` and `maxExecDuration`, according to how long you want to wait for the search result.
`trialConcurrency` is the number of trials running concurrently. If you set `gpuNum` to 1, this is also the number of GPUs you want to use.
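For example, to run the search on GPUs, the relevant fields might look like the following sketch (the numbers are illustrative assumptions; tune them to your cluster and time budget):

```yaml
trialConcurrency: 4    # four trials at once, i.e. four GPUs in use
maxExecDuration: 24h   # allow a longer search
maxTrialNum: 100
trial:
  command: python3 trial.py
  codeDir: ~/nni/examples/trials/ga_squad
  gpuNum: 1            # one GPU per trial
```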
As we can see, this function is actually a compiler that converts the internal model DAG configuration `graph` (which will be introduced in the `Model configuration format` section) into a TensorFlow computation graph.
```python
topology = graph.is_topology()
```
performs topological sorting on the internal graph representation, and the code inside the loop:
```python
for _, topo_i in enumerate(topology):
```
performs the actual conversion that maps each layer to a part of the TensorFlow computation graph.
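To make the topological-sort step concrete, here is a minimal, hypothetical stand-in (not NNI's actual `graph.is_topology()` implementation) using Kahn's algorithm over a layer-index DAG:

```python
from collections import deque

def topological_sort(layers):
    """Order layer indices so every layer appears after all of its inputs.
    `layers` maps a layer index to the list of its input indices
    (an assumption for this sketch). Uses Kahn's algorithm."""
    indegree = {i: len(inputs) for i, inputs in layers.items()}
    consumers = {i: [] for i in layers}
    for i, inputs in layers.items():
        for j in inputs:
            consumers[j].append(i)
    queue = deque(i for i, d in indegree.items() if d == 0)
    order = []
    while queue:
        i = queue.popleft()
        order.append(i)
        for j in consumers[i]:
            indegree[j] -= 1
            if indegree[j] == 0:
                queue.append(j)
    return order

# Two inputs (0, 1) feed a middle layer (2), which feeds an output layer (3).
print(topological_sort({0: [], 1: [], 2: [0, 1], 3: [2]}))  # [0, 1, 2, 3]
```

Iterating layers in such an order is what lets the loop above build each TensorFlow op only after the ops it consumes already exist.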
### 4.3 The tuner
The tuner is much simpler than the trial. They actually share the same `graph.py`. Besides that, the tuner has a `customer_tuner.py`, whose most important class is `CustomerTuner`:
```python
class CustomerTuner(Tuner):
    # ......
    def generate_parameters(self, parameter_id):
        """Returns a set of trial graph config, as a serializable object.
        parameter_id : int
        """
        if len(self.population) <= 0:
            logger.debug("the len of population lower than zero.")
```
controls the mutation process. It always takes two random individuals from the population, keeping and mutating only the one with the better result.
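That selection scheme can be sketched as follows (a hypothetical helper, not NNI's code; it assumes each individual is a `(config, result)` pair where a larger result is better):

```python
import random

def evolve_step(population, mutate, rng=random):
    """Pick two random individuals, drop the worse one, and append a
    mutated copy of the better one's config (its result is not yet known)."""
    a, b = rng.sample(population, 2)
    winner, loser = (a, b) if a[1] >= b[1] else (b, a)
    population.remove(loser)
    population.append((mutate(winner[0]), None))
    return winner
```

Repeating this step keeps the population size constant while biasing it toward configurations with better results.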
### 4.4 Model configuration format
Here is an example of the model configuration, which is passed from the tuner to the trial in the architecture search procedure.
```json
{
    "max_layer_num": 50,
    "layers": [
        {
            "input_size": 0,
            "type": 3,
            "output_size": 1,
            "input": [],
            "size": "x",
            "output": [4, 5],
            "is_delete": false
        },
        {
            "input_size": 0,
            "type": 3,
            "output_size": 1,
            "input": [],
            "size": "y",
            "output": [4, 5],
            "is_delete": false
        },
        {
            "input_size": 1,
            "type": 4,
            "output_size": 0,
            "input": [6],
            "size": "x",
            "output": [],
            "is_delete": false
        },
        {
            "input_size": 1,
            "type": 4,
            "output_size": 0,
            "input": [5],
            "size": "y",
            "output": [],
            "is_delete": false
        },
        {"Comment": "More layers will be here for actual graphs."}
    ]
}
```
Every model configuration will have a "layers" section, which is a JSON list of layer definitions. The definition of each layer is also a JSON object, where:
* `type` is the type of the layer. 0, 1, 2, 3, 4 correspond to attention, self-attention, RNN, input and output layers, respectively.
* `size` is the length of the output. "x" and "y" correspond to document length and question length, respectively.
* `input_size` is the number of inputs the layer has.
* `input` is the indices of the layers taken as input of this layer.
* `output` is the indices of the layers that use this layer's output as their input.
* `is_delete` means whether the layer is still available.
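The field invariants above can be checked mechanically. Here is a small, hypothetical validator (not part of NNI) for a single layer definition:

```python
def check_layer(layer):
    """Check the invariants implied by the field descriptions:
    input_size matches the length of input, and a deleted layer
    should not be wired to any other layer."""
    assert layer["input_size"] == len(layer["input"])
    if layer["is_delete"]:
        assert not layer["input"] and not layer["output"]
    return True

# The first input layer from the example configuration above.
input_layer = {"input_size": 0, "type": 3, "output_size": 1,
               "input": [], "size": "x", "output": [4, 5], "is_delete": False}
check_layer(input_layer)
```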
Gradient boosting is a machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. It builds the model in a stage-wise fashion as other boosting methods do, and it generalizes them by allowing optimization of an arbitrary differentiable loss function.
Gradient boosting decision trees have many popular implementations, such as [lightgbm](https://github.com/Microsoft/LightGBM), [xgboost](https://github.com/dmlc/xgboost), and [catboost](https://github.com/catboost/catboost). GBDT is a great tool for solving traditional machine learning problems, and since it is a robust algorithm, it can be used in many domains. The better the hyper-parameters for GBDT, the better performance you can achieve.
NNI is a great platform for tuning hyper-parameters; you can try various built-in search algorithms in NNI and run multiple trials concurrently.
## 1. Search Space in GBDT
There are many hyper-parameters in GBDT, but which parameters will affect the performance or speed? Based on some practical experience, here are some suggestions (taking lightgbm as an example):
> * For better accuracy
* `learning_rate`. The range of `learning_rate` could be [0.001, 0.9].
* `num_leaves`. `num_leaves` is related to `max_depth`; you don't have to tune both of them.
* `bagging_freq`. `bagging_freq` could be [1, 2, 4, 8, 10].
* `num_iterations`. May be larger if underfitting.
> * For speed up
* `bagging_fraction`. The range of `bagging_fraction` could be [0.7, 1.0].
* `feature_fraction`. The range of `feature_fraction` could be [0.6, 1.0].
* `max_bin`.
> * To avoid overfitting
* `min_data_in_leaf`. This depends on your dataset.
* `min_sum_hessian_in_leaf`. This depends on your dataset.
* `lambda_l1` and `lambda_l2`.
* `min_gain_to_split`.
* `num_leaves`.
Reference links:
[lightgbm](https://lightgbm.readthedocs.io/en/latest/Parameters-Tuning.html) and [autoxgboost](https://github.com/ja-thomas/autoxgboost/blob/master/poster_2018.pdf)
## 2. Task description
Now we come back to our example "auto-gbdt", which runs with lightgbm and NNI. The data includes [train data](https://github.com/Microsoft/nni/blob/master/examples/trials/auto-gbdt/data/regression.train) and [test data](https://github.com/Microsoft/nni/blob/master/examples/trials/auto-gbdt/data/regression.test).
Given the features and label in the train data, we train a GBDT regression model and use it to predict.
If you want to tune `num_leaves`, `learning_rate`, `bagging_fraction` and `bagging_freq`, you could write a [search_space.json](https://github.com/Microsoft/nni/blob/master/examples/trials/auto-gbdt/search_space.json) as follows:
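For instance, a sketch of such a search space using NNI's standard `_type`/`_value` format (the ranges here are illustrative, taken from the suggestions in section 1, and may differ from the file in the repo):

```json
{
    "num_leaves": {"_type": "randint", "_value": [20, 150]},
    "learning_rate": {"_type": "loguniform", "_value": [0.001, 0.9]},
    "bagging_fraction": {"_type": "uniform", "_value": [0.7, 1.0]},
    "bagging_freq": {"_type": "choice", "_value": [1, 2, 4, 8, 10]}
}
```

Each key names a hyper-parameter, and the tuner samples a value for it from the given distribution on every trial.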