Compared with LocalMode and [RemoteMachineMode](RemoteMachineMode.md), trial configuration in pai mode has these additional keys:
* memoryMB
    * Required key. Should be a positive number based on your trial program's memory requirement.
* image
    * Required key. In pai mode, your trial program will be scheduled by OpenPAI to run in a [Docker container](https://www.docker.com/). This key is used to specify the Docker image used to create the container in which your trial will run.
    * We have already built a Docker image [msranni/nni](https://hub.docker.com/r/msranni/nni/) on [Docker Hub](https://hub.docker.com/). It contains the NNI Python packages, Node modules and JavaScript artifact files required to start an experiment, and all of NNI's dependencies. The Dockerfile used to build this image can be found [here](https://github.com/Microsoft/nni/tree/master/deployment/docker/Dockerfile). You can either use this image directly in your config file, or build your own image based on it.
* dataDir
    * Optional key. It specifies the HDFS data directory for the trial to download data from. The format should be something like hdfs://{your HDFS host}:9000/{your data directory}
Once the config file is ready, run `nnictl create --config <your config file>` to start the experiment in pai mode. NNI will create an OpenPAI job for each trial, and the job name format is something like `nni_exp_{experiment_id}_trial_{trial_id}`.
You can see jobs created by NNI in the OpenPAI cluster's web portal, like:


Notice: In pai mode, NNIManager will start a rest server that listens on a port which is your NNI WebUI's port plus 1. For example, if your WebUI port is `8080`, the rest server will listen on `8081` to receive metrics from trial jobs running on OpenPAI. So you should enable TCP port `8081` in your firewall rule to allow incoming traffic.
Once a trial job is completed, you can go to NNI WebUI's overview page (like http://localhost:8080/oview) to check the trial's information.
Expand a trial's information in the trial list view and click the logPath link like:


You will be redirected to the HDFS web portal to browse the output files of that trial in HDFS:


You can see there are three files in the output folder: stderr, stdout, and trial.log.
If you also want to save the trial's other output, such as model files, into HDFS, you can use the environment variable `NNI_OUTPUT_DIR` in your trial code to save your own output files, and the NNI SDK will copy all the files in `NNI_OUTPUT_DIR` from the trial's container to HDFS.
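For example, here is a minimal sketch of saving a checkpoint under that directory (`save_model` is a hypothetical stand-in for your framework's own checkpointing call):
```python
import os

# NNI sets NNI_OUTPUT_DIR inside the trial's container; fall back to a
# local folder when running outside NNI.
output_dir = os.environ.get('NNI_OUTPUT_DIR', './output')
os.makedirs(output_dir, exist_ok=True)

# Everything written under output_dir will be copied to HDFS by the NNI SDK.
save_model(os.path.join(output_dir, 'model.ckpt'))  # hypothetical helper
```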
If you have any problems when using NNI in pai mode, please create an issue on the [NNI GitHub repo](https://github.com/Microsoft/nni).
Information about the experiment will be shown in the WebUI, including the experiment trial profile and search space. NNI also supports downloading this information and the parameters through the **Download** button. You can download the experiment results at any time while it is running, or after the execution has ended.


The top 10 trials will be listed on the Overview page; you can browse all the trials on the "Trials Detail" page.


#### View trials detail page
Click the tab "Default Metric" to see the point graph of all trials. Hover to see its specific default metric and search space message.


Click the tab "Hyper Parameter" to see the parallel graph.
* You can select the percentage to see top trials.
* Choose two axis to swap its positions


Click the tab "Trial Duration" to see the bar graph.


Below is the status of all trials. Specifically:
...
...
* Kill: you can kill a job whose status is running.
# Automatic Model Architecture Search for Reading Comprehension
This example shows how to use a Genetic Algorithm to find good model architectures for Reading Comprehension.
## 1. Search Space
Attention and recurrent neural networks (RNN) have been proven effective in Reading Comprehension, so we build the search space from the following mutation operations (a compact sketch of this operation set follows the list):
1. IDENTITY (effectively means keep training).
2. INSERT-RNN-LAYER (inserts an LSTM; comparing the performance of GRU and LSTM in our experiment, we decided to use LSTM here).
3. REMOVE-RNN-LAYER
4. INSERT-ATTENTION-LAYER (inserts an attention layer).
5. REMOVE-ATTENTION-LAYER
6. ADD-SKIP (identity between random layers).
7. REMOVE-SKIP (removes a random skip).
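
For illustration, the operation set can be written as a small Python enum (names mirror the list above; this is an illustrative sketch, not the repository's actual code):
```python
import random
from enum import Enum

class MutationOp(Enum):
    """The seven mutation operations that define the search space."""
    IDENTITY = 0
    INSERT_RNN_LAYER = 1
    REMOVE_RNN_LAYER = 2
    INSERT_ATTENTION_LAYER = 3
    REMOVE_ATTENTION_LAYER = 4
    ADD_SKIP = 5
    REMOVE_SKIP = 6

# Each mutation step picks one operation at random to apply to a candidate graph.
op = random.choice(list(MutationOp))
```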

### New version
We also have another version with lower time cost and better performance. We will release it soon.
## 2. How to run this example locally
### 2.1 Use the download script to download data
Execute the following commands to download the needed files using the download script:
```bash
chmod +x ./download.sh
./download.sh
```
Or download manually:
1. Download "dev-v1.1.json" and "train-v1.1.json" from https://rajpurkar.github.io/SQuAD-explorer/ (for example, with the Python sketch below).
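
A hedged sketch of the manual download, assuming the dataset files live under the site's `dataset/` path:
```python
import urllib.request

# SQuAD v1.1 train/dev files; adjust the URLs if the site layout changes.
BASE = 'https://rajpurkar.github.io/SQuAD-explorer/dataset/'
for name in ('train-v1.1.json', 'dev-v1.1.json'):
    urllib.request.urlretrieve(BASE + name, name)
    print('downloaded', name)
```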
Modify `nni/examples/trials/ga_squad/config.yml`; here is the default configuration:
```yaml
authorName: default
experimentName: example_ga_squad
trialConcurrency: 1
maxExecDuration: 1h
maxTrialNum: 1
#choice: local, remote
trainingServicePlatform: local
#choice: true, false
useAnnotation: false
tuner:
  codeDir: ~/nni/examples/tuners/ga_customer_tuner
  classFileName: customer_tuner.py
  className: CustomerTuner
  classArgs:
    optimize_mode: maximize
trial:
  command: python3 trial.py
  codeDir: ~/nni/examples/trials/ga_squad
  gpuNum: 0
```
In the "trial" part, if you want to use GPU to perform the architecture search, change `gpuNum` from `0` to `1`. You need to increase the `maxTrialNum` and `maxExecDuration`, according to how long you want to wait for the search result.
Due to the size limitation on uploads, we only upload the source code and complete the data download and training on OpenPAI. This experiment requires sufficient memory (`memoryMB >= 32G`), and the training may last several hours.
### 3.1 Update configuration
Modify `nni/examples/trials/ga_squad/config_pai.yml`; here is the default configuration:
```yaml
#The hdfs directory to store data on OpenPAI, format 'hdfs://host:port/directory'
dataDir: hdfs://10.10.10.10:9000/username/nni
#The hdfs directory to store output data generated by nni, format 'hdfs://host:port/directory'
outputDir: hdfs://10.10.10.10:9000/username/nni
paiConfig:
  #The username to login OpenPAI
  userName: username
  #The password to login OpenPAI
  passWord: password
  #The host of restful server of OpenPAI
  host: 10.10.10.10
```
Please change the default values to your own account and machine information, including `nniManagerIp`, `dataDir`, `outputDir`, `userName`, `passWord` and `host`.
In the "trial" part, if you want to use GPU to perform the architecture search, change `gpuNum` from `0` to `1`. You need to increase the `maxTrialNum` and `maxExecDuration`, according to how long you want to wait for the search result.
`trialConcurrency` is the number of trials running concurrently; if you set `gpuNum` to `1`, it is also the number of GPUs you will use.
As we can see, this function is essentially a compiler that converts the internal model DAG configuration `graph` (which will be introduced in the `Model configuration format` section) into a TensorFlow computation graph.
```python
topology = graph.is_topology()
```
performs topological sorting on the internal graph representation, and the code inside the loop:
```python
for _, topo_i in enumerate(topology):
```
performs the actual conversion that maps each layer to a part of the TensorFlow computation graph.
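Conceptually, the conversion looks like the following sketch (field names come from the model configuration format below; `build_layer` is a hypothetical per-type builder, not the repository's actual function):
```python
def convert(graph):
    """Sketch: map each layer to a TensorFlow sub-graph in topological order."""
    topology = graph.is_topology()   # layer indices in dependency order
    tensors = {}                     # layer index -> output tensor
    for _, topo_i in enumerate(topology):
        layer = graph.layers[topo_i]
        if layer.is_delete:
            continue
        # All of this layer's inputs were already built earlier in the order.
        inputs = [tensors[i] for i in layer.input]
        tensors[topo_i] = build_layer(layer, inputs)  # hypothetical builder
    return tensors
```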
### 4.3 The tuner
The tuner is much simpler than the trial. They actually share the same `graph.py`. Besides that, the tuner has a `customer_tuner.py`, whose most important class is `CustomerTuner`:
```python
class CustomerTuner(Tuner):
    # ......
    def generate_parameters(self, parameter_id):
        """Returns a set of trial graph config, as a serializable object.
        parameter_id : int
        """
        if len(self.population) <= 0:
            logger.debug("the len of population is lower than zero.")
```
This snippet controls the mutation process: it always takes two random individuals from the population, keeping and mutating only the one with the better result.
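The selection step can be sketched as a binary tournament (the `result`/`config` attributes and the `mutate` helper are illustrative assumptions):
```python
import random

def select_and_mutate(population):
    """Sketch: pick two random individuals, keep the better, mutate it."""
    a, b = random.sample(population, 2)
    winner = a if a.result >= b.result else b
    # The mutated copy of the winner's graph becomes the next trial config.
    return mutate(winner.config)  # hypothetical helper
```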
### 4.4 Model configuration format
Here is an example of the model configuration, which is passed from the tuner to the trial in the architecture search procedure.
```json
{
    "max_layer_num": 50,
    "layers": [
        {
            "input_size": 0,
            "type": 3,
            "output_size": 1,
            "input": [],
            "size": "x",
            "output": [4, 5],
            "is_delete": false
        },
        {
            "input_size": 0,
            "type": 3,
            "output_size": 1,
            "input": [],
            "size": "y",
            "output": [4, 5],
            "is_delete": false
        },
        {
            "input_size": 1,
            "type": 4,
            "output_size": 0,
            "input": [6],
            "size": "x",
            "output": [],
            "is_delete": false
        },
        {
            "input_size": 1,
            "type": 4,
            "output_size": 0,
            "input": [5],
            "size": "y",
            "output": [],
            "is_delete": false
        },
        {"Comment": "More layers will be here for actual graphs."}
    ]
}
```
Every model configuration will have a "layers" section, which is a JSON list of layer definitions. The definition of each layer is also a JSON object, where (a quick consistency check over these fields is sketched after the list):
* `type` is the type of the layer. 0, 1, 2, 3, 4 correspond to attention, self-attention, RNN, input and output layers, respectively.
* `size` is the length of the output. "x" and "y" correspond to document length and question length, respectively.
* `input_size` is the number of inputs the layer has.
* `input` is the indices of the layers taken as input of this layer.
* `output` is the indices of the layers that use this layer's output as part of their input.
* `is_delete` means whether the layer is still available.
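
As an illustration of how these fields fit together, the sketch below checks that the `input`/`output` indices of live layers in a complete graph are mutually consistent (illustrative code, not the repository's validation logic):
```python
def check_links(layers):
    """Sketch: if layer i lists j in `output`, layer j should list i in
    `input`, and vice versa, for all layers that are not deleted."""
    for i, layer in enumerate(layers):
        if layer.get("is_delete", True):
            continue
        for j in layer.get("output", []):
            assert i in layers[j].get("input", []), f"layer {j} missing input {i}"
        for j in layer.get("input", []):
            assert i in layers[j].get("output", []), f"layer {j} missing output {i}"
```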
Gradient boosting is a machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. It builds the model in a stage-wise fashion as other boosting methods do, and it generalizes them by allowing optimization of an arbitrary differentiable loss function.
Gradient boosting decision tree (GBDT) has many popular implementations, such as [lightgbm](https://github.com/Microsoft/LightGBM), [xgboost](https://github.com/dmlc/xgboost), and [catboost](https://github.com/catboost/catboost). GBDT is a great tool for solving traditional machine learning problems, and since it is a robust algorithm, it can be used in many domains. The better the hyper-parameters for GBDT, the better the performance you can achieve.
NNI is a great platform for tuning hyper-parameters: you can try the various built-in search algorithms in NNI and run multiple trials concurrently.
## 1. Search Space in GBDT
There are many hyper-parameters in GBDT, but which of them affect performance or speed? Based on some practical experience, here are some suggestions (taking lightgbm as an example):
> * For better accuracy
>   * `learning_rate`. The range of `learning_rate` could be [0.001, 0.9].
>   * `num_leaves`. `num_leaves` is related to `max_depth`; you don't have to tune both of them.
>   * `bagging_freq`. `bagging_freq` could be [1, 2, 4, 8, 10].
>   * `num_iterations`. May be larger if underfitting.
> * For speed up
>   * `bagging_fraction`. The range of `bagging_fraction` could be [0.7, 1.0].
>   * `feature_fraction`. The range of `feature_fraction` could be [0.6, 1.0].
>   * `max_bin`.
> * To avoid overfitting
>   * `min_data_in_leaf`. This depends on your dataset.
>   * `min_sum_hessian_in_leaf`. This depends on your dataset.
>   * `lambda_l1` and `lambda_l2`.
>   * `min_gain_to_split`.
>   * `num_leaves`.
Reference links:
[lightgbm](https://lightgbm.readthedocs.io/en/latest/Parameters-Tuning.html) and [autoxgboost](https://github.com/ja-thomas/autoxgboost/blob/master/poster_2018.pdf)
## 2. Task description
Now we come back to our example "auto-gbdt", which runs lightgbm with NNI. The data includes [train data](https://github.com/Microsoft/nni/blob/master/examples/trials/auto-gbdt/data/regression.train) and [test data](https://github.com/Microsoft/nni/blob/master/examples/trials/auto-gbdt/data/regression.test).
Given the features and labels in the train data, we train a GBDT regression model and use it to predict; a minimal trial sketch follows.
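Here is a minimal sketch of such a trial, assuming the example's tab-separated data layout with the label in the first column (the column indices, file paths and `num_boost_round` value are illustrative assumptions):
```python
import lightgbm as lgb
import nni
import pandas as pd
from sklearn.metrics import mean_squared_error

if __name__ == '__main__':
    # Base parameters, updated with the hyper-parameters the tuner generates.
    params = {'objective': 'regression', 'metric': 'l2'}
    params.update(nni.get_next_parameter())

    # First column is the label, the rest are features (assumed layout).
    train = pd.read_csv('data/regression.train', header=None, sep='\t')
    test = pd.read_csv('data/regression.test', header=None, sep='\t')
    X_train, y_train = train.drop(0, axis=1), train[0]
    X_test, y_test = test.drop(0, axis=1), test[0]

    booster = lgb.train(params, lgb.Dataset(X_train, label=y_train),
                        num_boost_round=100)
    rmse = mean_squared_error(y_test, booster.predict(X_test)) ** 0.5

    # Report the final metric back to NNI (lower is better for RMSE).
    nni.report_final_result(rmse)
```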
If you would like to tune `num_leaves`, `learning_rate`, `bagging_fraction` and `bagging_freq`, you could write a [search_space.json](https://github.com/Microsoft/nni/blob/master/examples/trials/auto-gbdt/search_space.json) as follows:
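For example, in NNI's search-space format (the exact ranges below are illustrative assumptions chosen to match the suggestions in section 1, not necessarily the values in the repository's file):
```json
{
    "num_leaves": {"_type": "randint", "_value": [20, 150]},
    "learning_rate": {"_type": "choice", "_value": [0.01, 0.05, 0.1, 0.2]},
    "bagging_fraction": {"_type": "uniform", "_value": [0.7, 1.0]},
    "bagging_freq": {"_type": "choice", "_value": [1, 2, 4, 8, 10]}
}
```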