This is the Dockerfile of the NNI project. It includes several popular deep learning frameworks and NNI. It is tested on `Ubuntu 16.04 LTS`:
```
CUDA 9.0
CuDNN 7.0
numpy 1.14.3
scipy 1.1.0
tensorflow-gpu 1.15.0
keras 2.1.6
torch 1.4.0
scikit-learn 0.23.2
pandas 0.23.4
lightgbm 2.2.2
nni
```
You can take this Dockerfile as a reference for your own customized Dockerfile.
## 2. How to build and run
__Use the following command from `nni/deployment/docker` to build the docker image__
```
docker build -t nni/nni .
```
__Run the docker image__
* If you do not use a GPU in the docker container, simply run the following command
```
docker run -it nni/nni
```
Note that if you want to use TensorFlow without a GPU, please uninstall tensorflow-gpu and install tensorflow inside this docker container, or modify the `Dockerfile` to install tensorflow (without GPU) and rebuild the docker image.
* If you use a GPU in the docker container, make sure you have installed [NVIDIA Container Runtime](https://github.com/NVIDIA/nvidia-docker), then run the following command
```
nvidia-docker run -it nni/nni
```
or
```
docker run --runtime=nvidia -it nni/nni
```
## 3. Directly retrieve the docker image
Use the following command to retrieve the NNI docker image from Docker Hub.
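Assuming the image is published as `msranni/nni` on Docker Hub (please verify the current image name and tag on the Hub page):
```
docker pull msranni/nni:latest
```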
To improve user experience and reduce user effort, we design an annotation grammar. Using NNI annotation, users can adapt their code to NNI just by adding some standalone annotating strings, which does not affect the execution of the original code.
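For example, an annotated assignment might look like this (a minimal sketch reconstructed from the description below):
```python
'''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)'''
learning_rate = 0.1
```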
This example means that NNI will choose one of several values (0.1, 0.01, 0.001) to assign to the `learning_rate` variable. Specifically, the first line is an NNI annotation, which is a single string. Following it is an assignment statement. What NNI does here is to replace the right-hand value of this assignment statement according to the information provided by the annotation line.
In this way, users can either run the python code directly or launch NNI to tune hyper-parameters in this code, without changing any code.
## Types of Annotation:
In NNI, there are mainly four types of annotation:
### 1. Annotate variables
`'''@nni.variable(sampling_algo, name)'''`
`@nni.variable` is used in NNI to annotate a variable.
**Arguments**
- **sampling_algo**: Sampling algorithm that specifies a search space. The user should replace it with a built-in NNI sampling function whose name consists of an `nni.` identifier and a search space type specified in [SearchSpaceSpec](https://nni.readthedocs.io/en/latest/Tutorial/SearchSpaceSpec.html), such as `choice` or `uniform`.
- **name**: The name of the variable that the selected value will be assigned to. Note that this argument should be the same as the left-hand side of the following assignment statement.
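For instance, a variable annotation sampling from a `uniform` search space might look like this (a minimal sketch; `dropout_rate` is an illustrative variable name):
```python
'''@nni.variable(nni.uniform(0.5, 0.9), name=dropout_rate)'''
dropout_rate = 0.5
```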
There are 10 types for expressing your search space. For example, `loguniform(low, high)` means the variable value is drawn according to exp(uniform(low, high)), so that the logarithm of the return value is uniformly distributed.
### 2. Annotate functions
`@nni.function_choice` is used to choose one from several functions.
**Arguments**
- **functions**: Several functions that are waiting to be selected from. Note that it should be a complete function call with arguments. Such as `max_pool(hidden_layer, pool_size)`.
- **name**: The name of the function that will be replaced in the following assignment statement.
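For example, to let NNI choose between two pooling functions (a sketch assuming `max_pool` and `avg_pool` are functions defined in the trial code):
```python
'''@nni.function_choice(max_pool(hidden_layer, pool_size), avg_pool(hidden_layer, pool_size), name=max_pool)'''
h_pooled = max_pool(hidden_layer, pool_size)
```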
### 3. Annotate intermediate result
`'''@nni.report_intermediate_result(metrics)'''`
`@nni.report_intermediate_result` is used to report an intermediate result, whose usage is the same as `nni.report_intermediate_result` in the doc [Write a trial run on NNI](https://nni.readthedocs.io/en/latest/TrialExample/Trials.html).
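For example, to report a per-epoch metric (assuming a `test_acc` variable computed in the trial code):
```python
'''@nni.report_intermediate_result(test_acc)'''
```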
### 4. Annotate final result
`'''@nni.report_final_result(metrics)'''`
`@nni.report_final_result` is used to report the final result of the current trial, whose usage is the same as `nni.report_final_result` in the doc [Write a trial run on NNI](https://nni.readthedocs.io/en/latest/TrialExample/Trials.html).
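For example (assuming a `final_acc` variable computed at the end of the trial):
```python
'''@nni.report_final_result(final_acc)'''
```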
NNI provides state-of-the-art early stopping algorithms within its builtin Assessors and makes them easy to use. Below is a brief overview of NNI's current builtin Assessors.
Note: Click the **Assessor's name** to get each Assessor's installation requirements, suggested usage scenario, and a config example. A link to a detailed description of each algorithm is provided at the end of the suggested scenario for each Assessor.
Currently, we support the following Assessors:
|Assessor|Brief Introduction of Algorithm|
|---|---|
|[__Medianstop__](#MedianStop)|Medianstop is a simple early stopping rule. It stops a pending trial X at step S if the trial’s best objective value by step S is strictly worse than the median value of the running averages of all completed trials’ objectives reported up to step S. [Reference Paper](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46180.pdf)|
|[__Curvefitting__](#Curvefitting)|Curve Fitting Assessor is an LPA (learning, predicting, assessing) algorithm. It stops a pending trial X at step S if the prediction of the final epoch's performance is worse than the best final performance in the trial history. In this algorithm, we use 12 curves to fit the accuracy curve. [Reference Paper](http://aad.informatik.uni-freiburg.de/papers/15-IJCAI-Extrapolation_of_Learning_Curves.pdf)|
## Usage of Builtin Assessors
Usage of builtin assessors provided by the NNI SDK requires one to declare the **builtinAssessorName** and **classArgs** in the `config.yml` file. In this part, we introduce the usage details, the suggested scenarios, the classArgs requirements, and an example for each Assessor.
Note: Please follow the provided format when writing your `config.yml` file.
<a name="MedianStop"></a>
### Median Stop Assessor
> Builtin Assessor Name: **Medianstop**
**Suggested scenario**
It's applicable in a wide range of performance curves, thus, it can be used in various scenarios to speed up the tuning progress. [Detailed Description](./MedianstopAssessor.md)
**classArgs requirements:**
* **optimize_mode** (*maximize or minimize, optional, default = maximize*) - If 'maximize', the assessor will **stop** trials with smaller expectation. If 'minimize', the assessor will **stop** trials with larger expectation.
* **start_step** (*int, optional, default = 0*) - A trial is judged for early stopping only after it has reported start_step intermediate results.
**Usage example:**
```yaml
# config.yml
assessor:
  builtinAssessorName: Medianstop
  classArgs:
    optimize_mode: maximize
    start_step: 5
```
<br>
<a name="Curvefitting"></a>
### Curve Fitting Assessor
> Builtin Assessor Name: **Curvefitting**
**Suggested scenario**
It's applicable in a wide range of performance curves, thus, it can be used in various scenarios to speed up the tuning progress. Even better, it's able to handle and assess curves with similar performance. [Detailed Description](./CurvefittingAssessor.md)
**Note**: according to the original paper, only incremental functions are supported. Therefore, this assessor can only be used to maximize optimization metrics. For example, it can be used for accuracy, but not for loss.
**classArgs requirements:**
* **epoch_num** (*int, **required***) - The total number of epochs. We need to know the number of epochs to determine which points we need to predict.
* **start_step** (*int, optional, default = 6*) - A trial is judged for early stopping only after it has reported start_step intermediate results.
* **threshold** (*float, optional, default = 0.95*) - The threshold used to decide whether to early stop the worst performance curves. For example, if threshold = 0.95 and the best performance in the history is 0.9, then we will stop any trial whose predicted value is lower than 0.95 * 0.9 = 0.855.
* **gap** (*int, optional, default = 1*) - The gap interval between Assessor judgements. For example, if gap = 2 and start_step = 6, then we will assess the result when we receive 6, 8, 10, 12... intermediate results.
The Curve Fitting Assessor is an LPA (learning, predicting, assessing) algorithm. It stops a pending trial X at step S if the prediction of the final epoch's performance is worse than the best final performance in the trial history.
In this algorithm, we use 12 curves to fit the learning curve. The set of parametric curve models is chosen from this [reference paper][1]. The learning curves' shape coincides with our prior knowledge about the form of learning curves: they are typically increasing, saturating functions.
We assume additive Gaussian noise, with the noise parameter initialized to its maximum likelihood estimate. We determine the maximum-probability value of the combined parameter vector by learning from the historical data, and use this value to predict future trial performance and stop inadequate trials early to save computing resources.
Concretely, this algorithm goes through three stages of learning, predicting, and assessing.
* Step 1: Learning. We learn from the trial history of the current trial and determine ξ from a Bayesian perspective. First, we fit each curve using the least-squares method, implemented by `fit_theta`. After obtaining the parameters, we filter the curves and remove outliers, implemented by `filter_curve`. Finally, we use the MCMC sampling method, implemented by `mcmc_sampling`, to adjust the weight of each curve. At this point, all the parameters in ξ are determined.
* Step 2: Predicting. We calculate the expected final accuracy, implemented by `f_comb`, at the target position (i.e., the total number of epochs) using ξ and the formula of the combined model.
* Step 3: Assessing. If the fitting result doesn't converge, the predicted value will be `None`; in this case we return `AssessResult.Good` to wait for more accuracy information and predict again later. Otherwise, the `predict()` function returns a positive value: if this value is strictly greater than the best final performance in the history multiplied by `THRESHOLD` (default value = 0.95), we return `AssessResult.Good`; otherwise, we return `AssessResult.Bad`.
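The decision rule of the assessing step can be sketched as follows (a simplified illustration of the logic above, not the exact code in `curvefitting_assessor.py`; `predicted_value` and `best_history` stand for the prediction and the best final performance in the history):

```python
from nni.assessor import AssessResult

THRESHOLD = 0.95  # default value of the threshold classArg

def assess_step(predicted_value, best_history):
    """Simplified decision rule of the assessing step."""
    if predicted_value is None:
        # the curve fit did not converge: ask for more intermediate
        # results and predict again later
        return AssessResult.Good
    if predicted_value > best_history * THRESHOLD:
        # predicted final performance is competitive: keep the trial
        return AssessResult.Good
    # predicted final performance is too low: early stop the trial
    return AssessResult.Bad
```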
The figure below shows the result of our algorithm on MNIST trial history data, where the green points represent the data obtained by the Assessor, the blue points represent the future but unknown data, and the red line is the curve predicted by the Curve Fitting Assessor.

## Usage
To use Curve Fitting Assessor, you should add the following spec in your experiment's YAML config file:
```yaml
assessor:
  builtinAssessorName: Curvefitting
  classArgs:
    # (required) The total number of epochs.
    # We need to know the number of epochs to determine which points to predict.
    epoch_num: 20
    # (optional) A trial is judged for early stopping only after it has reported
    # at least start_step intermediate results.
    # The default value of start_step is 6.
    start_step: 6
    # (optional) The threshold used to decide whether to early stop the worst performance curves.
    # For example: if threshold = 0.95 and the best performance in the history is 0.9,
    # then we will stop any trial whose predicted value is lower than 0.95 * 0.9 = 0.855.
    # The default value of threshold is 0.95.
    threshold: 0.95
    # (optional) The gap interval between Assessor judgements.
    # For example: if gap = 2 and start_step = 6, then we will assess the result
    # when we receive 6, 8, 10, 12... intermediate results.
    # The default value of gap is 1.
    gap: 1
```
## Limitation
According to the original paper, only incremental functions are supported. Therefore this assessor can only be used to maximize optimization metrics. For example, it can be used for accuracy, but not for loss.
## File Structure
The assessor has a lot of different files, functions, and classes. Here we briefly describe a few of them.
* `curvefunctions.py` includes all the function expressions and default parameters.
* `modelfactory.py` includes learning and predicting; the corresponding calculation parts are also implemented here.
* `curvefitting_assessor.py` is the assessor, which receives the trial history and assesses whether to early stop the trial.
## TODO
* Further improve the accuracy of the prediction and test it on more models.
NNI supports building an Assessor by yourself to satisfy your tuning demand.
If you want to implement a customized Assessor, there are three things to do:
1. Inherit the base Assessor class
1. Implement assess_trial function
1. Configure your customized Assessor in experiment YAML config file
**1. Inherit the base Assessor class**
```python
from nni.assessor import Assessor

class CustomizedAssessor(Assessor):
    def __init__(self, ...):
        ...
```
**2. Implement assess trial function**
```python
from nni.assessor import Assessor, AssessResult

class CustomizedAssessor(Assessor):
    def __init__(self, ...):
        ...

    def assess_trial(self, trial_history):
        """
        Determines whether a trial should be killed. Must override.
        trial_history: a list of intermediate result objects.
        Returns AssessResult.Good or AssessResult.Bad.
        """
        # your implementation here
        ...
```
**3. Configure your customized Assessor in experiment YAML config file**
NNI needs to locate your customized Assessor class and instantiate it, so you need to specify the location of the customized Assessor class and pass literal values as parameters to the `__init__` constructor.
```yaml
assessor:
  codeDir: /home/abc/myassessor
  classFileName: my_customized_assessor.py
  className: CustomizedAssessor
  # Any parameter needed by your Assessor class's __init__ constructor
  # can be specified in this optional classArgs field, for example
  classArgs:
    arg1: value1
```
Please note, regarding **2**: the object `trial_history` is exactly the object that the Trial sends to the Assessor via the SDK `report_intermediate_result` function.
The working directory of your assessor is `<home>/nni-experiments/<experiment_id>/log`, which can be retrieved with the environment variable `NNI_LOG_DIRECTORY`.
Medianstop is a simple early stopping rule mentioned in this [paper](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46180.pdf). It stops a pending trial X after step S if the trial’s best objective value by step S is strictly worse than the median value of the running averages of all completed trials’ objectives reported up to step S.
NNI's command line tool __nnictl__ supports auto-completion, i.e., you can complete an nnictl command by pressing the `tab` key.
For example, if the current command is
```
nnictl cre
```
pressing the `tab` key will complete it to
```
nnictl create
```
For now, auto-completion is not enabled by default if you install NNI through `pip`, and it only works on Linux with the bash shell. If you want to enable this feature on your computer, please follow the steps below:
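### Step 1. Download the script
Download the `bash-completion` script from the NNI repository, e.g. (a typical command, assuming the `tools/bash-completion` path shown in the link below):
```
wget https://raw.githubusercontent.com/microsoft/nni/{nni-version}/tools/bash-completion
```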
Here, `{nni-version}` should be replaced by the version of NNI, e.g., `master`, `v1.9`. You can also check the latest `bash-completion` script [here](https://github.com/microsoft/nni/blob/v1.9/tools/bash-completion).
### Step 2. Install the script
If you are running as root and want to install this script for all users:
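A sketch under standard bash-completion conventions (the target path is our assumption; consult the NNI documentation for the authoritative command):
```
install -m644 bash-completion /usr/share/bash-completion/completions/nnictl
```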
In this example, all the algorithms are used with their default parameters. For Metis, there are only about 300 trials, because it runs slowly due to the high time complexity O(n^3) of its Gaussian Process.
## RocksDB Benchmark 'fillrandom' and 'readrandom'
### Problem Description
[DB_Bench](<https://github.com/facebook/rocksdb/wiki/Benchmarking-tools>) is the main tool used to benchmark [RocksDB](https://rocksdb.org/)'s performance. It has many hyperparameters to tune.
The performance of `DB_Bench` depends on the machine configuration and installation method. We run `DB_Bench` on a Linux machine and install RocksDB as a shared library.
#### Machine configuration
```
RocksDB: version 6.1
CPU: 6 * Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
CPUCache: 35840 KB
Keys: 16 bytes each
Values: 100 bytes each (50 bytes after compression)
Entries: 1000000
```
#### Storage performance
**Latency**: each IO request takes some time to complete; the average of these times is the average latency. Several factors affect this time, including network connection quality and hard disk IO performance.
**IOPS**: **IO operations per second**, i.e., the number of _read or write operations_ that can be done in one second.
**IO size**: **the size of each IO request**. Depending on the operating system and the application/service that needs disk access, it will issue requests to read or write a certain amount of data at a time.
**Throughput (in MB/s) = average IO size × IOPS**
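For example (illustrative numbers): a disk sustaining 10,000 IOPS with an average IO size of 4 KB delivers about 10,000 × 4 KB = 40 MB/s of throughput.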
IOPS is related to online processing ability, so we use IOPS as the metric in our experiment.
### Search Space
```json
{
    "max_background_compactions": {
        "_type": "quniform",
        "_value": [1, 256, 1]
    },
    "block_size": {
        "_type": "quniform",
        "_value": [1, 500000, 1]
    },
    "write_buffer_size": {
        "_type": "quniform",
        "_value": [1, 130000000, 1]
    },
    "max_write_buffer_number": {
        "_type": "quniform",
        "_value": [1, 128, 1]
    },
    "min_write_buffer_number_to_merge": {
        "_type": "quniform",
        "_value": [1, 32, 1]
    },
    "level0_file_num_compaction_trigger": {
        "_type": "quniform",
        "_value": [1, 256, 1]
    },
    "level0_slowdown_writes_trigger": {
        "_type": "quniform",
        "_value": [1, 1024, 1]
    },
    "level0_stop_writes_trigger": {
        "_type": "quniform",
        "_value": [1, 1024, 1]
    },
    "cache_size": {
        "_type": "quniform",
        "_value": [1, 30000000, 1]
    },
    "compaction_readahead_size": {
        "_type": "quniform",
        "_value": [1, 30000000, 1]
    },
    "new_table_reader_for_compaction_inputs": {
        "_type": "randint",
        "_value": [1]
    }
}
```
The search space is enormous (about 10^40 configurations), so we set the maximum number of trials to 100 to limit the computation resources.
### Results
#### 'fillrandom' Benchmark
| Model | Best IOPS (Repeat 1) | Best IOPS (Repeat 2) | Best IOPS (Repeat 3) |
|---|---|---|---|
- The sparsity of each layer is set the same as the overall sparsity in this experiment.
- Only **filter pruning** performances are compared here. For the pruners with scheduling, `L1Filter Pruner` is used as the base algorithm. That is to say, after the sparsity distribution is decided by the scheduling algorithm, `L1Filter Pruner` is used to perform the actual pruning.
- All the pruners listed above are implemented in [nni](https://github.com/microsoft/nni/tree/v1.9/docs/en_US/Compression/Overview.md).
## Experiment Result
For each dataset/model/pruner combination, we prune the model to different levels by setting a series of target sparsities for the pruner.
Here we plot both **Number of Weights - Performances** curve and **FLOPs - Performance** curve.
As a reference, we also plot the result declared in the paper [AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates](http://arxiv.org/abs/1907.03141) for models VGG16 and ResNet18 on CIFAR-10.
The experiment results are shown in the following figures:
From the experiment results, we draw the following conclusions:
* Given the constraint on the number of parameters, the pruners with scheduling (`AutoCompress Pruner`, `SimulatedAnnealing Pruner`) perform better than the others when the constraint is strict. However, they have no such advantage in the FLOPs/performance comparison, since only the number-of-parameters constraint is considered in the optimization process;
* The basic algorithms `L1Filter Pruner`, `L2Filter Pruner`, and `FPGM Pruner` perform very similarly in these experiments;
* `NetAdapt Pruner` cannot achieve a very high compression rate. This is caused by its mechanism of pruning only one layer in each pruning iteration, which leads to unacceptable complexity if the sparsity per iteration is much lower than the overall sparsity constraint.
## Experiments Reproduction
### Implementation Details
* The experiment results are all collected with the default configuration of the pruners in nni, which means that when we call a pruner class in nni, we don't change any default class arguments.
* Both FLOPs and the number of parameters are counted with [Model FLOPs/Parameters Counter](https://github.com/microsoft/nni/tree/v1.9/docs/en_US/Compression/CompressionUtils.md#model-flopsparameters-counter) after [model speed up](https://github.com/microsoft/nni/tree/v1.9/docs/en_US/Compression/ModelSpeedup.md).
This avoids potential issues with counting them on masked models.
* The experiment code can be found [here](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/auto_pruners_torch.py).
### Experiment Result Rendering
* If you follow the practice in the [example](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/auto_pruners_torch.py), the result of every single pruning experiment is saved in JSON format.
* The experiment results are saved [here](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/comparison_of_pruners).
You can refer to [analyze](https://github.com/microsoft/nni/tree/v1.9/examples/model_compress/comparison_of_pruners/analyze.py) to plot new performance comparison figures.
## Contribution
### TODO Items
* Pruners constrained by FLOPS/latency
* More pruning algorithms/datasets/models
### Issues
For algorithm implementation & experiment issues, please [create an issue](https://github.com/microsoft/nni/issues/new/).
# NNI review article from Zhihu: "An Open Source Project with a Highly Reasonable Design" - by Garvin Li
This article is by an NNI user on the Zhihu forum. In the article, Garvin shared his experience using NNI for Automatic Feature Engineering. We think this article is very useful for users who are interested in using NNI for feature engineering. With the author's permission, we translated the original article into English.
In general, most Microsoft tools share one prominent characteristic: the design is highly reasonable (regardless of the degree of technology innovation). NNI's AutoFeatureENG basically meets all user requirements for AutoFeatureENG with a very reasonable underlying framework design.
## 03 Details of NNI-AutoFeatureENG
> The article follows the github project: [https://github.com/SpongebBob/tabular_automl_NNI](https://github.com/SpongebBob/tabular_automl_NNI).
New users can do AutoFeatureENG with NNI easily and efficiently. To explore the AutoFeatureENG capability, download the required files from the project above, and then install NNI through pip.
NNI treats AutoFeatureENG as a two-step task: feature generation exploration and feature selection. Feature generation exploration is mainly about feature derivation and high-order feature combination.
## 04 Feature Exploration
For feature derivation, NNI offers many operations which can automatically generate new features, listed [as follows](https://github.com/SpongebBob/tabular_automl_NNI/blob/master/AutoFEOp.md):
**count**: Count encoding replaces categories with their counts computed on the train set; it is also called frequency encoding.
**target**: Target encoding encodes categorical variable values with the mean of the target variable per value.
**embedding**: Regards features as sentences and generates vectors using *Word2Vec*.
**crosscount**: Count encoding on more than one dimension, similar to CTR (Click Through Rate).
**aggregate**: Decides the aggregation functions of the features, including min/max/mean/var.
**nunique**: Statistics of the number of unique values of a feature.
**histsta**: Statistics of feature buckets, like histogram statistics.
The search space can be defined in a **JSON file**: it specifies how specific features intersect, which columns intersect, and how new features are generated from the corresponding columns.
The picture shows the procedure of defining a search space. NNI provides count encoding as a 1-order operation, as well as cross-count encoding and aggregate statistics (min, max, var, mean, median, nunique) as 2-order operations.
For example, suppose we want to search for frequency-encoding (count) features on the columns named {"C1", ..., "C26"}.
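A minimal sketch of such a definition (the exact schema is defined by the tabular_automl_NNI project linked above; the column list is truncated here for illustration):
```json
{
    "count": ["C1", "C2", "C26"]
}
```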
The purpose of Exploration is to generate new features. You can use the **get_next_parameter** function to get the feature candidates received for one trial.
> RECEIVED_PARAMS = nni.get_next_parameter()
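A minimal sketch of how a trial script might consume it (the `train_and_evaluate` helper is a hypothetical placeholder; the real trial code lives in the tabular_automl_NNI project):

```python
import nni

def train_and_evaluate(params):
    # hypothetical placeholder: generate the candidate features described
    # by `params`, train a LightGBM model, and return a validation metric
    return 0.0

if __name__ == '__main__':
    # feature-generation candidates chosen by the tuner for this trial
    received_params = nni.get_next_parameter()
    score = train_and_evaluate(received_params)
    # report the trial's metric back to NNI
    nni.report_final_result(score)
```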
## 05 Feature selection
To avoid feature explosion and overfitting, feature selection is necessary. In the feature selection of NNI-AutoFeatureENG, LightGBM (Light Gradient Boosting Machine), a gradient boosting framework developed by Microsoft, is mainly promoted.
If you have used **XGBoost** or **GBDT**, you know that tree-based algorithms can easily calculate the importance of each feature with respect to the results, so LightGBM is able to perform feature selection naturally.
The issue is that the selected features might be applicable to *GBDT* (Gradient Boosting Decision Tree), but not to linear algorithms like *LR* (Logistic Regression).
NNI's AutoFeatureENG sets a well-established standard, showing us the operation procedure and available modules, and it is highly convenient to use. However, a simple model is probably not enough for good results.
## Suggestions to NNI
About Exploration: it would be better to consider using a DNN (like xDeepFM) to extract high-order features.
About Selection: there could be more intelligent options, such as an automatic selection system based on downstream models.
Conclusion: NNI can offer users some design inspiration, and it is a good open source project. I suggest researchers leverage it to accelerate AI research.
Tips: because the scripts of the open source project are compiled with gcc 7, Mac systems may encounter gcc (GNU Compiler Collection) problems. The solution is as follows: