Unverified commit e9040c9b, authored by chicm-ms, committed by GitHub

Merge pull request #23 from microsoft/master

pull code
parents 256f27af ed63175c
@@ -17,9 +17,9 @@
NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning (AutoML) experiments.
The tool dispatches and runs trial jobs generated by tuning algorithms to search for the best neural architecture and/or hyper-parameters in different environments such as local machine, remote servers, and cloud.
### **NNI [v0.9](https://github.com/Microsoft/nni/releases) has been released! &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>**
<p align="center">
<a href="#nni-has-been-released"><img src="docs/img/overview.svg" /></a>
</p>
<table>
<tbody>
@@ -66,6 +66,7 @@ The tool dispatches and runs trial jobs generated by tuning algorithms to search
<li><a href="examples/tuners/enas_nni/README.md">ENAS</a></li>
<li><a href="docs/en_US/BuiltinTuner.md#MetisTuner">Metis Tuner</a></li>
<li><a href="docs/en_US/BuiltinTuner.md#BOHB">BOHB</a></li>
<li><a href="docs/en_US/BuiltinTuner.md#GPTuner">GP Tuner</a></li>
</ul>
<a href="docs/en_US/BuiltinAssessor.md">Assessor</a>
<ul>
@@ -137,7 +138,7 @@ Linux and MacOS
* Run the following commands in an environment that has `python >= 3.5`, `git` and `wget`.
```bash
git clone -b v0.9 https://github.com/Microsoft/nni.git
cd nni
source install.sh
```
@@ -147,7 +148,7 @@ Windows
* Run the following commands in an environment that has `python >= 3.5`, `git` and `PowerShell`
```bash
git clone -b v0.9 https://github.com/Microsoft/nni.git
cd nni
powershell -ExecutionPolicy Bypass -file install.ps1
```
@@ -163,7 +164,7 @@ The following example is an experiment built on TensorFlow. Make sure you have *
* Download the examples by cloning the source code.
```bash
git clone -b v0.9 https://github.com/Microsoft/nni.git
```
Linux and MacOS
@@ -10,7 +10,7 @@
NNI (Neural Network Intelligence) is a toolkit for automated machine learning (AutoML). It uses various tuning algorithms to search for the best neural network architecture and/or hyper-parameters, and supports different runtime environments such as a single machine, multiple local machines, and the cloud.
### **NNI [v0.9](https://github.com/Microsoft/nni/releases) has been released!**
<p align="center">
<a href="#nni-v05-has-been-released"><img src="docs/img/overview.svg" /></a>
</p>
@@ -61,11 +61,12 @@ NNI (Neural Network Intelligence) is a toolkit for automated machine learning (AutoML)
<li><a href="examples/tuners/enas_nni/README_zh_CN.md">ENAS</a></li>
<li><a href="docs/zh_CN/BuiltinTuner.md#MetisTuner">Metis Tuner</a></li>
<li><a href="docs/zh_CN/BuiltinTuner.md#BOHB">BOHB</a></li>
<li><a href="docs/zh_CN/BuiltinTuner.md#GPTuner">GP Tuner</a></li>
</ul>
<a href="docs/zh_CN/BuiltinAssessor.md">Assessor</a>
<ul>
<li><a href="docs/zh_CN/BuiltinAssessor.md#Medianstop">Median Stop</a></li>
<li><a href="docs/zh_CN/BuiltinAssessor.md#Curvefitting">Curve Fitting</a></li>
</ul>
</td>
<td>
@@ -101,12 +102,6 @@ NNI (Neural Network Intelligence) is a toolkit for automated machine learning (AutoML)
## **Installation and Verification**
In Windows local mode, when using PowerShell to run a script for the first time, run the following command once **with administrator privileges**:
```bash
Set-ExecutionPolicy -ExecutionPolicy Unrestricted
```
**Install with pip**
* Currently supports Linux, macOS, and Windows (local, remote, and OpenPAI modes); tested on Ubuntu 16.04 or higher, macOS 10.14.1, and Windows 10 version 1809. In an environment with `python >= 3.5`, simply run `pip install` to complete the installation.
@@ -131,14 +126,14 @@ python -m pip install --upgrade nni
**Install from source code**
* Currently supports Linux (Ubuntu 16.04 or higher), macOS (10.14.1), and Windows 10 (version 1809).
Linux and macOS
* Run the following commands in an environment with `python >= 3.5`; make sure `git` and `wget` are installed.
```bash
git clone -b v0.8 https://github.com/Microsoft/nni.git
cd nni
source install.sh
```
@@ -148,9 +143,9 @@ Windows
* Run the following commands in an environment with `python >= 3.5`; make sure `git` and `PowerShell` are installed.
```bash
git clone -b v0.8 https://github.com/Microsoft/nni.git
cd nni
powershell -ExecutionPolicy Bypass -file install.ps1
```
See [Install NNI](docs/zh_CN/Installation.md) for system requirements.
@@ -164,7 +159,7 @@ On Windows, refer to [NNI on Windows](docs/zh_CN/NniOnWindows.md).
* Download the examples by cloning the source code.
```bash
git clone -b v0.8 https://github.com/Microsoft/nni.git
```
Linux and macOS
# Built-in Tuners
NNI provides state-of-the-art tuning algorithms as built-in tuners and makes them easy to use. Below is a brief summary of NNI's current built-in tuners:
Note: Click a **tuner's name** to see its installation requirements, suggested scenario, and usage example. A link to a detailed description of each algorithm is provided at the end of each tuner's suggested scenario. Here is an [article](./CommunitySharings/HpoComparision.md) comparing different tuners on several problems.
@@ -11,28 +11,27 @@ Currently we support the following algorithms:
|[__TPE__](#TPE)|The Tree-structured Parzen Estimator (TPE) is a sequential model-based optimization (SMBO) approach. SMBO methods sequentially construct models to approximate the performance of hyperparameters based on historical measurements, and then subsequently choose new hyperparameters to test based on this model. [Reference Paper](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf)|
|[__Random Search__](#Random)|*Random Search for Hyper-Parameter Optimization* shows that random search can be surprisingly simple and effective. We suggest using Random Search as a baseline when there is no knowledge about the prior distribution of hyper-parameters. [Reference Paper](http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf)|
|[__Anneal__](#Anneal)|This simple annealing algorithm begins by sampling from the prior, but tends over time to sample from points closer and closer to the best ones observed. This algorithm is a simple variation on random search that leverages smoothness in the response surface. The annealing rate is not adaptive.|
|[__Naïve Evolution__](#Evolution)|Naïve Evolution comes from *Large-Scale Evolution of Image Classifiers*. It randomly initializes a population based on the search space. For each generation, it chooses the better candidates and applies some mutation (e.g., changing a hyperparameter, adding/removing one layer) to them to produce the next generation. Naïve Evolution requires many trials to work, but it is very simple and easy to extend with new features. [Reference paper](https://arxiv.org/pdf/1703.01041.pdf)|
|[__SMAC__](#SMAC)|SMAC is based on Sequential Model-Based Optimization (SMBO). It adapts the most prominent previously used model class (Gaussian stochastic process models) and introduces the model class of random forests to SMBO, in order to handle categorical parameters. The SMAC supported by NNI is a wrapper around the SMAC3 GitHub repo. Note that SMAC needs to be installed via the `nnictl package` command. [Reference Paper,](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) [GitHub Repo](https://github.com/automl/SMAC3)|
|[__Batch tuner__](#Batch)|Batch tuner allows users to simply provide several configurations (i.e., choices of hyper-parameters) for their trial code. After all the configurations are finished, the experiment is done. Batch tuner only supports the type `choice` in the search space spec.|
|[__Grid Search__](#GridSearch)|Grid Search performs an exhaustive search through a manually specified subset of the hyperparameter space defined in the search space file. Note that the only acceptable types of search space are `choice`, `quniform` and `qloguniform`. The number `q` in `quniform` and `qloguniform` has a special meaning (different from the search space spec): it is the number of values that will be sampled evenly from the range between `low` and `high`.|
|[__Hyperband__](#Hyperband)|Hyperband tries to use limited resources to explore as many configurations as possible and find the promising ones to produce the final result. The basic idea is to generate many configurations, run them with a small trial budget to find the promising ones, and then further train those promising candidates to select several even more promising ones. [Reference Paper](https://arxiv.org/pdf/1603.06560.pdf)|
|[__Network Morphism__](#NetworkMorphism)|Network Morphism provides functions to automatically search for architectures of deep learning models. Every child network inherits knowledge from its parent network and morphs into diverse types of networks, including changes of depth, width, and skip-connections. Next, it estimates the value of a child network using historic architecture and metric pairs. Then it selects the most promising one to train. [Reference Paper](https://arxiv.org/abs/1806.10282)|
|[__Metis Tuner__](#MetisTuner)|Metis offers the following benefits when it comes to tuning parameters: while most tools only predict the optimal configuration, Metis gives you two outputs: (a) the current prediction of the optimal configuration, and (b) a suggestion for the next trial. No more guesswork. While most tools assume training datasets do not have noisy data, Metis actually tells you if you need to re-sample a particular hyper-parameter. [Reference Paper](https://www.microsoft.com/en-us/research/publication/metis-robustly-tuning-tail-latencies-cloud-systems/)|
|[__BOHB__](#BOHB)|BOHB is a follow-up work of Hyperband. It targets the weakness of Hyperband that new configurations are generated randomly without leveraging finished trials. In the name BOHB, HB means Hyperband and BO means Bayesian Optimization. BOHB leverages finished trials by building multiple TPE models; a proportion of new configurations are generated through these models. [Reference Paper](https://arxiv.org/abs/1807.01774)|
|[__GP Tuner__](#GPTuner)|Gaussian Process Tuner is a sequential model-based optimization (SMBO) approach with a Gaussian Process as the surrogate. [Reference Paper](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf), [GitHub Repo](https://github.com/fmfn/BayesianOptimization)|
## Usage of Built-in Tuners
Using a built-in tuner provided by the NNI SDK requires declaring **builtinTunerName** and **classArgs** in the `config.yml` file. In this part, we introduce the detailed usage, including the suggested scenarios, classArg requirements, and an example for each tuner.
Note: Please follow the format when you write your `config.yml` file. Some built-in tuners, such as SMAC, need to be installed via `nnictl package`.
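A minimal sketch of this declaration (the tuner name and argument values below are illustrative; see each tuner's section for its actual classArgs):

```yaml
# config.yml — minimal built-in tuner declaration (illustrative values)
tuner:
  builtinTunerName: TPE      # one of the built-in tuner names listed below
  classArgs:
    optimize_mode: maximize  # whether the tuner maximizes or minimizes the reported metric
```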
<a name="TPE"></a>
![](https://placehold.it/15/1589F0/000000?text=+) `TPE`
> Built-in Tuner Name: **TPE**
**Suggested scenario**
@@ -59,7 +58,7 @@ tuner:
![](https://placehold.it/15/1589F0/000000?text=+) `Random Search`
> Built-in Tuner Name: **Random**
**Suggested scenario**
@@ -83,7 +82,7 @@ tuner:
![](https://placehold.it/15/1589F0/000000?text=+) `Anneal`
> Built-in Tuner Name: **Anneal**
**Suggested scenario**
@@ -108,9 +107,9 @@ tuner:
<a name="Evolution"></a>
![](https://placehold.it/15/1589F0/000000?text=+) `Naïve Evolution`
> Built-in Tuner Name: **Evolution**
**Suggested scenario**
@@ -133,9 +132,9 @@ tuner:
![](https://placehold.it/15/1589F0/000000?text=+) `SMAC`
> Built-in Tuner Name: **SMAC**
**Please note that SMAC doesn't currently support running on Windows; see this [GitHub issue](https://github.com/automl/SMAC3/issues/483) for details.**
**Installation**
@@ -169,7 +168,7 @@ tuner:
![](https://placehold.it/15/1589F0/000000?text=+) `Batch Tuner`
> Built-in Tuner Name: BatchTuner
**Suggested scenario**
@@ -208,7 +207,7 @@ The search space file including the high-level key `combine_params`. The type of
![](https://placehold.it/15/1589F0/000000?text=+) `Grid Search`
> Built-in Tuner Name: **Grid Search**
**Suggested scenario**
@@ -230,7 +229,7 @@ tuner:
![](https://placehold.it/15/1589F0/000000?text=+) `Hyperband`
> Built-in Advisor Name: **Hyperband**
**Suggested scenario**
@@ -260,11 +259,11 @@ advisor:
![](https://placehold.it/15/1589F0/000000?text=+) `Network Morphism`
> Built-in Tuner Name: **NetworkMorphism**
**Installation**
NetworkMorphism requires [PyTorch](https://pytorch.org/get-started/locally), so users should install it first.
**Suggested scenario**
@@ -298,13 +297,13 @@ tuner:
![](https://placehold.it/15/1589F0/000000?text=+) `Metis Tuner`
> Built-in Tuner Name: **MetisTuner**
Note that the only acceptable types of search space are `choice`, `quniform`, `uniform` and `randint`.
**Suggested scenario**
Similar to TPE and SMAC, Metis is a black-box tuner. If your system takes a long time to finish each trial, Metis is more favorable than approaches such as random search. Furthermore, Metis provides guidance for the subsequent trial. Here is an [example](https://github.com/Microsoft/nni/tree/master/examples/trials/auto-gbdt/search_space_metis.json) of using Metis. Users only need to send the final result, such as `accuracy`, to the tuner by calling the NNI SDK. [Detailed Description](./MetisTuner.md)
**Requirement of classArg**
@@ -326,7 +325,7 @@ tuner:
![](https://placehold.it/15/1589F0/000000?text=+) `BOHB Advisor`
> Built-in Tuner Name: **BOHB**
**Installation**
@@ -338,7 +337,7 @@ nnictl package install --name=BOHB
**Suggested scenario**
Similar to Hyperband, BOHB is suggested when you have limited computation resources but a relatively large search space. It performs well in scenarios where intermediate results (e.g., accuracy) can reflect the quality of the final result to some extent. In this case, it may converge to a better configuration thanks to its use of Bayesian optimization. [Detailed Description](./BohbAdvisor.md)
**Requirement of classArg**
@@ -366,3 +365,45 @@ advisor:
    max_budget: 27
    eta: 3
```
<a name="GPTuner"></a>
![](https://placehold.it/15/1589F0/000000?text=+) `GP Tuner`
> Built-in Tuner Name: **GPTuner**
Note that the only acceptable types of search space are `choice`, `randint`, `uniform`, `quniform`, `loguniform`, `qloguniform`.
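For illustration, a search space restricted to these types might look like the following (the parameter names are hypothetical, not taken from any NNI example):

```json
{
  "learning_rate": {"_type": "loguniform", "_value": [0.0001, 0.1]},
  "batch_size": {"_type": "choice", "_value": [16, 32, 64]},
  "dropout_rate": {"_type": "quniform", "_value": [0.1, 0.9, 0.1]}
}
```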
**Suggested scenario**
As a strategy in the Sequential Model-Based Global Optimization (SMBO) family, GP Tuner replaces the expensive target function with a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is computationally cheaper and can be attacked with common tools. GP Tuner is therefore most suitable when evaluating the function to be optimized is very expensive, and it can be used when computation resources are limited. However, GP Tuner's computational cost grows as *O(N^3)* because of the need to invert the Gram matrix, so it is not suitable when many trials are needed. [Detailed Description](./GPTuner.md)
**Requirement of classArg**
* **optimize_mode** (*'maximize' or 'minimize', optional, default = 'maximize'*) - If 'maximize', the tuner will try to maximize metrics. If 'minimize', the tuner will try to minimize metrics.
* **utility** (*'ei', 'ucb' or 'poi', optional, default = 'ei'*) - The kind of utility (acquisition) function. 'ei', 'ucb' and 'poi' correspond to 'Expected Improvement', 'Upper Confidence Bound' and 'Probability of Improvement' respectively.
* **kappa** (*float, optional, default = 5*) - Used by the 'ucb' utility function. The bigger `kappa` is, the more exploratory the tuner will be.
* **xi** (*float, optional, default = 0*) - Used by the 'ei' and 'poi' utility functions. The bigger `xi` is, the more exploratory the tuner will be.
* **nu** (*float, optional, default = 2.5*) - Used to specify the Matérn kernel. The smaller `nu` is, the less smooth the approximated function is.
* **alpha** (*float, optional, default = 1e-6*) - Used to specify the Gaussian Process regressor. Larger values correspond to an increased noise level in the observations.
* **cold_start_num** (*int, optional, default = 10*) - Number of random exploration steps to perform before fitting the Gaussian Process. Random exploration can help by diversifying the exploration space.
* **selection_num_warm_up** (*int, optional, default = 1e5*) - Number of random points to evaluate when finding the point that maximizes the acquisition function.
* **selection_num_starting_points** (*int, optional, default = 250*) - Number of times to run L-BFGS-B from a random starting point after the warm-up.
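For reference, the three utility functions take the standard textbook forms below, written with the GP posterior mean \(\mu(x)\), posterior standard deviation \(\sigma(x)\), and the best observation so far \(f(x^{+})\). This is shown as a clarifying note on how `kappa` and `xi` steer exploration, not as a transcription of NNI's implementation:

```latex
\mathrm{UCB}(x) = \mu(x) + \kappa\,\sigma(x), \qquad
\mathrm{POI}(x) = P\!\left(f(x) \ge f(x^{+}) + \xi\right), \qquad
\mathrm{EI}(x) = \mathbb{E}\!\left[\max\left(f(x) - f(x^{+}) - \xi,\; 0\right)\right]
```

Larger `kappa` (in UCB) and larger `xi` (in POI and EI) put more weight on uncertain or not-yet-improving regions, i.e., more exploration.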
**Usage example**

```yaml
# config.yml
tuner:
  builtinTunerName: GPTuner
  classArgs:
    optimize_mode: maximize
    utility: 'ei'
    kappa: 5.0
    xi: 0.0
    nu: 2.5
    alpha: 1e-6
    cold_start_num: 10
    selection_num_warm_up: 100000
    selection_num_starting_points: 250
```
@@ -98,8 +98,11 @@ The total search space is 1,204,224; we set the maximum number of trials to 1000.
| HyperBand |0.414065|0.415222|0.417628|
| HyperBand |0.416807|0.417549|0.418828|
| HyperBand |0.415550|0.415977|0.417186|
| GP |0.414353|0.418563|0.420263|
| GP |0.414395|0.418006|0.420431|
| GP |0.412943|0.416566|0.418443|
For Metis, there are about 300 trials because it runs slowly due to the high time complexity, O(n^3), of its Gaussian Process.
## RocksDB Benchmark 'fillrandom' and 'readrandom'
@@ -4,10 +4,10 @@ A config file is needed when create an experiment, the path of the config file i
The config file is written in YAML format and needs to be written correctly.
This document describes the rules for writing the config file and provides some examples and templates.
- [Experiment config reference](#Experiment-config-reference)
  - [Template](#Template)
  - [Configuration spec](#Configuration-spec)
  - [Examples](#Examples)
<a name="Template"></a>
## Template
@@ -23,8 +23,12 @@ maxTrialNum:
#choice: local, remote, pai, kubeflow
trainingServicePlatform:
searchSpacePath:
#choice: true, false, default: false
useAnnotation:
#choice: true, false, default: false
multiPhase:
#choice: true, false, default: false
multiThread:
tuner:
#choice: TPE, Random, Anneal, Evolution
builtinTunerName:
@@ -55,8 +59,12 @@ maxTrialNum:
#choice: local, remote, pai, kubeflow
trainingServicePlatform:
searchSpacePath:
#choice: true, false, default: false
useAnnotation:
#choice: true, false, default: false
multiPhase:
#choice: true, false, default: false
multiThread:
tuner:
#choice: TPE, Random, Anneal, Evolution
builtinTunerName:
@@ -93,8 +101,12 @@ maxExecDuration:
maxTrialNum:
#choice: local, remote, pai, kubeflow
trainingServicePlatform:
#choice: true, false, default: false
useAnnotation:
#choice: true, false, default: false
multiPhase:
#choice: true, false, default: false
multiThread:
tuner: tuner:
#choice: TPE, Random, Anneal, Evolution #choice: TPE, Random, Anneal, Evolution
builtinTunerName: builtinTunerName:
...@@ -128,12 +140,14 @@ machineList:
* Description
__authorName__ is the name of the author who created the experiment.
TBD: add default value
* __experimentName__
* Description
__experimentName__ is the name of the created experiment.
TBD: add default value
* __trialConcurrency__
...@@ -153,7 +167,7 @@ machineList:
* __versionCheck__
* Description
NNI will check the version of the nniManager process and the version of trialKeeper on the remote, pai and kubernetes platforms. If you want to disable the version check, you can set versionCheck to false.
* __debug__
* Description
...@@ -192,6 +206,16 @@ machineList:
Note: if useAnnotation is set to true, the searchSpacePath field should be removed.
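As a hedged illustration of annotation-based trials (the variable names and values here are invented; refer to the NNI annotation docs for the exact syntax), the annotations are plain string literals, so the script still runs unmodified outside NNI:

```python
"""Illustrative trial script using NNI annotations instead of a search space file."""

'''@nni.variable(nni.choice(0.01, 0.1), name=learning_rate)'''
learning_rate = 0.1  # NNI rewrites this assignment for each trial when annotation is enabled

# Stand-in for real training; the "accuracy" here is purely illustrative.
accuracy = 1.0 - learning_rate

'''@nni.report_final_result(accuracy)'''
```

Because the annotations are ordinary strings, the file stays valid Python whether or not NNI is rewriting it.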
* __multiPhase__
* Description
__multiPhase__ enables a [multi-phase experiment](./MultiPhase.md).
* __multiThread__
* Description
__multiThread__ enables multi-thread mode for the dispatcher; if multiThread is set to `true`, the dispatcher will start a thread to process each command from NNI Manager.
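As an illustrative fragment (the values are examples only, not recommendations), the two switches sit at the top level of the experiment config alongside useAnnotation:

```yaml
# Illustrative fragment only; see the full template above.
useAnnotation: false
multiPhase: true     # enable multi-phase experiment
multiThread: true    # dispatcher handles each NNI Manager command in its own thread
```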
* __nniManagerIp__
* Description
...
# Run an Experiment on FrameworkController

NNI supports running experiments using [FrameworkController](https://github.com/Microsoft/frameworkcontroller), called frameworkcontroller mode. FrameworkController is built to orchestrate all kinds of applications on Kubernetes, so you don't need to install Kubeflow or a framework-specific operator like tf-operator or pytorch-operator. You can now use FrameworkController as the training service to run an NNI experiment.
## Prerequisite for on-premises Kubernetes Service
1. A **Kubernetes** cluster using Kubernetes 1.8 or later. Follow this [guideline](https://kubernetes.io/docs/setup/) to set up Kubernetes.
2. Prepare a **kubeconfig** file, which will be used by NNI to interact with your Kubernetes API server. By default, NNI manager uses $(HOME)/.kube/config as the kubeconfig file's path. You can also specify another kubeconfig file by setting the **KUBECONFIG** environment variable. Refer to this [guideline](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig) to learn more about kubeconfig.
3. If your NNI trial job needs GPU resources, you should follow this [guideline](https://github.com/NVIDIA/k8s-device-plugin) to configure the **Nvidia device plugin for Kubernetes**.
4. Prepare an **NFS server** and export a general purpose mount (we recommend mapping your NFS server path with the `root_squash` option, otherwise permission issues may arise when NNI copies files to NFS; refer to this [page](https://linux.die.net/man/5/exports) to learn what the root_squash option is), or **Azure File Storage**.
5. Install the **NFS client** on the machine where you install NNI and run nnictl to create the experiment. Run this command to install the NFSv4 client:
```bash
apt-get install nfs-common
```
6. Install **NNI**, following the install guide [here](QuickStart.md).
## Prerequisite for Azure Kubernetes Service
1. NNI supports Kubeflow based on Azure Kubernetes Service; follow the [guideline](https://azure.microsoft.com/en-us/services/kubernetes-service/) to set up Azure Kubernetes Service.
2. Install [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and __kubectl__. Use `az login` to set the Azure account, and connect the kubectl client to AKS; refer to this [guideline](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough#connect-to-the-cluster).
3. Follow the [guideline](https://docs.microsoft.com/en-us/azure/storage/common/storage-quickstart-create-account?tabs=portal) to create an Azure file storage account. If you use Azure Kubernetes Service, NNI needs the Azure Storage Service to store code files and output files.
4. To access the Azure storage service, NNI needs the access key of the storage account, and NNI uses the [Azure Key Vault](https://azure.microsoft.com/en-us/services/key-vault/) Service to protect your private key. Set up the Azure Key Vault Service and add a secret to Key Vault to store the access key of the Azure storage account. Follow this [guideline](https://docs.microsoft.com/en-us/azure/key-vault/quick-create-cli) to store the access key.
## Setup FrameworkController
Follow the [guideline](https://github.com/Microsoft/frameworkcontroller/tree/master/example/run) to set up FrameworkController in the Kubernetes cluster; NNI supports FrameworkController via the stateful set mode.
## Design
Please refer to the design of the [Kubeflow training service](./KubeflowMode.md); the FrameworkController training service pipeline is similar.
## Example
The FrameworkController config file format is:
```yaml
authorName: default
experimentName: example_mnist
trialConcurrency: 1
...@@ -71,8 +77,10 @@ frameworkcontrollerConfig:
    server: {your_nfs_server}
    path: {your_nfs_server_exported_path}
```
If you use Azure Kubernetes Service, you should set `frameworkcontrollerConfig` in your config YAML file as follows:
```yaml
frameworkcontrollerConfig:
  storage: azureStorage
  keyVault:
...@@ -82,22 +90,27 @@ frameworkcontrollerConfig:
    accountName: {your_storage_account_name}
    azureShare: {your_azure_share_name}
```
Note: you should explicitly set `trainingServicePlatform: frameworkcontroller` in the NNI config YAML file if you want to start an experiment in frameworkcontroller mode.
The trial's config format for NNI frameworkcontroller mode is a simplified version of FrameworkController's official config; you could refer to the [TensorFlow example of FrameworkController](https://github.com/Microsoft/frameworkcontroller/blob/master/example/framework/scenario/tensorflow/cpu/tensorflowdistributedtrainingwithcpu.yaml) for a deeper understanding.
Trial configuration in frameworkcontroller mode has the following configuration keys:
* taskRoles: you could set multiple task roles in the config file; each task role is a basic unit to process in the Kubernetes cluster.
    * name: the name of the task role, e.g. "worker", "ps", "master".
    * taskNum: the replica number of the task role.
    * command: the user's command to be run in the container.
    * gpuNum: the number of GPU devices used in the container.
    * cpuNum: the number of CPU devices used in the container.
    * memoryMB: the memory limitation to be specified for the container.
    * image: the docker image used to create the pod and run the program.
    * frameworkAttemptCompletionPolicy: the policy for running the framework; please refer to the [user-manual](https://github.com/Microsoft/frameworkcontroller/blob/master/doc/user-manual.md#frameworkattemptcompletionpolicy) for the specifics. Users could use the policy to control the pods; for example, if ps does not stop when only the worker stops, the completion policy can help stop ps.
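Putting the keys above together, a hedged sketch of a single-role trial section might look like the following (the code directory, command, image name and resource numbers are placeholders, not recommendations):

```yaml
trial:
  codeDir: ~/nni/examples/trials/mnist
  taskRoles:
    - name: worker
      taskNum: 1
      command: python3 mnist.py
      gpuNum: 1
      cpuNum: 1
      memoryMB: 8192
      image: msranni/nni:latest
      frameworkAttemptCompletionPolicy:
        minFailedTaskCount: 1
        minSucceededTaskCount: 1
```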
## How to run example
After you prepare a config file, you could run your experiment with nnictl. The way to start an experiment on FrameworkController is similar to Kubeflow; please refer to the [document](./KubeflowMode.md) for more information.
## Version check
NNI supports the version check feature since version 0.6, [refer](PaiMode.md).
GP Tuner on NNI
===
## GP Tuner
Bayesian optimization works by constructing a posterior distribution of functions (Gaussian Process here) that best describes the function you want to optimize. As the number of observations grows, the posterior distribution improves, and the algorithm becomes more certain of which regions in parameter space are worth exploring and which are not.
GP Tuner is designed to minimize the number of steps required to find a combination of parameters that is close to the optimal combination. To do so, this method uses a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is cheaper (in the computational sense) and for which common tools can be employed. Therefore, Bayesian optimization is most suitable for situations where sampling the function to be optimized is a very expensive endeavor.
This optimization approach is described in Section 3 of [Algorithms for Hyper-Parameter Optimization](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf).
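The loop described above can be sketched in plain Python. This is an illustrative toy, not NNI's actual GP Tuner implementation: the RBF kernel, length scale, candidate grid and expected-improvement acquisition are all assumptions made for the sketch.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf_kernel(a, b, length_scale=0.3):
    # Squared-exponential (RBF) kernel between two sets of 1-D points.
    d = np.asarray(a).reshape(-1, 1) - np.asarray(b).reshape(1, -1)
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression: posterior mean and variance at the test points.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ np.asarray(y_train)
    var = np.clip(np.diag(Kss - Ks.T @ K_inv @ Ks), 1e-12, None)
    return mu, var

def expected_improvement(mu, var, best_y):
    # EI acquisition for maximization, built from the normal pdf/cdf.
    sigma = np.sqrt(var)
    z = (mu - best_y) / sigma
    cdf = 0.5 * (1.0 + np.array([erf(v / sqrt(2.0)) for v in z]))
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return (mu - best_y) * cdf + sigma * pdf

def bayes_opt(f, bounds=(0.0, 1.0), n_init=3, n_iter=10, seed=0):
    # Sample a few random points, then repeatedly maximize EI over a grid.
    rng = np.random.default_rng(seed)
    xs = [float(x) for x in rng.uniform(bounds[0], bounds[1], size=n_init)]
    ys = [f(x) for x in xs]
    candidates = np.linspace(bounds[0], bounds[1], 200)
    for _ in range(n_iter):
        mu, var = gp_posterior(np.array(xs), np.array(ys), candidates)
        x_next = float(candidates[np.argmax(expected_improvement(mu, var, max(ys)))])
        xs.append(x_next)
        ys.append(f(x_next))
    best = int(np.argmax(ys))
    return xs[best], ys[best]

# Toy objective with its maximum at x = 0.6.
best_x, best_y = bayes_opt(lambda x: -(x - 0.6) ** 2)
```

The point of the sketch is the structure: the GP posterior summarizes the observations, and the cheap inner maximization of EI decides where to spend the next expensive evaluation.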
...@@ -2,12 +2,13 @@
===
## Overview
TrainingService is a module related to platform management and job scheduling in NNI. TrainingService is designed to be easily implemented: we define an abstract class TrainingService as the parent class of all kinds of TrainingService, and users just need to inherit the parent class and complete their own child class if they want to implement a customized TrainingService.
## System architecture
![](../img/NNIDesign.jpg)
The brief system architecture of NNI is shown in the picture. NNIManager is the core management module of the system, in charge of calling TrainingService to manage trial jobs and of the communication between different modules. Dispatcher is a message processing center responsible for message dispatch. TrainingService is a module to manage trial jobs; it communicates with the NNIManager module, and has different instances for different training platforms. For the time being, NNI supports the local platform, [remote platform](RemoteMachineMode.md), [PAI platform](PaiMode.md), [Kubeflow platform](KubeflowMode.md) and [FrameworkController platform](FrameworkController.md).
In this document, we introduce the brief design of TrainingService. If users want to add a new TrainingService instance, they just need to complete a child class to implement TrainingService; they don't need to understand the code details of NNIManager, Dispatcher or other modules.
## Folder structure of code
...@@ -63,6 +64,7 @@ abstract class TrainingService {
The parent class of TrainingService has a few abstract functions; users need to inherit the parent class and implement all of these abstract functions.
__setClusterMetadata(key: string, value: string)__
ClusterMetadata is the data related to platform details; for example, the ClusterMetadata defined for the remote machine server is:
```
export class RemoteMachineMeta {
```
...@@ -91,9 +93,11 @@ export class RemoteMachineMeta {
The metadata includes the host address, the username and other configuration related to the platform. Users need to define their own metadata format, and set the metadata instance in this function. This function is called before the experiment is started, to set the configuration of remote machines.
__getClusterMetadata(key: string)__
This function returns the metadata value according to the key; it can be left empty if users don't need to use it.
__submitTrialJob(form: JobApplicationForm)__
SubmitTrialJob is a function to submit new trial jobs; users should generate a job instance of the TrialJobDetail type. TrialJobDetail is defined as follows:
```
interface TrialJobDetail {
```
...@@ -113,37 +117,49 @@ interface TrialJobDetail {
According to the kind of implementation, users could put the job detail into a job queue and keep fetching jobs from the queue to prepare and run them; or they could finish the preparing and running process in this function, and return the job detail after the submission work.
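The queue-based variant can be sketched as follows. This is a hypothetical toy analogue in Python for illustration only; NNI's actual interface is the TypeScript class above, and every name here is made up:

```python
import itertools
import queue
from dataclasses import dataclass

@dataclass
class ToyTrialJobDetail:
    # Hypothetical, trimmed-down analogue of the TrialJobDetail interface.
    id: str
    status: str = "WAITING"

class ToyTrainingService:
    """Illustrates the submit-to-queue / fetch-in-run-loop pattern."""

    def __init__(self):
        self._jobs = {}
        self._queue = queue.Queue()
        self._next_id = itertools.count()

    def submit_trial_job(self):
        # Analogue of submitTrialJob: create the job detail, defer the real work.
        job = ToyTrialJobDetail(id=str(next(self._next_id)))
        self._jobs[job.id] = job
        self._queue.put(job.id)
        return job

    def run_once(self):
        # One iteration of the run() loop: fetch a queued job and launch it.
        try:
            job_id = self._queue.get_nowait()
        except queue.Empty:
            return None
        self._jobs[job_id].status = "RUNNING"
        # ... platform-specific preparation and launch would happen here ...
        self._jobs[job_id].status = "SUCCEEDED"
        return self._jobs[job_id]
```

The design point is the decoupling: submission returns immediately with a job detail in a pending state, while the run loop does the expensive preparation and launching asynchronously.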
__cancelTrialJob(trialJobId: string, isEarlyStopped?: boolean)__
If this function is called, the trial started by the platform should be canceled. Different platforms have different methods to cancel a running job; this function should be implemented according to the specific platform.
__updateTrialJob(trialJobId: string, form: JobApplicationForm)__
This function is called to update the trial job's status. The trial job's status should be detected according to the platform, and updated to `RUNNING`, `SUCCEED`, `FAILED` etc.
__getTrialJob(trialJobId: string)__
This function returns a trialJob detail instance according to trialJobId.
__listTrialJobs()__
Users should put all trial job detail information into a list, and return the list.
__addTrialJobMetricListener(listener: (metric: TrialJobMetric) => void)__
NNI will hold an EventEmitter to get job metrics; if new job metrics are detected, the EventEmitter will be triggered. Users should start the EventEmitter in this function.
__removeTrialJobMetricListener(listener: (metric: TrialJobMetric) => void)__
Close the EventEmitter.
__run()__
The run() function is the main loop in TrainingService; users could set a while loop to execute their logic code, and finish executing it when the experiment is stopped.
__cleanUp()__
This function is called to clean up the environment when an experiment is stopped. Users should do the platform-related cleanup operations in this function.
## TrialKeeper tool
NNI offers a TrialKeeper tool to help maintain trial jobs. Users can find the source code in `nni/tools/nni_trial_tool`. If users want to run trial jobs on a cloud platform, this tool is a fine choice to help keep trials running on the platform.
The running architecture of TrialKeeper is shown as follows:
![](../img/trialkeeper.jpg)
When users submit a trial job to a cloud platform, they should wrap their trial command into TrialKeeper, and start a TrialKeeper process on the cloud platform. Note that TrialKeeper uses a RESTful server to communicate with TrainingService; users should start a RESTful server on the local machine to receive the metrics sent from TrialKeeper. The source code of the RESTful server can be found in `nni/src/nni_manager/training_service/common/clusterJobRestServer.ts`.
## Reference
For more information about how to debug, please [refer](HowToDebug.md).
For the guideline on how to contribute, please [refer](Contributing.md).
# Run an Experiment on Kubeflow

Now NNI supports running experiments on [Kubeflow](https://github.com/kubeflow/kubeflow), called kubeflow mode. Before starting to use NNI kubeflow mode, you should have a Kubernetes cluster, either on-premises or [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/), and an Ubuntu machine on which [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) is set up to connect to your Kubernetes cluster. If you are not familiar with Kubernetes, [here](https://kubernetes.io/docs/tutorials/kubernetes-basics/) is a good start. In kubeflow mode, your trial program will run as a Kubeflow job in the Kubernetes cluster.
## Prerequisite for on-premises Kubernetes Service
1. A **Kubernetes** cluster using Kubernetes 1.8 or later. Follow this [guideline](https://kubernetes.io/docs/setup/) to set up Kubernetes.
2. Download, set up, and deploy **Kubeflow** to your Kubernetes cluster. Follow this [guideline](https://www.kubeflow.org/docs/started/getting-started/) to set up Kubeflow.
3. Prepare a **kubeconfig** file, which will be used by NNI to interact with your Kubernetes API server. By default, NNI manager uses $(HOME)/.kube/config as the kubeconfig file's path. You can also specify another kubeconfig file by setting the **KUBECONFIG** environment variable. Refer to this [guideline](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig) to learn more about kubeconfig.
4. If your NNI trial job needs GPU resources, you should follow this [guideline](https://github.com/NVIDIA/k8s-device-plugin) to configure the **Nvidia device plugin for Kubernetes**.
5. Prepare an **NFS server** and export a general purpose mount (we recommend mapping your NFS server path with the `root_squash` option, otherwise permission issues may arise when NNI copies files to NFS; refer to this [page](https://linux.die.net/man/5/exports) to learn what the root_squash option is), or **Azure File Storage**.
6. Install the **NFS client** on the machine where you install NNI and run nnictl to create the experiment. Run this command to install the NFSv4 client:
...@@ -16,37 +19,47 @@ Now NNI supports running experiment on [Kubeflow](https://github.com/kubeflow/ku ...@@ -16,37 +19,47 @@ Now NNI supports running experiment on [Kubeflow](https://github.com/kubeflow/ku
7. Install **NNI**, follow the install guide [here](QuickStart.md). 7. Install **NNI**, follow the install guide [here](QuickStart.md).
## Prerequisite for Azure Kubernetes Service ## Prerequisite for Azure Kubernetes Service
1. NNI support kubeflow based on Azure Kubernetes Service, follow the [guideline](https://azure.microsoft.com/en-us/services/kubernetes-service/) to set up Azure Kubernetes Service.
1. NNI support Kubeflow based on Azure Kubernetes Service, follow the [guideline](https://azure.microsoft.com/en-us/services/kubernetes-service/) to set up Azure Kubernetes Service.
2. Install [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and __kubectl__. Use `az login` to set azure account, and connect kubectl client to AKS, refer this [guideline](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough#connect-to-the-cluster). 2. Install [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and __kubectl__. Use `az login` to set azure account, and connect kubectl client to AKS, refer this [guideline](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough#connect-to-the-cluster).
3. Deploy kubeflow on Azure Kubernetes Service, follow the [guideline](https://www.kubeflow.org/docs/started/getting-started/). 3. Deploy Kubeflow on Azure Kubernetes Service, follow the [guideline](https://www.kubeflow.org/docs/started/getting-started/).
4. Follow the [guideline](https://docs.microsoft.com/en-us/azure/storage/common/storage-quickstart-create-account?tabs=portal) to create azure file storage account. If you use Azure Kubernetes Service, NNI need Azure Storage Service to store code files and the output files. 4. Follow the [guideline](https://docs.microsoft.com/en-us/azure/storage/common/storage-quickstart-create-account?tabs=portal) to create azure file storage account. If you use Azure Kubernetes Service, NNI need Azure Storage Service to store code files and the output files.
5. To access the Azure storage service, NNI needs the access key of the storage account, and NNI uses the [Azure Key Vault](https://azure.microsoft.com/en-us/services/key-vault/) service to protect your private key. Set up the Azure Key Vault service and add a secret to Key Vault to store the access key of the Azure storage account. Follow this [guideline](https://docs.microsoft.com/en-us/azure/key-vault/quick-create-cli) to store the access key.
## Design
![](../img/kubeflow_training_design.png)
The Kubeflow training service instantiates a Kubernetes REST client to interact with your K8s cluster's API server.
For each trial, we will upload all the files in your local codeDir path (configured in nni_config.yml), together with NNI-generated files like parameter.cfg, into a storage volume. Right now we support two kinds of storage volumes: [NFS](https://en.wikipedia.org/wiki/Network_File_System) and [Azure file storage](https://azure.microsoft.com/en-us/services/storage/files/); you should configure the storage volume in the NNI config YAML file. After the files are prepared, the Kubeflow training service will call the K8S REST API to create Kubeflow jobs ([tf-operator](https://github.com/kubeflow/tf-operator) jobs or [pytorch-operator](https://github.com/kubeflow/pytorch-operator) jobs) in K8S, and mount your storage volume into the job's pod. Output files of the Kubeflow job, like stdout, stderr, trial.log, or model files, will also be copied back to the storage volume. NNI will show the storage volume's URL for each trial in the WebUI, to let users browse the log files and the job's output files.
## Supported operator
NNI only supports the tf-operator and pytorch-operator of Kubeflow; other operators are not tested.
Users could set the operator type in the config file.
The setting of tf-operator:
```yaml
kubeflowConfig:
  operator: tf-operator
```
The setting of pytorch-operator:
```yaml
kubeflowConfig:
  operator: pytorch-operator
```
If users want to use tf-operator, they could set `ps` and `worker` in the trial config. If users want to use pytorch-operator, they could set `master` and `worker` in the trial config.
## Supported storage type
NNI supports NFS and Azure Storage to store the code and output files; users could set the storage type in the config file and set the corresponding config.
The settings for NFS storage are as follows:
```yaml
kubeflowConfig:
  storage: nfs
  nfs:
    # ...
    # Your NFS server export path, like /var/nfs/nni
    path: {your_nfs_server_export_path}
```
If you use Azure storage, you should set `kubeflowConfig` in your config YAML file as follows:
```yaml
kubeflowConfig:
  storage: azureStorage
  keyVault:
    # ...
    azureShare: {your_azure_share_name}
```
## Run an experiment
Use `examples/trials/mnist` as an example. This is a TensorFlow job that uses the tf-operator of Kubeflow. The NNI config YAML file's content is like:
```yaml
authorName: default
experimentName: example_mnist
trialConcurrency: 2
# ... (rest of the config elided)
```
Note: You should explicitly set `trainingServicePlatform: kubeflow` in the NNI config YAML file if you want to start the experiment in Kubeflow mode.
If you want to run PyTorch jobs, you could set your config file as follows:
```yaml
authorName: default
experimentName: example_mnist_distributed_pytorch
trialConcurrency: 1
# ... (rest of the config elided)
```
Trial configuration in Kubeflow mode has the following configuration keys:

* codeDir
    * Code directory, where you put training code and config files.
* worker (required). This config section is used to configure the TensorFlow worker role.
    * replicas
        * Required key. Should be a positive number, depending on how many replicas you want to run for the TensorFlow worker role.
    * command
        * Required key. Command to launch your trial job, like `python mnist.py`.
    * memoryMB
        * Required key. Should be a positive number based on your trial program's memory requirement.
    * cpuNum
    * gpuNum
    * image
        * Required key. In Kubeflow mode, your trial program will be scheduled by Kubernetes to run in a [Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/). This key is used to specify the Docker image used to create the pod where your trial program will run.
        * We already built a Docker image [msranni/nni](https://hub.docker.com/r/msranni/nni/) on [Docker Hub](https://hub.docker.com/). It contains NNI Python packages, Node modules, and JavaScript artifact files required to start an experiment, and all of NNI's dependencies. The Dockerfile used to build this image can be found [here](https://github.com/Microsoft/nni/tree/master/deployment/docker/Dockerfile). You can either use this image directly in your config file, or build your own image based on it.
* apiVersion
    * Required key. The API version of your Kubeflow.
* ps (optional). This config section is used to configure the TensorFlow parameter server role.
* master (optional). This config section is used to configure the PyTorch parameter server role.
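As a sketch only, a trial section assembling the keys above for tf-operator might look like the fragment below. The values are illustrative, not recommended defaults; refer to the full mnist example earlier in this document for a complete, working configuration.

```yaml
trial:
  codeDir: .
  worker:
    replicas: 2
    command: python3 mnist.py
    gpuNum: 1
    cpuNum: 1
    memoryMB: 8192
    image: msranni/nni
  ps:
    replicas: 1
    command: python3 mnist.py
    gpuNum: 0
    cpuNum: 1
    memoryMB: 8192
    image: msranni/nni
```

Because this fragment targets tf-operator, it defines `ps` and `worker`; for pytorch-operator the `ps` section would be replaced by `master`.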
Once you have filled in the NNI experiment config file and saved it (for example, as exp_kubeflow.yml), run the following command
```bash
nnictl create --config exp_kubeflow.yml
```
to start the experiment in Kubeflow mode. NNI will create a Kubeflow tfjob or pytorchjob for each trial, and the job name format is something like `nni_exp_{experiment_id}_trial_{trial_id}`.
You can see the Kubeflow tfjob created by NNI in your Kubernetes dashboard.
Notice: In Kubeflow mode, NNIManager will start a REST server and listen on a port which is your NNI WebUI's port plus 1. For example, if your WebUI port is `8080`, the REST server will listen on `8081` to receive metrics from trial jobs running in Kubernetes. So you should enable TCP port `8081` in your firewall rules to allow incoming traffic.
Once a trial job is completed, you can go to the NNI WebUI's overview page (like http://localhost:8080/oview) to check the trial's information.
## Version check
NNI has supported the version check feature since version 0.6; refer to [PaiMode](PaiMode.md).
For any problems when using NNI in Kubeflow mode, please create issues on the [NNI GitHub repo](https://github.com/Microsoft/nni).
nnictl supports the following commands:
* [nnictl trial](#trial)
* [nnictl top](#top)
* [nnictl experiment](#experiment)
* [nnictl platform](#platform)
* [nnictl config](#config)
* [nnictl log](#log)
* [nnictl webui](#webui)
* Usage
```bash
nnictl experiment list [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------|------|
|--all| False| |List all experiments|
* __nnictl experiment delete__
* Description
Delete one or all experiments, including logs, results, environment information, and cache. Use it to delete useless experiment results or to save disk space.
* Usage
```bash
nnictl experiment delete [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------|------|
|id| False| |ID of the experiment|
|--all| False| |Delete all experiments|
<a name="export"></a>
```bash
nnictl experiment import [experiment_id] -f experiment_data.json
```
<a name="platform"></a>
![](https://placehold.it/15/1589F0/000000?text=+) `Manage platform information`
* __nnictl platform clean__
* Description
It is used to clean up disk space on a target platform. The provided YAML file includes the information of the target platform, and it follows the same schema as the NNI configuration file.
* Note
If the target platform is being used by other users, cleaning it may cause unexpected errors for them.
* Usage
```bash
nnictl platform clean [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------|------|
|--config| True| |The path of the YAML config file used when creating the experiment|
<a name="config"></a>
![](https://placehold.it/15/1589F0/000000?text=+) `nnictl config show`
# ChangeLog
## Release 0.9 - 7/1/2019

### Major Features

* General NAS programming interface
    * Add `enas-mode` and `oneshot-mode` for the NAS interface: [PR #1201](https://github.com/microsoft/nni/pull/1201#issue-291094510)
* [Gaussian Process Tuner with Matern kernel](./GPTuner.md)
* Multiphase experiment support
    * Added new training service support for multiphase experiments: PAI mode supports multiphase experiments since v0.9.
    * Added multiphase capability for the following builtin tuners: TPE, Random Search, Anneal, Naïve Evolution, SMAC, Network Morphism, Metis Tuner. For details, please refer to [Write a tuner that leverages multi-phase](./MultiPhase.md#write-a-tuner-that-leverages-multi-phase)
* Web Portal
    * Enable trial comparison in the Web Portal. For details, refer to [View trials status](WebUI.md#view-trials-status)
    * Allow users to adjust the rendering interval of the Web Portal. For details, refer to [View Summary Page](WebUI.md#view-summary-page)
    * Show intermediate results in a more friendly way. For details, refer to [View trials status](WebUI.md#view-trials-status)
* [Commandline Interface](Nnictl.md)
    * `nnictl experiment delete`: delete one or all experiments, including logs, results, environment information, and cache. Use it to delete useless experiment results or to save disk space.
    * `nnictl platform clean`: clean up disk space on a target platform. The provided YAML file includes the information of the target platform, and it follows the same schema as the NNI configuration file.
### Bug fix and other changes
* Tuner Installation Improvements: add [sklearn](https://scikit-learn.org/stable/) to nni dependencies.
* (Bug Fix) Failed to connect to PAI http code - [Issue #1076](https://github.com/microsoft/nni/issues/1076)
* (Bug Fix) Validate file name for PAI platform - [Issue #1164](https://github.com/microsoft/nni/issues/1164)
* (Bug Fix) Update GMM evaluation in Metis Tuner
* (Bug Fix) Negative time number rendering in Web Portal - [Issue #1182](https://github.com/microsoft/nni/issues/1182), [Issue #1185](https://github.com/microsoft/nni/issues/1185)
* (Bug Fix) Hyper-parameter not shown correctly in WebUI when there is only one hyper parameter - [Issue #1192](https://github.com/microsoft/nni/issues/1192)
## Release 0.8 - 6/4/2019
### Major Features
* Support NNI on Windows for OpenPAI/Remote mode
    * NNI running on Windows for remote mode
    * NNI running on Windows for OpenPAI mode
* Advanced features for using GPU
    * Run multiple trial jobs on the same GPU for local and remote mode
    * Run trial jobs on a GPU that is running non-NNI jobs
* Kubeflow v1beta2 operator
    * Support Kubeflow TFJob/PyTorchJob v1beta2
* [General NAS programming interface](./GeneralNasInterfaces.md)
    * Provide a NAS programming interface for users to easily express their neural architecture search space through NNI annotation
    * Provide a new command `nnictl trial codegen` for debugging the NAS code
    * Tutorial of the NAS programming interface, example of NAS on MNIST, customized random tuner for NAS
* Support resuming tuner/advisor's state for experiment resume
    * For experiment resume, the tuner/advisor will be resumed by replaying finished trial data
* Web Portal
    * Improve the design of copying a trial's parameters
    * Support 'randint' type in the hyper-parameter graph
    * Use `shouldComponentUpdate` to avoid unnecessary rendering
### Bug fix and other changes

* Bug fix that `nnictl update` has inconsistent command styles
* Support import data for SMAC tuner
* Bug fix that experiment state transitions from ERROR back to RUNNING
* Fix bug of table entries
* Nested search space refinement
* Refine 'randint' type and support lower bound
* [Comparison of different hyper-parameter tuning algorithms](./CommunitySharings/HpoComparision.md)
* [Comparison of NAS algorithms](./CommunitySharings/NasComparision.md)
* [NNI practice on Recommenders](./CommunitySharings/NniPracticeSharing/RecommendersSvd.md)
* Unable to kill all python threads after nnictl stop in async dispatcher mode
* `nnictl --version` does not work with make dev-install
* All trial jobs' status stays on 'waiting' for a long time on the OpenPAI platform
## Release 0.6 - 4/2/2019
### Bug fix
* [Add shmMB config key for OpenPAI](https://github.com/Microsoft/nni/issues/842)
* Fix the bug that doesn't show any result if metrics is a dict
* Fix the number calculation issue for float types in Hyperband
* Fix a bug in the search space conversion in the SMAC tuner
All types of sampling strategies and their parameters are listed here:
| Grid Search Tuner | &#10003; | | | &#10003; | | &#10003; | | | | |
| Hyperband Advisor | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; |
| Metis Tuner | &#10003; | &#10003; | &#10003; | &#10003; | | | | | | |
| GP Tuner | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | | | | |
Known Limitations:
# Scikit-learn in NNI
[Scikit-learn](https://github.com/scikit-learn/scikit-learn) is a popular machine learning tool for data mining and data analysis. It supports many kinds of machine learning models like LinearRegression, LogisticRegression, DecisionTree, SVM, etc. How to use scikit-learn more efficiently is a valuable topic.
NNI supports many kinds of tuning algorithms to search for the best models and/or hyper-parameters for scikit-learn, and supports many kinds of environments like local machine, remote servers, and cloud.
## 1. How to run the example
To start using NNI, you should install the NNI package and use the command line tool `nnictl` to start an experiment. For more information about installation and preparing the environment, please refer [here](QuickStart.md).
After you have installed NNI, you could enter the corresponding folder and start the experiment using the following commands:
```bash
nnictl create --config ./config.yml
```
### 2.1 classification
This example uses the digits dataset, which is made up of 1797 8x8 images; each image is a hand-written digit, and the goal is to classify these images into 10 classes.
In this example, we use SVC as the model and choose some parameters of this model, including `"C", "kernel", "degree", "gamma" and "coef0"`. For more information about these parameters, please refer [here](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html).
### 2.2 regression
This example uses the Boston Housing Dataset. This dataset consists of the prices of houses in various places in Boston, along with information such as crime rate (CRIM), areas of non-retail business in the town (INDUS), the age of people who own the house (AGE), etc., used to predict the house prices of Boston.
In this example, we tune different kinds of regression models including `"LinearRegression", "SVR", "KNeighborsRegressor", "DecisionTreeRegressor"` and some parameters like `"svr_kernel", "knr_weights"`. You could get more details about these models from [here](https://scikit-learn.org/stable/supervised_learning.html#supervised-learning).
## 3. How to write scikit-learn code using NNI
It is easy to use NNI in your scikit-learn code; there are only a few steps.
* __step 1__
Then you could read these values as a dict from your Python code; please go on to step 2.
* __step 2__
At the beginning of your Python code, you should `import nni` to ensure the package works normally.
First, you should use the `nni.get_next_parameter()` function to get the parameters given by NNI. Then you could use these parameters to update your code.
For example, if you define your search_space.json in the following format:
```json
{
  ...
}
```
Then you could use these variables to write your scikit-learn code.
* __step 3__
After you finish your training, you get your model's score, like precision, recall, or MSE. NNI needs your score for its tuning algorithms to generate the next group of parameters, so please report the score back to NNI to start the next trial job.

You just need to use `nni.report_final_result(score)` to communicate with NNI after you run your scikit-learn code. Or, if you have multiple scores during the training steps, you could also report them back to NNI using `nni.report_intermediate_result(score)`. Note: you do not have to report intermediate results of your job, but you must report back your final result.
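The three steps above can be sketched as a minimal trial script. This is an illustration only, not the shipped example: `train_and_score` is a hypothetical stand-in for your actual scikit-learn training code, and `nni` is imported with a fallback so the sketch also runs outside an NNI experiment.

```python
try:
    import nni  # available inside an NNI trial
except ImportError:
    nni = None  # allows running this sketch standalone

# Fallback parameters used when no tuner is attached.
DEFAULT_PARAMS = {"C": 1.0, "kernel": "rbf"}

def train_and_score(params):
    # Hypothetical stand-in for fitting a scikit-learn model and
    # computing a score such as precision, recall, or MSE.
    return 0.9 if params["kernel"] == "rbf" else 0.8

def main():
    # Step 2: get the parameters generated by the tuner (or fall back).
    params = nni.get_next_parameter() if nni else dict(DEFAULT_PARAMS)
    score = train_and_score(params)
    # Step 3: report the final score back to NNI.
    if nni:
        nni.report_final_result(score)
    return score

if __name__ == "__main__":
    main()
```

When launched by `nnictl`, the parameters come from the tuner and the final score drives the next round of suggestions; when run standalone, the defaults are used and no report is sent.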
Refer to [SearchSpaceSpec.md](./SearchSpaceSpec.md) to learn more about search spaces.
```python
RECEIVED_PARAMS = nni.get_next_parameter()
```
`RECEIVED_PARAMS` is an object, for example:
`{"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}`.
- Report metric data periodically (optional)
```python
nni.report_intermediate_result(metrics)
```
`metrics` could be any python object. If users use an NNI built-in tuner/assessor, `metrics` can only have two formats: 1) a number, e.g. float or int; 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to the [assessor](BuiltinAssessors.md). Usually, `metrics` could be the periodically evaluated loss or accuracy.
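The two accepted formats can be made concrete with a small validity check. This helper is only illustrative — it is not part of the NNI API:

```python
def is_valid_builtin_metrics(metrics):
    """Return True if `metrics` matches one of the two formats accepted by
    NNI built-in tuners/assessors: a plain number, or a dict whose 'default'
    key maps to a number. (Illustrative helper, not an NNI API.)"""
    if isinstance(metrics, (int, float)):
        return True
    return isinstance(metrics, dict) and isinstance(metrics.get("default"), (int, float))

print(is_valid_builtin_metrics(0.93))                             # -> True: a plain number
print(is_valid_builtin_metrics({"default": 0.93, "loss": 0.21}))  # -> True: dict with numeric 'default'
print(is_valid_builtin_metrics({"loss": 0.21}))                   # -> False: no 'default' key
```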
- Report performance of the configuration
*Please refer to [here](https://nni.readthedocs.io/en/latest/sdk_reference.html) for more APIs (e.g., `nni.get_sequence_id()`) provided by NNI.
<a name="nni-annotation"></a>
## NNI Python Annotation
In the YAML config file, you need to set *useAnnotation* to true to enable NNI annotation:

```yaml
useAnnotation: true
```
## Where are my trials?

### Local Mode
In NNI, every trial has a dedicated directory to output its own data. In each trial, an environment variable called `NNI_OUTPUT_DIR` is exported. Under this directory, you can find each trial's code, data, and other possible logs. In addition, each trial's log (including stdout) will be redirected to a file named `trial.log` under that directory.
If NNI Annotation is used, the trial's converted code is placed in a separate temporary directory. You can check this in a file named `run.sh` under the directory indicated by `NNI_OUTPUT_DIR`. The second line of this file (i.e., the `cd` command) changes to the actual directory where the code is located. Below is an example of `run.sh`:
```bash
#!/bin/bash
cd /tmp/user_name/nni/annotation/tmpzj0h72x6  # This is the actual directory
export NNI_PLATFORM=local
```
...@@ -149,7 +151,7 @@ echo $? `date +%s%3N` >/home/user_name/nni/experiments/$experiment_id$/trials/$t
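From trial code, the exported variable can be read directly. A minimal sketch, assuming the trial runs under NNI (the `'.'` fallback is only so the script also works when launched by hand):

```python
import os

# NNI exports NNI_OUTPUT_DIR inside every trial; trial.log lives under it.
output_dir = os.environ.get('NNI_OUTPUT_DIR', '.')  # fallback for running outside NNI
log_path = os.path.join(output_dir, 'trial.log')
print(log_path)  # the file that captures this trial's stdout
```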
### Other Modes
When running trials on other platforms, such as a remote machine or PAI, the environment variable `NNI_OUTPUT_DIR` refers only to the output directory of the trial, and the trial code and `run.sh` might not be there. However, `trial.log` will be transmitted back to the local machine into the trial's directory, which defaults to `~/nni/experiments/$experiment_id$/trials/$trial_id$/`.
For more information, please refer to [HowToDebug](HowToDebug.md).
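For these modes, the default local copy of a trial's log can be located with a small helper (hypothetical; the experiment and trial ids below are placeholders):

```python
import os

def trial_log_path(experiment_id, trial_id):
    """Build the default local path where trial.log is copied back.

    Illustrative helper based on the default layout
    ~/nni/experiments/<experiment_id>/trials/<trial_id>/.
    """
    return os.path.expanduser(
        os.path.join('~', 'nni', 'experiments', experiment_id, 'trials', trial_id, 'trial.log'))

print(trial_log_path('aBcDeF', 'xYz123'))  # placeholder ids
```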
...
...@@ -96,7 +96,7 @@ html_theme_options = {
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['../static']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
...@@ -191,4 +191,5 @@ def setup(app):
'enable_eval_rst': True,
'enable_auto_toc_tree': False,
}, True)
app.add_transform(AutoStructify)
app.add_stylesheet('css/custom.css')
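Read together, these `conf.py` changes wire a custom stylesheet into the Sphinx build: `html_static_path` copies the `../static` directory into the build output, and `setup()` registers the CSS file relative to it. A condensed sketch, assuming the rest of `conf.py` is unchanged (note that `app.add_stylesheet` was deprecated in favor of `app.add_css_file` in Sphinx 1.8):

```python
# Sketch of the relevant conf.py pieces only, not the full file.
html_static_path = ['../static']   # contents are copied into _static/ at build time

def setup(app):
    # Registers css/custom.css (path relative to the static output dir).
    # On Sphinx >= 1.8, prefer: app.add_css_file('css/custom.css')
    app.add_stylesheet('css/custom.css')
```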
docs/img/webui-img/addColumn.png (image changed: 35.7 KB -> 42 KB)
docs/img/webui-img/compare.png (image changed: 61.7 KB -> 49.9 KB)
docs/img/webui-img/detail-local.png (image changed: 37.2 KB -> 21.7 KB)
.wy-table-responsive table td, .wy-table-responsive table th {
    white-space: normal;
}