Unverified commit ccb2211e authored by chicm-ms, committed by GitHub

Merge pull request #17 from microsoft/master

pull code
parents 58fd0c84 31dc58e9
@@ -52,32 +52,32 @@ The tool dispatches and runs trial jobs generated by tuning algorithms to search
</ul>
</td>
<td>
<a href="docs/en_US/BuiltinTuner.md">Tuner</a>
<ul>
<li><a href="docs/en_US/BuiltinTuner.md#TPE">TPE</a></li>
<li><a href="docs/en_US/BuiltinTuner.md#Random">Random Search</a></li>
<li><a href="docs/en_US/BuiltinTuner.md#Anneal">Anneal</a></li>
<li><a href="docs/en_US/BuiltinTuner.md#Evolution">Naive Evolution</a></li>
<li><a href="docs/en_US/BuiltinTuner.md#SMAC">SMAC</a></li>
<li><a href="docs/en_US/BuiltinTuner.md#Batch">Batch</a></li>
<li><a href="docs/en_US/BuiltinTuner.md#Grid">Grid Search</a></li>
<li><a href="docs/en_US/BuiltinTuner.md#Hyperband">Hyperband</a></li>
<li><a href="docs/en_US/BuiltinTuner.md#NetworkMorphism">Network Morphism</a></li>
<li><a href="examples/tuners/enas_nni/README.md">ENAS</a></li>
<li><a href="docs/en_US/BuiltinTuner.md#MetisTuner">Metis Tuner</a></li>
<li><a href="docs/en_US/BuiltinTuner.md#BOHB">BOHB</a></li>
</ul>
<a href="docs/en_US/BuiltinAssessors.md#assessor">Assessor</a>
<ul>
<li><a href="docs/en_US/BuiltinAssessors.md#Medianstop">Median Stop</a></li>
<li><a href="docs/en_US/BuiltinAssessors.md#Curvefitting">Curve Fitting</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="docs/en_US/LocalMode.md">Local Machine</a></li>
<li><a href="docs/en_US/RemoteMachineMode.md">Remote Servers</a></li>
<li><a href="docs/en_US/PaiMode.md">OpenPAI</a></li>
<li><a href="docs/en_US/KubeflowMode.md">Kubeflow</a></li>
<li><a href="docs/en_US/FrameworkControllerMode.md">FrameworkController on K8S (AKS etc.)</a></li>
</ul>
@@ -129,7 +129,7 @@ python -m pip install --upgrade nni
Note:
* `--user` can be added if you want to install NNI in your home directory, which does not require any special privileges.
* Currently NNI on Windows only supports local mode. Anaconda or Miniconda is highly recommended for installing NNI on Windows.
* If there is any error like `Segmentation fault`, please refer to the [FAQ](docs/en_US/FAQ.md).
**Install through source code**
@@ -229,11 +229,11 @@ You can use these commands to get more information about the experiment
## **How to**
* [Install NNI](docs/en_US/Installation.md)
* [Use command line tool nnictl](docs/en_US/Nnictl.md)
* [Use NNIBoard](docs/en_US/WebUI.md)
* [How to define search space](docs/en_US/SearchSpaceSpec.md)
* [How to define a trial](docs/en_US/Trials.md)
* [How to choose tuner/search-algorithm](docs/en_US/BuiltinTuner.md)
* [Config an experiment](docs/en_US/ExperimentConfig.md)
* [How to use annotation](docs/en_US/Trials.md#nni-python-annotation)
@@ -241,12 +241,12 @@ You can use these commands to get more information about the experiment
* [Run an experiment on local (with multiple GPUs)?](docs/en_US/LocalMode.md)
* [Run an experiment on multiple machines?](docs/en_US/RemoteMachineMode.md)
* [Run an experiment on OpenPAI?](docs/en_US/PaiMode.md)
* [Run an experiment on Kubeflow?](docs/en_US/KubeflowMode.md)
* [Try different tuners](docs/en_US/tuners.rst)
* [Try different assessors](docs/en_US/assessors.rst)
* [Implement a customized tuner](docs/en_US/CustomizeTuner.md)
* [Implement a customized assessor](docs/en_US/CustomizeAssessor.md)
* [Use Genetic Algorithm to find good model architectures for Reading Comprehension task](examples/trials/ga_squad/README.md)
## **Contribute**
@@ -255,9 +255,9 @@ This project welcomes contributions and suggestions, we use [GitHub issues](http
Issues with the **good first issue** label are simple and easy-to-start ones that we recommend new contributors to start with.
To set up an environment for NNI development, refer to the instructions: [Set up NNI developer environment](docs/en_US/SetupNniDeveloperEnvironment.md)
Before you start coding, review and get familiar with the NNI code contribution guideline: [Contributing](docs/en_US/Contributing.md)
The [How to Debug](docs/en_US/HowToDebug.md) guide is under construction; you are welcome to contribute questions or suggestions in this area.
@@ -10,7 +10,7 @@
NNI (Neural Network Intelligence) is a toolkit for automated machine learning (AutoML). It uses various tuning algorithms to search for the best neural network architecture and/or hyperparameters, and supports running on a single machine, multiple local machines, or in the cloud.
### **NNI [v0.7](https://github.com/Microsoft/nni/releases) has been released!**
<p align="center">
<a href="#nni-v05-has-been-released"><img src="docs/img/overview.svg" /></a>
@@ -98,78 +98,118 @@ NNI (Neural Network Intelligence) is a toolkit for automated machine learning (AutoML)
## **Installation and Verification**
On Windows in local mode, if this is the first time you use PowerShell to run scripts, run the following command once **with administrator privileges**:
```bash
Set-ExecutionPolicy -ExecutionPolicy Unrestricted
```
**Install through pip**
* Linux, macOS, and Windows (local mode) are currently supported, tested on Ubuntu 16.04 or higher, macOS 10.14.1, and Windows 10.1809. In a `python >= 3.5` environment, simply run `pip install` to complete the installation.
Linux and macOS
```bash
python3 -m pip install --upgrade nni
```
Windows
```bash
python -m pip install --upgrade nni
```
Note:
* `--user` can be added if you want to install NNI in your home directory, which does not require any special privileges.
* Currently NNI on Windows only supports local mode. Anaconda is highly recommended for installing NNI on Windows.
* If you encounter any error such as `Segmentation fault`, please refer to the [FAQ](docs/zh_CN/FAQ.md).
**Install through source code**
* Linux (Ubuntu 16.04 or higher), macOS (10.14.1), and Windows 10 (version 1809) in local mode are currently supported.
Linux and macOS
* Run the following commands in a `python >= 3.5` environment. Make sure `git` and `wget` are installed.
```bash
git clone -b v0.7 https://github.com/Microsoft/nni.git
cd nni
source install.sh
```
Windows
* Run the following commands in a `python >= 3.5` environment. Make sure `git` and `PowerShell` are installed.
```bash
git clone -b v0.7 https://github.com/Microsoft/nni.git
cd nni
powershell ./install.ps1
```
See [Install NNI](docs/zh_CN/Installation.md) for system requirements.
See [NNI Windows local mode](docs/zh_CN/WindowsLocalMode.md) for more information.
**Verify installation**
The following example experiment depends on TensorFlow. Make sure **TensorFlow** is installed before running it.
* Download the examples by cloning the source code.
```bash
git clone -b v0.7 https://github.com/Microsoft/nni.git
```
Linux and macOS
* Run the MNIST example.
```bash
nnictl create --config nni/examples/trials/mnist/config.yml
```
Windows
* Run the MNIST example.
```bash
nnictl create --config nni/examples/trials/mnist/config_windows.yml
```
* Wait for the message `INFO: Successfully started experiment!` in the command line output. This message indicates that the experiment has started successfully. Visit the experiment's web interface via the `Web UI url` shown in the command line output.
```text
INFO: Starting restful server...
INFO: Successfully started Restful server!
INFO: Setting local config...
INFO: Successfully set local config!
INFO: Starting experiment...
INFO: Successfully started experiment!
-----------------------------------------------------------------------
The experiment id is egchD4qy
The Web UI urls are: http://223.255.255.1:8080   http://127.0.0.1:8080
-----------------------------------------------------------------------
You can use these commands to get more information about the experiment
-----------------------------------------------------------------------
         commands                       description
1. nnictl experiment show        show the information of experiments
2. nnictl trial ls               list all of trial jobs
3. nnictl top                    monitor the status of running experiments
4. nnictl log stderr             show stderr log content
5. nnictl log stdout             show stdout log content
6. nnictl stop                   stop an experiment
7. nnictl trial kill             kill a trial job by id
8. nnictl --help                 get help information about nnictl
-----------------------------------------------------------------------
```
* Open the `Web UI url` in your browser to see detailed information about the experiment and all trial jobs, as shown below. See [here](docs/zh_CN/WebUI.md) for more pages.
<table style="border: none">
<th><img src="./docs/img/webui_overview_page.png" alt="drawing" width="395"/></th>
@@ -12,7 +12,7 @@
scikit-learn 0.20.0
pandas 0.23.4
lightgbm 2.2.2
NNI v0.7
This Dockerfile can be used as a reference for customization.
@@ -51,15 +51,15 @@
powershell
Python >= 3.5
Pip
Node.js
Yarn
tar
* **How to build**
The `version_os` parameter selects whether to build for 64-bit or 32-bit Windows.
```bash
powershell ./install.ps1 -version_os [64/32]
```
* **How to upload**
@@ -12,7 +12,7 @@ With the NFS setup (see below), trial code can share model weight through loadin
```yaml
tuner:
  codeDir: path/to/customer_tuner
  classFileName: customer_tuner.py
  className: CustomerTuner
  classArgs:
    ...
```
# NNI Annotation
## Overview
To improve user experience and reduce user effort, we designed an annotation grammar. Using NNI annotation, users can adapt their code to NNI just by adding some standalone annotation strings, which do not affect the execution of the original code.
Below is an example:
```python
'''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)'''
learning_rate = 0.1
```
The meaning of this example is that NNI will choose one of the values (0.1, 0.01, 0.001) to assign to the `learning_rate` variable. Specifically, the first line is an NNI annotation, which is a single string. It is followed by an assignment statement. What NNI does here is replace the right-hand value of that assignment statement according to the information provided by the annotation line.
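The effect of that rewrite can be sketched in plain Python (an illustrative stub only; in a real trial the chosen value is supplied by NNI's tuner, and `get_tuner_choice` is a hypothetical helper, not part of the NNI API):

```python
# Hypothetical stand-in for the tuner: in a real NNI trial, the chosen
# value arrives from the tuning algorithm, not from this function.
def get_tuner_choice(candidates):
    return candidates[0]  # assume the tuner picked the first candidate

# What the annotated assignment effectively becomes after NNI's rewrite:
learning_rate = get_tuner_choice([0.1, 0.01, 0.001])
print(learning_rate)  # 0.1
```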
######################
Blog
######################
.. toctree::
:maxdepth: 2
NAS Comparison<NASComparison>
@@ -10,7 +10,7 @@ Below we divide the introduction of the BOHB process into two parts:
### HB (Hyperband)
We follow Hyperband's way of choosing budgets and continue to use SuccessiveHalving. For more details, refer to [Hyperband in NNI](HyperbandAdvisor.md) and the [reference paper of Hyperband](https://arxiv.org/abs/1603.06560). This procedure is summarized by the pseudocode below.
![](../img/bohb_1.png)
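The SuccessiveHalving subroutine referenced above can also be sketched in a few lines of Python (a toy illustration, not NNI's actual implementation; `evaluate` stands in for training a configuration under a given budget):

```python
def successive_halving(configs, evaluate, min_budget=1, eta=3, rounds=3):
    """Keep the best 1/eta of configurations each round, giving survivors
    eta times more budget; a compact sketch of SuccessiveHalving."""
    budget = min_budget
    for _ in range(rounds):
        scores = [(evaluate(c, budget), c) for c in configs]
        scores.sort(reverse=True)  # higher score = better
        configs = [c for _, c in scores[: max(1, len(configs) // eta)]]
        budget *= eta  # survivors get a larger budget next round
    return configs

# Toy example: "configs" are numbers, evaluation just returns the number.
best = successive_halving(list(range(27)), lambda c, b: c, rounds=3)
print(best)  # [26]
```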
@@ -2,7 +2,7 @@
NNI provides state-of-the-art tuning algorithms as built-in tuners and makes them easy to use. Below is a brief summary of NNI's current built-in tuners:
Note: Click the **Tuner's name** to get a detailed description of the algorithm, and click the corresponding **Usage** to get the tuner's installation requirements, suggested scenarios, and usage examples. Here is an [article](./Blog/HPOComparison.md) comparing different tuners on several problems.
Currently we support the following algorithms:
@@ -366,4 +366,4 @@ advisor:
  min_budget: 1
  max_budget: 27
  eta: 3
```
\ No newline at end of file
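As a sanity check on these settings (`min_budget: 1`, `max_budget: 27`, `eta: 3`), a Hyperband-style advisor derives its number of brackets, s_max + 1, from how many times the minimum budget can be multiplied by eta without exceeding the maximum (a sketch based on the Hyperband paper's formulation, not NNI's code):

```python
# Count how many times min_budget can be scaled by eta within max_budget;
# a Hyperband-style advisor then runs s_max + 1 brackets.
min_budget, max_budget, eta = 1, 27, 3
s_max, b = 0, min_budget
while b * eta <= max_budget:
    b *= eta
    s_max += 1
print(s_max + 1)  # 4 brackets
```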
# Neural Architecture Search Comparison
*Posted by Anonymous Author*
Train and compare NAS (Neural Architecture Search) models, including AutoKeras, DARTS, ENAS, and NAO. Their source code links are below:
@@ -17,8 +17,6 @@ Their source code links are below:
To avoid over-fitting on **CIFAR-10**, we also compare the models on five other datasets: Fashion-MNIST, CIFAR-100, OUI-Adience-Age, ImageNet-10-1 (a subset of ImageNet), and ImageNet-10-2 (another subset of ImageNet). We simply sample a subset with 10 different labels from ImageNet to make ImageNet-10-1 or ImageNet-10-2.
| Dataset | Training Size | Number of Classes | Descriptions |
| :----------------------------------------------------------- | ------------- | ----------------- | ------------------------------------------------------------ |
| [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist) | 60,000 | 10 | T-shirt/top, trouser, pullover, dress, coat, sandal, shirt, sneaker, bag and ankle boot. |
@@ -38,7 +36,7 @@ For NAO, it requires too much computing resources, so we only use NAO-WS which p
For AutoKeras, we used version 0.2.18 because it was the latest version when we started the experiment.
## NAS Performance
| NAS | AutoKeras (%) | ENAS (macro) (%) | ENAS (micro) (%) | DARTS (%) | NAO-WS (%) |
| --------------- | :-----------: | :--------------: | :--------------: | :-------: | :--------: |
@@ -49,9 +47,7 @@ For AutoKeras, we used 0.2.18 version because it was the latest version when we
| ImageNet-10-1 | 61.80 | 77.07 | 79.80 | **80.48** | 77.20 |
| ImageNet-10-2 | 37.20 | 58.13 | 56.47 | 60.53 | **61.20** |
Unfortunately, we cannot reproduce all the results in the paper.
The best or average results reported in the paper:
@@ -59,8 +55,6 @@ The best or average results reported in the paper:
| --------- | ------------ | :--------------: | :--------------: | :------------: | :---------: |
| CIFAR-10 | 88.56 (best) | 96.13 (best) | 97.11 (best) | 97.17 (average) | 96.47 (best) |
For AutoKeras, it has relatively worse performance across all datasets due to its random factor on network morphism.
For ENAS, ENAS (macro) shows good results on OUI-Adience-Age and ENAS (micro) shows good results on CIFAR-10.
@@ -78,6 +72,3 @@ For NAO-WS, it shows good results in ImageNet-10-2 but it can perform very poorl
3. Pham, Hieu, et al. "Efficient Neural Architecture Search via Parameters Sharing." International Conference on Machine Learning (2018): 4092-4101.
4. Luo, Renqian, et al. "Neural Architecture Optimization." Neural Information Processing Systems (2018): 7827-7838.
# Hyperparameter Optimization Comparison
*Posted by Anonymous Author*
Comparison of Hyperparameter Optimization algorithms on several problems.
The Hyperparameter Optimization algorithms are listed below:
- [Random Search](../Builtin_Tuner.md#Random)
- [Grid Search](../Builtin_Tuner.md#Grid)
- [Evolution](../Builtin_Tuner.md#Evolution)
- [Anneal](../Builtin_Tuner.md#Anneal)
- [Metis](../Builtin_Tuner.md#MetisTuner)
- [TPE](../Builtin_Tuner.md#TPE)
- [SMAC](../Builtin_Tuner.md#SMAC)
- [HyperBand](../Builtin_Tuner.md#Hyperband)
- [BOHB](../Builtin_Tuner.md#BOHB)
All algorithms were run in the NNI local environment.
Machine Environment:
```
OS: Linux Ubuntu 16.04 LTS
CPU: Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz 2600 MHz
Memory: 112 GB
NNI Version: v0.7
NNI Mode(local|pai|remote): local
Python version: 3.6
Is conda or virtualenv used?: Conda
is running in docker?: no
```
## AutoGBDT Example
### Problem Description
A nonconvex problem on the hyperparameter search space of the [AutoGBDT](../gbdt_example.md) example.
### Search Space
```json
{
"num_leaves": {
"_type": "choice",
"_value": [10, 12, 14, 16, 18, 20, 22, 24, 28, 32, 48, 64, 96, 128]
},
"learning_rate": {
"_type": "choice",
"_value": [0.00001, 0.0001, 0.001, 0.01, 0.05, 0.1, 0.2, 0.5]
},
"max_depth": {
"_type": "choice",
"_value": [-1, 2, 3, 4, 5, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 28, 32, 48, 64, 96, 128]
},
"feature_fraction": {
"_type": "choice",
"_value": [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
},
"bagging_fraction": {
"_type": "choice",
"_value": [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
},
"bagging_freq": {
"_type": "choice",
"_value": [1, 2, 4, 8, 10, 12, 14, 16]
}
}
```
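The size of this discrete search space is simply the product of each hyperparameter's number of candidate values, which can be verified directly (a quick check script, not part of the example code):

```python
# Candidate counts per hyperparameter, read off the search space above:
# num_leaves, learning_rate, max_depth, feature_fraction,
# bagging_fraction, bagging_freq.
choices = [14, 8, 21, 8, 8, 8]
size = 1
for n in choices:
    size *= n
print(size)  # 1204224 distinct configurations
```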
The total search space size is 1,204,224. We set the maximum number of trials to 1,000 and the time limit to 48 hours.
### Results
| Algorithm | Best loss | Average of Best 5 Losses | Average of Best 10 Losses |
| ------------- | ------------ | ------------- | ------------- |
| Random Search |0.418854|0.420352|0.421553|
| Random Search |0.417364|0.420024|0.420997|
| Random Search |0.417861|0.419744|0.420642|
| Grid Search |0.498166|0.498166|0.498166|
| Evolution |0.409887|0.409887|0.409887|
| Evolution |0.413620|0.413875|0.414067|
| Evolution |0.409887|0.409887|0.409887|
| Anneal |0.414877|0.417289|0.418281|
| Anneal |0.409887|0.409887|0.410118|
| Anneal |0.413683|0.416949|0.417537|
| Metis |0.416273|0.420411|0.422380|
| Metis |0.420262|0.423175|0.424816|
| Metis |0.421027|0.424172|0.425714|
| TPE |0.414478|0.414478|0.414478|
| TPE |0.415077|0.417986|0.418797|
| TPE |0.415077|0.417009|0.418053|
| SMAC |**0.408386**|**0.408386**|**0.408386**|
| SMAC |0.414012|0.414012|0.414012|
| SMAC |**0.408386**|**0.408386**|**0.408386**|
| BOHB |0.410464|0.415319|0.417755|
| BOHB |0.418995|0.420268|0.422604|
| BOHB |0.415149|0.418072|0.418932|
| HyperBand |0.414065|0.415222|0.417628|
| HyperBand |0.416807|0.417549|0.418828|
| HyperBand |0.415550|0.415977|0.417186|
For Metis, only about 300 trials completed because it runs slowly due to the O(n^3) time complexity of its Gaussian Process.
## RocksDB Benchmark 'fillrandom' and 'readrandom'
### Problem Description
[DB_Bench](https://github.com/facebook/rocksdb/wiki/Benchmarking-tools) is the main tool used to benchmark [RocksDB](https://rocksdb.org/)'s performance. It has many hyperparameters to tune.
The performance of `DB_Bench` depends on the machine configuration and installation method. We run `DB_Bench` on a Linux machine and install RocksDB as a shared library.
#### Machine configuration
```
RocksDB: version 6.1
CPU: 6 * Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
CPUCache: 35840 KB
Keys: 16 bytes each
Values: 100 bytes each (50 bytes after compression)
Entries: 1000000
```
#### Storage performance
**Latency**: each IO request takes some time to complete; the average of these times is the average latency. Several factors affect it, including network connection quality and hard disk IO performance.
**IOPS**: **IO operations per second**, the number of _read or write operations_ that can be completed in one second.
**IO size**: **the size of each IO request**. Depending on the operating system and the application/service that needs disk access, it will issue requests to read or write a certain amount of data at a time.
**Throughput (MB/s) = average IO size × IOPS**
IOPS is related to online processing ability, so we use IOPS as the metric in our experiments.
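As a quick numeric illustration of the throughput relation above (the IO size and IOPS figures here are made up for the example):

```python
def throughput_mb_per_s(avg_io_size_kb, iops):
    # Throughput (MB/s) = average IO size * IOPS, converting KB to MB.
    return avg_io_size_kb * iops / 1024.0

# Hypothetical workload: 4 KB requests at 20,000 IOPS.
print(throughput_mb_per_s(4, 20000))  # 78.125 MB/s
```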
### Search Space
```json
{
"max_background_compactions": {
"_type": "quniform",
"_value": [1, 256, 1]
},
"block_size": {
"_type": "quniform",
"_value": [1, 500000, 1]
},
"write_buffer_size": {
"_type": "quniform",
"_value": [1, 130000000, 1]
},
"max_write_buffer_number": {
"_type": "quniform",
"_value": [1, 128, 1]
},
"min_write_buffer_number_to_merge": {
"_type": "quniform",
"_value": [1, 32, 1]
},
"level0_file_num_compaction_trigger": {
"_type": "quniform",
"_value": [1, 256, 1]
},
"level0_slowdown_writes_trigger": {
"_type": "quniform",
"_value": [1, 1024, 1]
},
"level0_stop_writes_trigger": {
"_type": "quniform",
"_value": [1, 1024, 1]
},
"cache_size": {
"_type": "quniform",
"_value": [1, 30000000, 1]
},
"compaction_readahead_size": {
"_type": "quniform",
"_value": [1, 30000000, 1]
},
"new_table_reader_for_compaction_inputs": {
"_type": "randint",
"_value": [1]
}
}
```
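Each `quniform` entry above draws a value uniformly in `[low, high]` and quantizes it to a multiple of `q`. A rough sketch of that sampling follows (based on NNI's search space specification; the exact clipping behavior in the real implementation may differ):

```python
import random

def sample_quniform(low, high, q, rng=random):
    # Uniform draw, quantized to the nearest multiple of q, then clipped.
    v = round(rng.uniform(low, high) / q) * q
    return min(max(v, low), high)

random.seed(0)
sample = sample_quniform(1, 256, 1)  # e.g. "max_background_compactions"
print(1 <= sample <= 256)  # True
```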
The search space is enormous (about 10^40), so we set the maximum number of trials to 100 to limit the computation resources.
### Results
#### 'fillrandom' Benchmark
| Model | Best IOPS (Repeat 1) | Best IOPS (Repeat 2) | Best IOPS (Repeat 3) |
| --------- | -------------------- | -------------------- | -------------------- |
| Random | 449901 | 427620 | 477174 |
| Anneal | 461896 | 467150 | 437528 |
| Evolution | 436755 | 389956 | 389790 |
| TPE | 378346 | 482316 | 468989 |
| SMAC | 491067 | 490472 | **491136** |
| Metis | 444920 | 457060 | 454438 |
Figure:
![](../../img/hpo_rocksdb_fillrandom.png)
#### 'readrandom' Benchmark
| Model | Best IOPS (Repeat 1) | Best IOPS (Repeat 2) | Best IOPS (Repeat 3) |
| --------- | -------------------- | -------------------- | -------------------- |
| Random | 2276157 | 2285301 | 2275142 |
| Anneal | 2286330 | 2282229 | 2284012 |
| Evolution | 2286524 | 2283673 | 2283558 |
| TPE | 2287366 | 2282865 | 2281891 |
| SMAC | 2270874 | 2284904 | 2282266 |
| Metis | **2287696** | 2283496 | 2277701 |
Figure:
![](../../img/hpo_rocksdb_readrandom.png)
# Automatically tuning SVD on NNI
In this tutorial, we first introduce the GitHub repo [Recommenders](https://github.com/Microsoft/Recommenders). It provides examples and best practices for building recommendation systems, delivered as Jupyter notebooks. It covers various models that are popular and widely deployed in recommendation systems. To provide a complete end-to-end experience, each example is presented through five key tasks, as shown below:
- [Prepare Data](https://github.com/Microsoft/Recommenders/blob/master/notebooks/01_prepare_data/README.md): Preparing and loading data for each recommender algorithm
- [Model](https://github.com/Microsoft/Recommenders/blob/master/notebooks/02_model/README.md): Building models using various classical and deep learning recommender algorithms such as Alternating Least Squares ([ALS](https://spark.apache.org/docs/latest/api/python/_modules/pyspark/ml/recommendation.html#ALS)) or eXtreme Deep Factorization Machines ([xDeepFM](https://arxiv.org/abs/1803.05170)).
- [Evaluate](https://github.com/Microsoft/Recommenders/blob/master/notebooks/03_evaluate/README.md): Evaluating algorithms with offline metrics
- [Model Select and Optimize](https://github.com/Microsoft/Recommenders/blob/master/notebooks/04_model_select_and_optimize/README.md): Tuning and optimizing hyperparameters for recommender models
- [Operationalize](https://github.com/Microsoft/Recommenders/blob/master/notebooks/05_operationalize/README.md): Operationalizing models in a production environment on Azure
The fourth task, tuning and optimizing a model's hyperparameters, is where NNI can help. To give a concrete example of NNI tuning the models in Recommenders, let's demonstrate with the [SVD](https://github.com/Microsoft/Recommenders/blob/master/notebooks/02_model/surprise_svd_deep_dive.ipynb) model and the Movielens100k data. There are more than 10 hyperparameters to tune in this model.
[This Jupyter notebook](https://github.com/Microsoft/Recommenders/blob/master/notebooks/04_model_select_and_optimize/nni_surprise_svd.ipynb), provided by Recommenders, is a detailed step-by-step tutorial for this example. It uses several of NNI's built-in tuning algorithms, including `Annealing`, `SMAC`, `Random Search`, `TPE`, `Hyperband`, `Metis`, and `Evolution`, and finally compares their results. Please go through this notebook to learn how to use NNI to tune the SVD model; you can then use NNI to tune other models in Recommenders.
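Independent of the notebook, the overall shape of an NNI trial for this setup can be sketched as follows. This is a hedged sketch, not the notebook's code: the file name `svd_trial.py` and the `merge_params` helper are hypothetical, and the NNI and `surprise` imports are guarded so the parameter-merging logic can run standalone.

```python
# Hypothetical trial script (svd_trial.py) for tuning surprise's SVD with NNI.
# Imports are guarded so the script degrades gracefully when NNI or surprise
# is not installed.

DEFAULTS = {"n_factors": 100, "n_epochs": 20, "lr_all": 0.005, "reg_all": 0.02}

def merge_params(defaults, tuned):
    """Overlay tuned hyperparameters on the defaults, rejecting unknown keys."""
    unknown = set(tuned) - set(defaults)
    if unknown:
        raise KeyError(f"unexpected hyperparameters: {sorted(unknown)}")
    return {**defaults, **tuned}

def main():
    try:
        import nni
        tuned = nni.get_next_parameter() or {}  # one sample from the tuner
    except Exception:  # NNI missing, or not running inside an experiment
        nni, tuned = None, {}
    params = merge_params(DEFAULTS, tuned)

    try:
        from surprise import SVD, Dataset, accuracy
        from surprise.model_selection import train_test_split
    except ImportError:
        return  # surprise not installed; nothing to train in this sketch

    data = Dataset.load_builtin("ml-100k")
    trainset, testset = train_test_split(data, test_size=0.25)
    algo = SVD(**params)
    algo.fit(trainset)
    rmse = accuracy.rmse(algo.test(testset), verbose=False)
    if nni is not None:
        # RMSE is an error, so the tuner's optimize_mode should be 'minimize'.
        nni.report_final_result(rmse)

# An NNI experiment would launch this script once per trial and call main().
```

In a real experiment, the trial command (e.g. `python3 svd_trial.py`) is set in the experiment configuration, and the tuner named there decides which parameter set each trial receives.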
When raising issues, please specify the following:
Provide PRs with appropriate tags for bug fixes or enhancements to the source code. Follow the correct naming conventions and code styles as you work, and try to address all code-review comments along the way.
To learn how to develop and debug the NNI source code, refer to the [How to set up NNI developer environment doc](./SetupNniDeveloperEnvironment.md) in the `docs` folder.
Similarly for [Quick Start](QuickStart.md). For everything else, refer to the [NNI Home page](http://nni.readthedocs.io).
A person looking to contribute can take up an issue by claiming it as a comment/
## Code Styles & Naming Conventions
* We follow [PEP8](https://www.python.org/dev/peps/pep-0008/) for Python code and naming conventions; do try to adhere to it when making a pull request or a change. Linters such as `flake8` or `pylint` can also help.
* We also follow the [NumPy Docstring Style](https://www.sphinx-doc.org/en/master/usage/extensions/example_numpy.html#example-numpy) for Python docstring conventions. During the [documentation building](Contributing.md#documentation), we use [sphinx.ext.napoleon](https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html) to generate Python API documentation from the docstrings.
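As a quick illustration of that docstring style (the function itself is a made-up example, not NNI code):

```python
def clip(value, low, high):
    """Clamp a value to the closed interval [low, high].

    Parameters
    ----------
    value : float
        The value to clamp.
    low : float
        Lower bound of the interval.
    high : float
        Upper bound of the interval.

    Returns
    -------
    float
        ``low`` if ``value < low``, ``high`` if ``value > high``,
        otherwise ``value``.
    """
    return max(low, min(value, high))
```

With `sphinx.ext.napoleon` enabled, the `Parameters` and `Returns` sections above are rendered as structured API documentation.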
## Documentation
Our documentation is built with [sphinx](http://sphinx-doc.org/), supporting [Markdown](https://guides.github.com/features/mastering-markdown/) and [reStructuredText](http://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html) formats. All our documentation is placed under [docs/en_US](https://github.com/Microsoft/nni/tree/master/docs).
For more detailed examples, you could see:
### Write a more advanced AutoML algorithm
The methods above are usually enough to write a general tuner. However, users may also want more methods, for example intermediate results and trials' states (e.g., the methods in the assessor), in order to build a more powerful AutoML algorithm. For that we have another concept called `advisor`, which directly inherits from `MsgDispatcherBase` in [`src/sdk/pynni/nni/msg_dispatcher_base.py`](https://github.com/Microsoft/nni/tree/master/src/sdk/pynni/nni/msg_dispatcher_base.py). Please refer to [here](CustomizeAdvisor.md) for how to write a customized advisor.
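To show the shape of such an advisor, here is a minimal, hedged skeleton. The handler names mirror the `handle_*` methods in `msg_dispatcher_base.py` as described above, but the stand-in base class and the metric payload fields are assumptions for illustration; a real advisor would inherit `MsgDispatcherBase` from `nni` instead:

```python
class MsgDispatcherBase:
    """Stand-in for nni's MsgDispatcherBase so this sketch runs standalone."""

class EarlyBestAdvisor(MsgDispatcherBase):
    """Toy advisor: records every intermediate result per trial and can
    report which trial currently looks best. Handler names and the payload
    shape are assumed for illustration."""

    def __init__(self):
        self.history = {}  # parameter_id -> list of reported metric values

    def handle_initialize(self, data):
        # 'data' would carry the search space when the experiment starts.
        self.search_space = data

    def handle_report_metric_data(self, data):
        # Assumed payload: {'parameter_id': ..., 'type': ..., 'value': ...}
        self.history.setdefault(data['parameter_id'], []).append(data['value'])

    def handle_trial_end(self, data):
        pass  # a real advisor could stop lagging trials or spawn new ones

    def current_best(self):
        """Trial whose latest metric is highest (maximization assumed)."""
        return max(self.history, key=lambda pid: self.history[pid][-1])
```

Because an advisor sees intermediate results and trial endings directly, logic like `current_best` can drive early stopping and generation of new trials in one place, which is exactly what separates it from a plain tuner.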