Unverified Commit 9fae194a authored by SparkSnail's avatar SparkSnail Committed by GitHub

Merge pull request #206 from microsoft/master

merge master
parents 8fe2588b 41e58703
@@ -18,7 +18,7 @@ NNI (Neural Network Intelligence) is a toolkit to help users run automated machi
The tool dispatches and runs trial jobs generated by tuning algorithms to search the best neural architecture and/or hyper-parameters in different environments like local machine, remote servers and cloud.
### **NNI [v1.0](https://github.com/Microsoft/nni/blob/master/docs/en_US/Release_v1.0.md) has been released! &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>**
<p align="center">
<a href="#nni-has-been-released"><img src="docs/img/overview.svg" /></a>
@@ -70,8 +70,8 @@ The tool dispatches and runs trial jobs generated by tuning algorithms to search
<ul>
<li><b>Examples</b></li>
<ul>
<li><a href="examples/trials/mnist-pytorch">MNIST-pytorch</a></li>
<li><a href="examples/trials/mnist">MNIST-tensorflow</a></li>
<li><a href="examples/trials/mnist-keras">MNIST-keras</a></li>
<li><a href="docs/en_US/TrialExample/GbdtExample.md">Auto-gbdt</a></li>
<li><a href="docs/en_US/TrialExample/Cifar10Examples.md">Cifar10-pytorch</a></li>
@@ -100,7 +100,7 @@ The tool dispatches and runs trial jobs generated by tuning algorithms to search
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#BOHB">BOHB</a></li>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#GPTuner">GP Tuner</a></li>
</ul>
<li><b>Tuner for <a href="docs/en_US/AdvancedFeature/GeneralNasInterfaces.md">NAS</a></b></li>
<ul>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#NetworkMorphism">Network Morphism</a></li>
<li><a href="examples/tuners/enas_nni/README.md">ENAS</a></li>
@@ -182,7 +182,7 @@ We encourage researchers and students leverage these projects to accelerate the
**Install through pip**
* We currently support Linux, MacOS and Windows (local, remote and pai mode). Ubuntu 16.04 or higher, MacOS 10.14.1 and Windows 10.1809 are tested and supported. Simply run the following `pip install` in an environment that has `python >= 3.5`.
Linux and MacOS
@@ -211,7 +211,7 @@ Linux and MacOS
* Run the following commands in an environment that has `python >= 3.5`, `git` and `wget`.
```bash
git clone -b v1.0 https://github.com/Microsoft/nni.git
cd nni
source install.sh
```
@@ -221,7 +221,7 @@ Windows
* Run the following commands in an environment that has `python >= 3.5`, `git` and `PowerShell`.
```bash
git clone -b v1.0 https://github.com/Microsoft/nni.git
cd nni
powershell -ExecutionPolicy Bypass -file install.ps1
```
@@ -237,7 +237,7 @@ The following example is an experiment built on TensorFlow. Make sure you have *
* Download the examples by cloning the source code.
```bash
git clone -b v1.0 https://github.com/Microsoft/nni.git
```
Linux and MacOS
@@ -345,18 +345,27 @@ Before providing your hacks, you can review the [Contributing Instruction](docs/
* [Implement customized TrainingService](docs/en_US/TrainingService/HowToImplementTrainingService.md)
## **External Repositories and References**
With authors' permission, we listed a set of NNI usage examples and relevant articles.
* ### **External Repositories** ###
  * Run [ENAS](examples/tuners/enas_nni/README.md) with NNI
  * Run [Neural Network Architecture Search](examples/trials/nas_cifar10/README.md) with NNI
  * [Automatic Feature Engineering](examples/trials/auto-feature-engineering/README.md) with NNI
  * [Hyperparameter Tuning for Matrix Factorization](https://github.com/microsoft/recommenders/blob/master/notebooks/04_model_select_and_optimize/nni_surprise_svd.ipynb) with NNI
* ### **Relevant Articles** ###
  * [Hyper Parameter Optimization Comparison](docs/en_US/CommunitySharings/HpoComparision.md)
  * [Neural Architecture Search Comparison](docs/en_US/CommunitySharings/NasComparision.md)
  * [Parallelizing a Sequential Algorithm TPE](docs/en_US/CommunitySharings/ParallelizingTpeSearch.md)
  * [Automatically tuning SVD with NNI](docs/en_US/CommunitySharings/RecommendersSvd.md)
  * [Automatically tuning SPTAG with NNI](docs/en_US/CommunitySharings/SptagAutoTune.md)
  * **Blog (in Chinese)** - [AutoML tools (Advisor, NNI and Google Vizier) comparison](http://gaocegege.com/Blog/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/katib-new#%E6%80%BB%E7%BB%93%E4%B8%8E%E5%88%86%E6%9E%90) by [@gaocegege](https://github.com/gaocegege) - the summary-and-analysis section on the design and implementation of kubeflow/katib
## **Feedback**
* Discuss on the NNI [Gitter](https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) channel.
* [File an issue](https://github.com/microsoft/nni/issues/new/choose) on GitHub.
* Ask a question with NNI tags on [Stack Overflow](https://stackoverflow.com/questions/tagged/nni?sort=Newest&edited=true).
## **License**
...
@@ -10,7 +10,7 @@
NNI (Neural Network Intelligence) is a toolkit for automated machine learning (AutoML). It uses various tuning algorithms to search for the best neural architecture and/or hyper-parameters, and supports environments such as local machine, remote multi-machine and cloud.
### **NNI [v1.0](https://github.com/Microsoft/nni/blob/master/docs/zh_CN/Release_v1.0.md) has been released! &nbsp;[<img width="48" src="docs/img/release_icon.png" />](#nni-released-reminder)**
<p align="center">
<a href="#nni-has-been-released"><img src="docs/img/overview.svg" /></a>
@@ -19,8 +19,10 @@ NNI (Neural Network Intelligence) is a toolkit for automated machine learning (AutoML)
<table>
<tbody>
<tr align="center" valign="bottom">
<td>
</td>
<td>
<b>Supported Frameworks and Libraries</b>
<img src="docs/img/bar.png"/>
</td>
<td>
@@ -34,26 +36,52 @@ NNI (Neural Network Intelligence) is a toolkit for automated machine learning (AutoML)
</tr>
</tr>
<tr valign="top">
<td align="center" valign="middle">
<b>Built-in</b>
</td>
<td>
<ul><li><b>Supported Frameworks</b></li>
<ul>
<li>PyTorch</li>
<li>TensorFlow</li>
<li>Keras</li>
<li>MXNet</li>
<li>Caffe2</li>
<a href="docs/zh_CN/SupportedFramework_Library.md">More...</a><br/>
</ul>
</ul>
<ul>
<li><b>Supported Libraries</b></li>
<ul>
<li>Scikit-learn</li>
<li>XGBoost</li>
<li>LightGBM</li>
<a href="docs/zh_CN/SupportedFramework_Library.md">More...</a><br/>
</ul>
</ul>
<ul>
<li><b>Examples</b></li>
<ul>
<li><a href="examples/trials/mnist-pytorch">MNIST-pytorch</a></li>
<li><a href="examples/trials/mnist">MNIST-tensorflow</a></li>
<li><a href="examples/trials/mnist-keras">MNIST-keras</a></li>
<li><a href="docs/zh_CN/TrialExample/GbdtExample.md">Auto-gbdt</a></li>
<li><a href="docs/zh_CN/TrialExample/Cifar10Examples.md">Cifar10-pytorch</a></li>
<li><a href="docs/zh_CN/TrialExample/SklearnExamples.md">Scikit-learn</a></li>
<a href="docs/zh_CN/SupportedFramework_Library.md">More...</a><br/>
</ul>
</ul>
</td>
<td align="left">
<a href="docs/zh_CN/Tuner/BuiltinTuner.md">Tuner</a>
<br />
<ul>
<li><b>General Tuner</b></li>
<ul>
<li><a href="docs/zh_CN/Tuner/BuiltinTuner.md#Random">Random Search</a></li>
<li><a href="docs/zh_CN/Tuner/BuiltinTuner.md#Evolution">Naïve Evolution</a></li>
</ul>
<li><b><a href="docs/zh_CN/CommunitySharings/HpoComparision.md">Hyperparameter Tuning</a> Tuner</b></li>
<ul>
<li><a href="docs/zh_CN/Tuner/BuiltinTuner.md#TPE">TPE</a></li>
<li><a href="docs/zh_CN/Tuner/BuiltinTuner.md#Anneal">Anneal</a></li>
<li><a href="docs/zh_CN/Tuner/BuiltinTuner.md#SMAC">SMAC</a></li>
@@ -63,14 +91,19 @@ NNI (Neural Network Intelligence) is a toolkit for automated machine learning (AutoML)
<li><a href="docs/zh_CN/Tuner/BuiltinTuner.md#MetisTuner">Metis Tuner</a></li>
<li><a href="docs/zh_CN/Tuner/BuiltinTuner.md#BOHB">BOHB</a></li>
<li><a href="docs/zh_CN/Tuner/BuiltinTuner.md#GPTuner">GP Tuner</a></li>
</ul>
<li><b><a href="docs/zh_CN/AdvancedFeature/GeneralNasInterfaces.md">NAS</a> Tuner</b></li>
<ul>
<li><a href="docs/zh_CN/Tuner/BuiltinTuner.md#NetworkMorphism">Network Morphism</a></li>
<li><a href="examples/tuners/enas_nni/README_zh_CN.md">ENAS</a></li>
</ul>
</ul>
<a href="docs/zh_CN/Assessor/BuiltinAssessor.md">Assessor</a>
<ul>
<ul>
<li><a href="docs/zh_CN/Assessor/BuiltinAssessor.md#Medianstop">Median Stop</a></li>
<li><a href="docs/zh_CN/Assessor/BuiltinAssessor.md#Curvefitting">Curve Fitting</a></li>
</ul>
</ul>
</td>
<td>
@@ -85,6 +118,33 @@ NNI (Neural Network Intelligence) is a toolkit for automated machine learning (AutoML)
</ul>
</td>
</tr>
<tr align="center" valign="bottom">
</td>
</tr>
<tr valign="top">
<td valign="middle">
<b>References</b>
</td>
<td style="border-top:#FF0000 solid 0px;">
<ul>
<li><a href="docs/zh_CN/sdk_reference.rst">Python API</a></li>
<li><a href="docs/zh_CN/Tutorial/AnnotationSpec.md">NNI Annotation</a></li>
<li><a href="docs/zh_CN/Tutorial/Installation.md">Supported OS</a></li>
</ul>
</td>
<td style="border-top:#FF0000 solid 0px;">
<ul>
<li><a href="docs/zh_CN/Tuner/CustomizeTuner.md">Customize a Tuner</a></li>
<li><a href="docs/zh_CN/Assessor/CustomizeAssessor.md">Customize an Assessor</a></li>
</ul>
</td>
<td style="border-top:#FF0000 solid 0px;">
<ul>
<li><a href="docs/zh_CN/TrainingService/SupportTrainingService.md">Supported Training Services</a></li>
<li><a href="docs/zh_CN/TrainingService/HowToImplementTrainingService.md">Implement a Training Service</a></li>
</ul>
</td>
</tr>
</tbody>
</table>
@@ -139,7 +199,7 @@ Linux and macOS
* Run the following commands in an environment that has `python >= 3.5`, and make sure `git` and `wget` are installed.
```bash
git clone -b v1.0 https://github.com/Microsoft/nni.git
cd nni
source install.sh
```
@@ -149,7 +209,7 @@ Windows
* Run the following commands in an environment that has `python >= 3.5`, and make sure `git` and `PowerShell` are installed.
```bash
git clone -b v1.0 https://github.com/Microsoft/nni.git
cd nni
powershell -ExecutionPolicy Bypass -file install.ps1
```
@@ -165,7 +225,7 @@ On Windows, refer to [NNI on Windows](docs/zh_CN/Tutorial/NniOnWindows.md)
* Download the examples by cloning the source code.
```bash
git clone -b v1.0 https://github.com/Microsoft/nni.git
```
Linux and macOS
@@ -227,64 +287,75 @@ You can use these commands to get more information about the experiment
* [NNI Overview](docs/zh_CN/Overview.md)
* [Quick Start](docs/zh_CN/Tutorial/QuickStart.md)
* [Examples](docs/zh_CN/examples.rst)
* [References](docs/zh_CN/reference.rst)
* [WebUI Tutorial](docs/zh_CN/Tutorial/WebUI.md)
* [Contributing](docs/zh_CN/Tutorial/Contributing.md)
## **Get Started**
* [Install NNI](docs/zh_CN/Tutorial/Installation.md)
* [Use the command line tool nnictl](docs/zh_CN/Tutorial/Nnictl.md)
* [Implement a Trial](docs/zh_CN/TrialExample/Trials.md)
* [Configure an Experiment](docs/zh_CN/Tutorial/ExperimentConfig.md)
* [Customize a search space](docs/zh_CN/Tutorial/SearchSpaceSpec.md)
* [Choose a Tuner / search algorithm](docs/zh_CN/Tuner/BuiltinTuner.md)
* [Use Annotation](docs/zh_CN/TrialExample/Trials.md#nni-python-annotation)
* [Use NNIBoard](docs/zh_CN/Tutorial/WebUI.md)
## **Tutorials**
* [Run an Experiment on OpenPAI](docs/zh_CN/TrainingService/PaiMode.md)
* [Run an Experiment on Kubeflow](docs/zh_CN/TrainingService/KubeflowMode.md)
* [Run an Experiment on the local machine (with multiple GPUs)](docs/zh_CN/TrainingService/LocalMode.md)
* [Run an Experiment on multiple machines](docs/zh_CN/TrainingService/RemoteMachineMode.md)
* [Try different Tuners](docs/zh_CN/Tuner/BuiltinTuner.md)
* [Try different Assessors](docs/zh_CN/Assessor/BuiltinAssessor.md)
* [Implement a customized Tuner](docs/zh_CN/Tuner/CustomizeTuner.md)
* [Implement a customized Assessor](docs/zh_CN/Assessor/CustomizeAssessor.md)
* [Implement an NNI training service](docs/zh_CN/TrainingService/HowToImplementTrainingService.md)
* [Find a good model for reading comprehension with the evolution algorithm](docs/zh_CN/TrialExample/SquadEvolutionExamples.md)
* [Advanced neural architecture search](docs/zh_CN/AdvancedFeature/AdvancedNas.md)
## **Contributing**
Contributions to this project are very welcome in many forms, for example:
* [Report a bug](https://github.com/microsoft/nni/issues/new/choose)
* [Request a new feature](https://github.com/microsoft/nni/issues/new/choose)
* Find issues labeled ['good first issue'](https://github.com/Microsoft/nni/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) or ['help-wanted'](https://github.com/microsoft/nni/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22). These are simple issues that new contributors can start with.
Before writing code, you can read the [contribution guidelines](docs/zh_CN/Tutorial/Contributing.md) to learn more. In addition, the following documents are provided:
* [Set up the NNI developer environment](docs/zh_CN/Tutorial/SetupNniDeveloperEnvironment.md)
* [How to debug](docs/zh_CN/Tutorial/HowToDebug.md)
* [Customize an Advisor](docs/zh_CN/Tuner/CustomizeAdvisor.md)
* [Customize a Tuner](docs/zh_CN/Tuner/CustomizeTuner.md)
* [Implement a customized training service](docs/zh_CN/TrainingService/HowToImplementTrainingService.md)
## **External Repositories and References**
With the authors' permission, here are some NNI usage examples and relevant documents.
* ### **External Repositories**
  * Run [ENAS](examples/tuners/enas_nni/README_zh_CN.md) with NNI
  * Run [Neural Network Architecture Search](examples/trials/nas_cifar10/README_zh_CN.md) with NNI
  * [Automatic Feature Engineering](examples/trials/auto-feature-engineering/README_zh_CN.md) with NNI
  * [Hyperparameter Tuning for Matrix Factorization](https://github.com/microsoft/recommenders/blob/master/notebooks/04_model_select_and_optimize/nni_surprise_svd.ipynb) with NNI
* ### **Relevant Articles**
  * [Hyper-parameter optimization comparison](docs/zh_CN/CommunitySharings/HpoComparision.md)
  * [Neural architecture search comparison](docs/zh_CN/CommunitySharings/NasComparision.md)
  * [Parallelizing a sequential algorithm: TPE](docs/zh_CN/CommunitySharings/ParallelizingTpeSearch.md)
  * [Automatically tuning SVD with NNI](docs/zh_CN/CommunitySharings/RecommendersSvd.md)
  * [Automatically tuning SPTAG with NNI](docs/zh_CN/CommunitySharings/SptagAutoTune.md)
  * **Blog** - [Comparison of AutoML tools (Advisor, NNI and Google Vizier)](http://gaocegege.com/Blog/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/katib-new#%E6%80%BB%E7%BB%93%E4%B8%8E%E5%88%86%E6%9E%90) by [@gaocegege](https://github.com/gaocegege) - the summary-and-analysis section on the design and implementation of kubeflow/katib
## **Feedback**
* Discuss on the NNI [Gitter](https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) channel.
* [File an issue on GitHub](https://github.com/microsoft/nni/issues/new/choose).
* Ask a question with the nni tag on [Stack Overflow](https://stackoverflow.com/questions/tagged/nni?sort=Newest&edited=true).
## **License**
...
@@ -68,8 +68,8 @@ RUN python3 -m pip --no-cache-dir install Keras==2.1.6
#
# PyTorch
#
RUN python3 -m pip --no-cache-dir install torch==1.2.0
RUN python3 -m pip install torchvision==0.4.0
#
# sklearn 0.20.0
...
@@ -40,7 +40,7 @@ for (dirpath, dirnames, filenames) in walk('./nni'):
    files = [path.normpath(path.join(dirpath, filename)) for filename in filenames]
    data_files.append((path.normpath(dirpath), files))
with open('../../README.md', 'r', encoding="utf-8") as fh:
    long_description = fh.read()
setuptools.setup(
...
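The `encoding="utf-8"` change above matters because `open()` without an explicit encoding falls back to the platform default (e.g. cp1252 on Windows), which can fail on a UTF-8 README containing non-ASCII characters. A minimal self-contained sketch of the same pattern (file path and content here are hypothetical, not from the repo):

```python
import os
import tempfile

# Write a sample file containing non-ASCII text with an explicit encoding.
text = "NNI 是自动机器学习的工具包"
path = os.path.join(tempfile.mkdtemp(), "README.md")
with open(path, "w", encoding="utf-8") as fh:
    fh.write(text)

# Reading it back with an explicit encoding is portable across platforms;
# relying on the locale default may raise UnicodeDecodeError on some systems.
with open(path, "r", encoding="utf-8") as fh:
    long_description = fh.read()

print(long_description == text)  # True
```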
# NNI Programming Interface for Neural Architecture Search (NAS)
_*This is an **experimental feature**. Currently, only the general NAS programming interface is implemented; weight sharing will be supported in upcoming releases._
Automatic neural architecture search plays an increasingly important role in finding better models. Recent research has proved the feasibility of automatic NAS and has found models that beat manually designed and tuned ones. Representative works include [NASNet][2], [ENAS][1], [DARTS][3], [Network Morphism][4], and [Evolution][5], and new innovations keep emerging. However, it takes great effort to implement those algorithms, and it is hard to reuse the code base of one algorithm to implement another.
...
# Automatically tuning SPTAG with NNI
[SPTAG](https://github.com/microsoft/SPTAG) (Space Partition Tree And Graph) is a library for large scale vector approximate nearest neighbor search scenario released by [Microsoft Research (MSR)](https://www.msra.cn/) and [Microsoft Bing](https://www.bing.com/).
This library assumes that the samples are represented as vectors and that the vectors can be compared by L2 or cosine distance. For a query vector, it returns the vectors with the smallest L2 or cosine distance to it.
SPTAG provides two methods: kd-tree and relative neighborhood graph (SPTAG-KDT) and balanced k-means tree and relative neighborhood graph (SPTAG-BKT). SPTAG-KDT is advantageous in index building cost, and SPTAG-BKT is advantageous in search accuracy in very high-dimensional data.
In SPTAG, there are tens of parameters that can be tuned for specific scenarios or datasets. NNI is a great tool for automatically tuning those parameters. The authors of SPTAG tried NNI for auto tuning and easily found well-performing parameters, so they shared their practice of tuning SPTAG with NNI in their document [here](https://github.com/microsoft/SPTAG/blob/master/docs/Parameters.md). Please refer to it for a detailed tutorial.
\ No newline at end of file
@@ -8,6 +8,7 @@ In addition to the official tutorials and examples, we encourage community contri
    :maxdepth: 2

    NNI in Recommenders <RecommendersSvd>
    Automatically tuning SPTAG with NNI <SptagAutoTune>
    Neural Architecture Search Comparison <NasComparision>
    Hyper-parameter Tuning Algorithm Comparison <HpoComparision>
    Parallelizing Optimization for TPE <ParallelizingTpeSearch>
<p align="center">
<img src=".././img/release-1-title-1.png" width="100%" />
</p>
From September 2018 to September 2019, and we are still moving on…
**Great news!**&nbsp;&nbsp;With the tags of **Scalability** and **Ease of Use**, NNI v1.0 is coming. Building on its various [tuning algorithms](./Tuner/BuiltinTuner.md), NNI supports hyperparameter tuning, neural architecture search and automatic feature engineering, which is exciting news for algorithm engineers. Beyond that, NNI v1.0 brings many improvements in tuning-algorithm optimization, [WebUI simplicity and intuitiveness](./Tutorial/WebUI.md) and [platform diversification](./TrainingService/SupportTrainingService.md). NNI has grown into a more intelligent automated machine learning (AutoML) toolkit.
<br/>
<br/>
<br/>
<p align="center">
<img src=".././img/nni-1.png" width="80%" />
</p>
<br />
<br />
<p align="center">
<img src=".././img/release-1-title-2.png" width="100%" />
</p>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Step one**: Start with the [installation tutorial](./Tutorial/Installation.md) and install NNI v1.0 first.<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Step two**: Find a "Hello world" example, follow the [quick start tutorial](./Tutorial/QuickStart.md) and have a quick start.<br>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Step three**: Get familiar with the [WebUI tutorial](./Tutorial/WebUI.md) and let NNI better assist with your tuning tour.<br>
The fully automated tool greatly improves the efficiency of the tuning process. For more details about the v1.0 updates, refer to [Release 1.0](https://github.com/microsoft/nni/releases). For our forward plan, see the [Roadmap](https://github.com/microsoft/nni/wiki/Roadmap). We also welcome more contributors to join us; there are many ways to participate, see [How to contribute](./Tutorial/Contributing.md) for details.
\ No newline at end of file
@@ -56,7 +56,7 @@ The hyper-parameters used in `Step 1.2 - Get predefined parameters` is defined i
"learning_rate":{"_type":"uniform","_value":[0.0001, 0.1]}
}
```
Refer to [define search space](../Tutorial/SearchSpaceSpec.md) to learn more about search space.
>Step 3 - Define Experiment
...
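As an illustration of what the `uniform` entry above declares, a tuner samples `learning_rate` uniformly from `[low, high]`. The sketch below shows these semantics only; it is not NNI's actual sampling code:

```python
import random

# Hypothetical search space matching the JSON snippet above.
search_space = {"learning_rate": {"_type": "uniform", "_value": [0.0001, 0.1]}}

def sample(space):
    """Draw one configuration from a search space of `uniform` entries."""
    params = {}
    for name, spec in space.items():
        if spec["_type"] == "uniform":
            low, high = spec["_value"]
            params[name] = random.uniform(low, high)
    return params

params = sample(search_space)
print(0.0001 <= params["learning_rate"] <= 0.1)  # True
```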
@@ -55,6 +55,32 @@ Compared with [LocalMode](LocalMode.md) and [RemoteMachineMode](RemoteMachineMod
* Optional key. Set the shmMB configuration of OpenPAI; it sets the shared memory for one task in the task role.
* authFile
* Optional key. Set the auth file path for a private registry when using PAI mode ([reference](https://github.com/microsoft/pai/blob/2ea69b45faa018662bc164ed7733f6fdbb4c42b3/docs/faq.md#q-how-to-use-private-docker-registry-job-image-when-submitting-an-openpai-job)). Prepare the authFile and simply provide its local path; NNI will upload this file to HDFS for you.
* portList
* Optional key. Set the portList configuration of OpenPAI, it specifies a list of port used in container, [Refer](https://github.com/microsoft/pai/blob/b2324866d0280a2d22958717ea6025740f71b9f0/docs/job_tutorial.md#specification).
The config schema in NNI is shown below:
```
portList:
- label: test
beginAt: 8080
portNumber: 2
```
Suppose you want to launch TensorBoard in the MNIST example using one of these ports. The first step is to write a wrapper script `launch_pai.sh` around `mnist.py`.
```bash
export TENSORBOARD_PORT=PAI_PORT_LIST_${PAI_CURRENT_TASK_ROLE_NAME}_0_tensorboard
tensorboard --logdir . --port ${!TENSORBOARD_PORT} &
python3 mnist.py
```
The portList section of the config file should be filled in as follows:
```yaml
trial:
command: bash launch_pai.sh
portList:
- label: tensorboard
beginAt: 0
portNumber: 1
```
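A trial process can also read the allocated port directly, since OpenPAI exposes it through an environment variable named after the task role and port label, the same name the `launch_pai.sh` wrapper builds. A small sketch (the role name `worker` and port value `6006` are hypothetical; on OpenPAI the platform sets these variables):

```python
import os

# Hypothetical values for illustration; on OpenPAI the platform sets these.
os.environ["PAI_CURRENT_TASK_ROLE_NAME"] = "worker"
os.environ["PAI_PORT_LIST_worker_0_tensorboard"] = "6006"

# Build the variable name the same way the launch_pai.sh wrapper does.
role = os.environ["PAI_CURRENT_TASK_ROLE_NAME"]
port = int(os.environ[f"PAI_PORT_LIST_{role}_0_tensorboard"])
print(port)  # 6006
```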
Once you complete the NNI experiment config file and save it (for example, as `exp_pai.yml`), run the following command:
```
...
...@@ -33,4 +33,4 @@ abstract class TrainingService {
}
```
The parent class of TrainingService has a few abstract functions; users need to inherit the parent class and implement all of them.
For more information about how to write your own TrainingService, please refer to [this guide](https://github.com/microsoft/nni/blob/master/docs/en_US/TrainingService/HowToImplementTrainingService.md).
...@@ -3,4 +3,6 @@ Grid Search on NNI
## Grid Search
Grid Search performs an exhaustive search through a manually specified subset of the hyperparameter space defined in the search space file.

Note that the only acceptable types of search space are `choice`, `quniform`, and `randint`.
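To illustrate why only these types are acceptable, here is a minimal sketch of how a grid could be enumerated from such a search space. This is a simplified illustration, not NNI's actual implementation, and it assumes the two-element `[lower, upper)` form for `randint`:

```python
from itertools import product

def expand(spec):
    # Expand one search-space entry into its finite list of grid values.
    t, v = spec["_type"], spec["_value"]
    if t == "choice":
        return list(v)
    if t == "quniform":          # [low, high, q]: low, low+q, ... capped at high
        low, high, q = v
        vals, x = [], low
        while x <= high:
            vals.append(x)
            x += q
        return vals
    if t == "randint":           # assumed [lower, upper): every integer in range
        lower, upper = v
        return list(range(lower, upper))
    raise ValueError(f"grid search does not support type: {t}")

def grid(search_space):
    # Yield every point in the Cartesian product of the expanded axes.
    names = list(search_space)
    for combo in product(*(expand(search_space[n]) for n in names)):
        yield dict(zip(names, combo))

space = {
    "batch_size": {"_type": "choice", "_value": [16, 32]},
    "hidden_size": {"_type": "quniform", "_value": [64, 256, 64]},
}
points = list(grid(space))  # 2 batch sizes x 4 hidden sizes = 8 grid points
```

Continuous types such as `uniform` have no finite set of values to enumerate, which is why they cannot be used with Grid Search.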
# Experiment config reference
A config file is needed when creating an experiment. The path of the config file is provided to `nnictl`.
The config file is in YAML format.
This document describes the rules for writing the config file, and provides some examples and templates.
- [Experiment config reference](#Experiment-config-reference)
  - [Template](#Template)
...@@ -519,6 +519,10 @@ machineList:
__azureShare__ is the share of the Azure file storage.
* __uploadRetryCount__
If uploading files to Azure storage fails, NNI will retry the upload; this field specifies the number of re-upload attempts.
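The retry behavior can be pictured with a short sketch. This is a hypothetical helper illustrating the semantics, not NNI's actual upload code, and treating the count as the number of re-attempts after the first try is an assumption:

```python
import time

def upload_with_retry(upload, retry_count: int, delay: float = 1.0):
    # One initial attempt plus up to `retry_count` re-uploads on failure.
    for attempt in range(1 + retry_count):
        try:
            return upload()
        except OSError:
            if attempt == retry_count:
                raise  # all re-upload attempts exhausted
            time.sleep(delay)
```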
* __paiConfig__
  * __userName__
......
...@@ -27,9 +27,8 @@ All types of sampling strategies and their parameter are listed here:
* `{"_type": "choice", "_value": options}`
* Which means the variable's value is one of the options. Here `options` should be a list of numbers or a list of strings. Using arbitrary objects as members of this list (such as sublists, a mixture of numbers and strings, or null values) should work in most cases, but may trigger undefined behavior.
* `options` can also be a nested sub-search-space, which takes effect only when the corresponding element is chosen. The variables in such a sub-search-space can be seen as conditional variables. Here is a simple [example of a nested search space definition](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-nested-search-space/search_space.json). If an element in the options list is a dict, it is a sub-search-space, and for our built-in tuners you have to add a key `_name` in this dict, which helps you identify which element is chosen. Accordingly, here is a [sample](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-nested-search-space/sample.json) of what users can receive from NNI with a nested search space definition. The tuners that support nested search spaces are as follows:
- Random Search
- TPE
......
# NNI Programming Interface for Neural Architecture Search (NAS)
*This is an **experimental feature**. Currently, only the general NAS programming interface is implemented. Weight sharing will be supported in later releases.*
Automatic neural architecture search (NAS) plays an increasingly important role in finding better models. Recent research has proved the feasibility of automatic NAS and has discovered models that outperform manually designed and tuned ones. Representative algorithms include [NASNet](https://arxiv.org/abs/1707.07012), [ENAS](https://arxiv.org/abs/1802.03268), [DARTS](https://arxiv.org/abs/1806.09055), [Network Morphism](https://arxiv.org/abs/1806.10282), and [Evolution](https://arxiv.org/abs/1703.01041), and new algorithms keep emerging. However, implementing these algorithms takes considerable effort, and it is hard to reuse the code base of one algorithm to implement another.
...@@ -125,7 +125,7 @@ for _ in range(num):
***oneshot_mode***: Follows the training approach described in [this paper](http://proceedings.mlr.press/v80/bender18a/bender18a.pdf). Different from enas_mode, which trains the full graph by training a large number of subgraphs, in oneshot_mode the full graph is built and dropout is added to the candidate inputs and to the outputs of the candidate operations. It is then trained like any other deep learning model. [Detailed description](#OneshotMode). (Currently only supported in TensorFlow.)
To use oneshot_mode, the fields below should be added to the `trial` section of the config. In this mode, no tuner is actually used; you only need to add an arbitrary tuner to the config file. Moreover, `nni.training_update` is not needed, since no update is required during training.
```diff
trial:
...@@ -139,7 +139,7 @@ trial:
***darts_mode***: Follows the training approach described in [this paper](https://arxiv.org/abs/1806.09055). It is similar to oneshot_mode, with two differences: first, darts_mode only adds architecture weights to the outputs of the candidate operations; second, it trains the model weights and the architecture weights in an interleaved manner. [Detailed description](#DartsMode).
To use darts_mode, the fields below should be added to the `trial` section of the config. In this mode, no tuner is actually used; you only need to add an arbitrary tuner to the config file.
```diff
trial:
...@@ -166,9 +166,9 @@ for _ in range(num):
### enas_mode
In enas_mode, the compiled trial code builds the full graph (rather than a subgraph). It receives a chosen architecture, trains that architecture on the full graph for a mini-batch, and then requests another architecture. This is supported by [NNI multi-phase experiments](./MultiPhase.md).
Specifically, trials using TensorFlow create and use TensorFlow variables as signals, and use TensorFlow conditional functions to control the search space (the full graph) for flexibility, which means the graph can become different subgraphs depending on these signals. [Here](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-nas/enas_mode) is an example of enas_mode.
<a name="OneshotMode"></a>
...@@ -178,7 +178,7 @@ for _ in range(num):
![](../../img/oneshot_mode.png)
As suggested in the [paper](http://proceedings.mlr.press/v80/bender18a/bender18a.pdf), a dropout method should be applied to the inputs of each layer. With 0 < r < 1 being a hyperparameter of the model (default 0.01) and k being the number of optional inputs of a layer, the dropout rate is set to r^(1/k). The higher the fan-in, the more likely each individual input is to be dropped; however, the probability of dropping all optional inputs of a layer is constant regardless of the fan-in. Suppose r = 0.05. If a layer has k = 2 optional inputs, each is dropped independently with probability 0.05^(1/2) ≈ 0.22, i.e. kept with probability 0.78. If a layer has k = 7 optional inputs, each is dropped independently with probability 0.05^(1/7) ≈ 0.65, i.e. kept with probability 0.35. In both cases, the probability of dropping all optional inputs is 5%. The outputs of candidate operations are dropped in the same way. [Here](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-nas/oneshot_mode) is an example of oneshot_mode.
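The arithmetic above can be checked with a few lines of Python:

```python
# Verify the drop-rate math: with per-input drop probability r**(1/k),
# the chance of dropping all k optional inputs stays r regardless of fan-in.
r = 0.05
for k in (2, 7):
    p_drop = r ** (1 / k)          # ~0.22 for k=2, ~0.65 for k=7
    p_all = p_drop ** k            # probability that every input is dropped
    assert abs(p_all - r) < 1e-12  # ~0.05 in both cases
```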
<a name="DartsMode"></a>
...@@ -188,7 +188,7 @@ for _ in range(num):
![](../../img/darts_mode.png)
In `nni.training_update`, TensorFlow's MomentumOptimizer trains the architecture weights using the passed `loss` and `feed_dict`. [Here](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-nas/darts_mode) is an example of darts_mode.
### [**TODO**] Multiple trial jobs for One-Shot NAS
......
...@@ -2,7 +2,7 @@
*Anonymous author*
A comparison of hyperparameter optimization (HPO) algorithms on several problems.
The hyperparameter optimization algorithms compared are as follows:
......
# Automatically tune SPTAG with NNI

[SPTAG](https://github.com/microsoft/SPTAG) (Space Partition Tree And Graph) is a tool for large-scale vector nearest-neighbor search, released jointly by [Microsoft Research (MSR)](https://www.msra.cn/) and the [Microsoft Bing team](https://www.bing.com/).

This tool assumes that samples can be represented as vectors and compared by L2 or cosine distance. Given a query vector, it returns a set of vectors with the smallest L2 or cosine distance to it. SPTAG provides two methods: kd-tree with a relative neighborhood graph (SPTAG-KDT), and balanced k-means tree with a relative neighborhood graph (SPTAG-BKT). SPTAG-KDT is better in index-building efficiency, while SPTAG-BKT is better in search accuracy for high-dimensional data.

In SPTAG, there are dozens of parameters that can be tuned for a specific scenario or dataset, and NNI is a great tool for automating this tuning. The authors of SPTAG tried NNI for auto tuning, easily found well-performing parameter combinations, and shared the results in the SPTAG [documentation](https://github.com/microsoft/SPTAG/blob/master/docs/Parameters.md). Please refer to it for a detailed tutorial.