Unverified commit 704b50e2 authored by SparkSnail, committed by GitHub

Merge pull request #200 from microsoft/master

merge master
parents 755ac5f0 3a6d1372
@@ -10,10 +10,10 @@
NNI (Neural Network Intelligence) is a toolkit for automatic machine learning (AutoML). It uses various tuning algorithms to search for the best neural network architecture and/or hyperparameters, and supports different training environments such as a single machine, multiple local machines, and the cloud.

### **NNI [v0.9](https://github.com/Microsoft/nni/releases) has been released! &nbsp;[<img width="48" src="docs/img/release_icon.png" />](#nni-released-reminder)**
<p align="center">
<a href="#nni-has-been-released"><img src="docs/img/overview.svg" /></a>
</p>
<table>
@@ -28,11 +28,11 @@ NNI (Neural Network Intelligence) is a toolkit for automatic machine learning (AutoML)
<img src="docs/img/bar.png"/>
</td>
<td>
<b>Training Platforms</b>
<img src="docs/img/bar.png"/>
</td>
</tr>
</tr>
<tr valign="top">
<td>
<ul>
@@ -46,36 +46,42 @@ NNI (Neural Network Intelligence) is a toolkit for automatic machine learning (AutoML)
<li>Theano</li>
</ul>
</td>
<td align="left">
<a href="docs/en_US/Tuner/BuiltinTuner.md">Tuner</a>
<br />
<ul>
<b style="margin-left:-20px">General Tuner</b>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#Random">Random Search</a></li>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#Evolution">Naïve Evolution</a></li>
<b style="margin-left:-20px">Hyperparameter Tuner</b>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#TPE">TPE</a></li>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#Anneal">Anneal</a></li>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#SMAC">SMAC</a></li>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#Batch">Batch</a></li>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#GridSearch">Grid Search</a></li>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#Hyperband">Hyperband</a></li>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#MetisTuner">Metis Tuner</a></li>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#BOHB">BOHB</a></li>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#GPTuner">GP Tuner</a></li>
<b style="margin-left:-20px">网络结构 Tuner</b>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#NetworkMorphism">Network Morphism</a></li>
<li><a href="examples/tuners/enas_nni/README.md">ENAS</a></li>
</ul>
<a href="docs/en_US/Assessor/BuiltinAssessor.md">Assessor</a>
<ul>
<li><a href="docs/en_US/Assessor/BuiltinAssessor.md#Medianstop">Median Stop</a></li>
<li><a href="docs/en_US/Assessor/BuiltinAssessor.md#Curvefitting">Curve Fitting</a></li>
</ul>
</td>
<td>
<ul>
<li><a href="docs/en_US/TrainingService/LocalMode.md">Local Machine</a></li>
<li><a href="docs/en_US/TrainingService/RemoteMachineMode.md">Remote Machines</a></li>
<li><b>Kubernetes-based Platforms</b></li>
<ul><li><a href="docs/en_US/TrainingService/PaiMode.md">OpenPAI</a></li>
<li><a href="docs/en_US/TrainingService/KubeflowMode.md">Kubeflow</a></li>
<li><a href="docs/en_US/TrainingService/FrameworkControllerMode.md">基于 Kubernetes(AKS 等)的 FrameworkController</a></li>
</ul>
</ul>
</td>
</tr>
@@ -122,7 +128,7 @@ python -m pip install --upgrade nni
* If you want to install NNI in your home directory, add `--user`; this does not require any special privileges.
* Currently, NNI on Windows supports local, remote, and OpenPAI modes. Using Anaconda or Miniconda is highly recommended for installing NNI on Windows.
* If you run into any error such as `Segmentation fault`, please refer to the [FAQ](docs/zh_CN/Tutorial/FAQ.md).

**Install through source code**
@@ -133,7 +139,7 @@ Linux and macOS
*`python >= 3.5` 的环境中运行命令: `git``wget`,确保安装了这两个组件。 *`python >= 3.5` 的环境中运行命令: `git``wget`,确保安装了这两个组件。
```bash ```bash
git clone -b v0.9 https://github.com/Microsoft/nni.git
cd nni
source install.sh
```
@@ -143,14 +149,14 @@ Windows
*`python >=3.5` 的环境中运行命令: `git``PowerShell`,确保安装了这两个组件。 *`python >=3.5` 的环境中运行命令: `git``PowerShell`,确保安装了这两个组件。
```bash ```bash
git clone -b v0.8 https://github.com/Microsoft/nni.git git clone -b v0.9 https://github.com/Microsoft/nni.git
cd nni cd nni
powershell -ExecutionPolicy Bypass -file install.ps1 powershell -ExecutionPolicy Bypass -file install.ps1
``` ```
Refer to [Install NNI](docs/zh_CN/Tutorial/Installation.md) for system requirements.

For NNI on Windows, refer to [NNI on Windows](docs/zh_CN/Tutorial/NniOnWindows.md).

**Verify installation**
@@ -159,7 +165,7 @@ For NNI on Windows, refer to [NNI on Windows](docs/zh_CN/NniOnWindows.md).
* Download the examples by cloning the source code.

```bash
git clone -b v0.9 https://github.com/Microsoft/nni.git
```

Linux and macOS
@@ -207,7 +213,7 @@ You can use these commands to get more information about the experiment
-----------------------------------------------------------------------
```
* Open the `Web UI url` in your browser; you can see detailed information about the experiment and all trials, as shown below. See [here](docs/zh_CN/Tutorial/WebUI.md) for more pages.

<table style="border: none">
<th><img src="./docs/img/webui_overview_page.png" alt="drawing" width="395"/></th>
@@ -216,43 +222,69 @@ You can use these commands to get more information about the experiment
## **Documentation**

The main documentation can be found [here](https://nni.readthedocs.io/cn/latest/Overview.html) and is generated from this code repository.

Click to read:

* [NNI overview](docs/zh_CN/Overview.md)
* [Quick start](docs/en_US/Tutorial/QuickStart.md)
* [Contributing](docs/en_US/Tutorial/Contributing.md)
* [Examples](docs/en_US/examples.rst)
* [References](docs/en_US/reference.rst)
* [Web UI tutorial](docs/en_US/Tutorial/WebUI.md)
## **Getting Started**

* [Install NNI](docs/en_US/Tutorial/Installation.md)
* [Use the command line tool nnictl](docs/en_US/Tutorial/Nnictl.md)
* [Use NNIBoard](docs/en_US/Tutorial/WebUI.md)
* [How to define a search space](docs/en_US/Tutorial/SearchSpaceSpec.md)
* [How to implement trial code](docs/en_US/TrialExample/Trials.md)
* [How to choose a tuner/search algorithm](docs/en_US/Tuner/BuiltinTuner.md)
* [Configure an experiment](docs/en_US/Tutorial/ExperimentConfig.md)
* [How to use annotation](docs/en_US/TrialExample/Trials.md#nni-python-annotation)
## **Tutorials**

* [Run an experiment on OpenPAI](docs/en_US/TrainingService/PaiMode.md)
* [Run an experiment on Kubeflow](docs/en_US/TrainingService/KubeflowMode.md)
* [Run an experiment on the local machine (with multiple GPUs)](docs/en_US/TrainingService/LocalMode.md)
* [Run an experiment on multiple machines](docs/en_US/TrainingService/RemoteMachineMode.md)
* [Try different tuners](docs/en_US/Tuner/BuiltinTuner.md)
* [Try different assessors](docs/en_US/Assessor/BuiltinAssessor.md)
* [Implement a customized tuner](docs/en_US/Tuner/CustomizeTuner.md)
* [Implement a customized assessor](docs/en_US/Assessor/CustomizeAssessor.md)
* [Use an evolutionary algorithm to find good models for reading comprehension](docs/en_US/TrialExample/SquadEvolutionExamples.md)
## **Contributing**

Contributions of all kinds are very welcome, for example:

* Review [source code changes](https://github.com/microsoft/nni/pulls)
* Review the [documentation](https://github.com/microsoft/nni/tree/master/docs) and submit pull requests for anything from typos to new content
* Find issues labeled ['good first issue'](https://github.com/Microsoft/nni/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) or ['help-wanted'](https://github.com/microsoft/nni/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22); these are simple issues that new contributors can start with

Before submitting code, please follow these simple guidelines:

* [How to debug](docs/en_US/Tutorial/HowToDebug.md)
* [Code style and naming conventions](docs/en_US/Tutorial/Contributing.md)
* Set up the [NNI developer environment](docs/zh_CN/Tutorial/SetupNniDeveloperEnvironment.md)
* Read the [contributing guide](docs/en_US/Tutorial/Contributing.md) and get familiar with NNI's code contribution guidelines
## **External Repositories**

Here are some usage examples of NNI provided by contributors. Thanks to our lovely contributors, and everyone is welcome to join us!

* Run [ENAS](examples/tuners/enas_nni/README_zh_CN.md) in NNI
* Run [neural architecture search](examples/trials/nas_cifar10/README_zh_CN.md) in NNI

## **Feedback**

* [Report a bug](https://github.com/microsoft/nni/issues/new/choose)
* [Request a new feature](https://github.com/microsoft/nni/issues/new/choose)
* Join the discussion on [Gitter](https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
* Ask questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/nni?sort=Newest&edited=true) with the nni tag, or [file an issue on GitHub](https://github.com/microsoft/nni/issues/new/choose)
* We are working on the [How to debug](docs/zh_CN/Tutorial/HowToDebug.md) page; suggestions and questions are welcome
## **License**

...
@@ -15,7 +15,7 @@ jobs:
displayName: 'Install nni toolkit via source code'
- script: |
python3 -m pip install flake8 --user
IGNORE=./tools/nni_annotation/testcase/*:F821,./examples/trials/mnist-nas/*/mnist*.py:F821
python3 -m flake8 . --count --per-file-ignores=$IGNORE --select=E9,F63,F72,F82 --show-source --statistics
displayName: 'Run flake8 tests to find Python syntax errors and undefined names'
- script: |
...
files:
- source: '/**/*.[mM][dD]'
ignore:
- '*_%locale_with_underscore%.md'
- /docs
- /%locale_with_underscore%
- '**/ISSUE_TEMPLATE/**'
translation: /%original_path%/%file_name%_%locale_with_underscore%.md
- source: /docs/en_US/**/*
ignore:
- /docs/%locale_with_underscore%/**/*.*
translation: /docs/%locale_with_underscore%/**/%original_file_name%
@@ -60,7 +60,7 @@ trial:

### Write a tuner that leverages multi-phase:
Before writing a multi-phase tuner, we highly suggest you go through [Customize Tuner](https://nni.readthedocs.io/en/latest/Tuner/CustomizeTuner.html). As with writing a normal tuner, your tuner needs to inherit from the `Tuner` class. When you enable multi-phase through the configuration (set `multiPhase` to true), your tuner will get an additional parameter `trial_job_id` via the following tuner methods:
```
generate_parameters
generate_multiple_parameters
...
```
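For concreteness, here is a minimal multi-phase tuner sketch based on the description above. Only the `Tuner` base class and the extra `trial_job_id` parameter come from this document; the class name, the way the id is read from keyword arguments, and the toy search logic are illustrative assumptions, not the official API surface.

```python
# A minimal multi-phase tuner sketch (assumes `multiPhase: true` in the experiment config).
import random
from nni.tuner import Tuner


class MyMultiPhaseTuner(Tuner):
    def __init__(self):
        self.history = {}  # trial_job_id -> list of (parameters, result)

    def generate_parameters(self, parameter_id, **kwargs):
        trial_job_id = kwargs.get('trial_job_id')  # extra id available in multi-phase mode
        # Use the per-job history to decide the next configuration for the same trial job.
        previous = self.history.setdefault(trial_job_id, [])
        params = {'learning_rate': random.uniform(1e-4, 1e-1) / (len(previous) + 1)}
        previous.append((params, None))
        return params

    def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
        trial_job_id = kwargs.get('trial_job_id')
        # Record the reported metric against the trial job that produced it.
        self.history.setdefault(trial_job_id, []).append((parameters, value))

    def update_search_space(self, search_space):
        # Required override; this sketch ignores the search space for brevity.
        self.search_space = search_space
```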
@@ -2,7 +2,7 @@

TPE approaches were actually run asynchronously in order to make use of multiple compute nodes and to avoid wasting time waiting for trial evaluations to complete. For the TPE approach, the so-called constant liar approach was used: each time a candidate point x∗ was proposed, a fake fitness evaluation of y was assigned temporarily, until the evaluation completed and reported the actual loss f(x∗).
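As a rough illustration of the constant-liar bookkeeping described above, here is a small hypothetical sketch; the function names and the choice of the best observed loss as the temporary "lie" are assumptions made for the example, not code from NNI.

```python
# Constant-liar sketch: pending candidates get a temporary fake loss so the
# optimizer can keep proposing points while real evaluations are still running.
pending = {}       # candidate point (assumed hashable, e.g. a tuple) -> fake loss
observations = []  # list of (point, real loss) pairs seen so far


def propose(suggest_next):
    """Ask the model for a new candidate, then register a fake loss for it."""
    x_star = suggest_next(observations + list(pending.items()))
    # "Constant liar": pretend the candidate achieved a fixed value, e.g. the
    # best (min) loss observed so far, until the real result arrives.
    lie = min((loss for _, loss in observations), default=0.0)
    pending[x_star] = lie
    return x_star


def report(x_star, real_loss):
    """Replace the fake loss with the real one once the evaluation finishes."""
    pending.pop(x_star, None)
    observations.append((x_star, real_loss))
```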
## Introduction and Problems

### Sequential Model-based Global Optimization

@@ -19,7 +19,7 @@ Since calculation of p(y|x) is expensive, TPE approach modeled p(y|x) by p(x|y)
![](../../img/parallel_tpe_search_tpe.PNG)

where l(x) is the density formed by using the observations {x(i)} such that the corresponding loss f(x(i)) was less than y∗, and g(x) is the density formed by using the remaining observations. The TPE algorithm depends on a y∗ that is larger than the best observed f(x) so that some points can be used to form l(x). The TPE algorithm chooses y∗ to be some quantile γ of the observed y values, so that p(y<`y∗`) = γ, but no specific model for p(y) is necessary. The tree-structured form of l and g makes it easy to draw many candidates according to l and evaluate them according to g(x)/l(x). On each iteration, the algorithm returns the candidate x∗ with the greatest EI.
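The l(x)/g(x) selection rule can be sketched in a few lines. The kernel-density estimator and the helper name below are assumptions made purely for illustration (the sketch also assumes enough observations on both sides of y∗); this is not the actual NNI implementation.

```python
# Sketch of one TPE iteration in 1-D: split observations at the γ-quantile y*,
# fit densities l (good points) and g (the rest), then pick the candidate
# maximizing l(x)/g(x), which is monotone in the expected improvement.
import numpy as np
from scipy.stats import gaussian_kde


def tpe_suggest(xs, ys, gamma=0.25, n_candidates=64):
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    y_star = np.quantile(ys, gamma)            # choose y* so that p(y < y*) = γ
    l = gaussian_kde(xs[ys < y_star])          # density of the "good" observations
    g = gaussian_kde(xs[ys >= y_star])         # density of the remaining observations
    candidates = l.resample(n_candidates)[0]   # draw many candidates according to l
    scores = l(candidates) / np.maximum(g(candidates), 1e-12)
    return candidates[np.argmax(scores)]       # candidate with the greatest EI surrogate
```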
Here is a simulation of the TPE algorithm in a two-dimensional search space. The difference in background color represents different values. It can be seen that TPE combines exploration and exploitation very well. (Black indicates the points sampled in this round, and yellow indicates the points taken in the history.)
@@ -69,13 +69,13 @@ We have simulated the method above. The following figure shows the result of usi

### Branin-Hoo

The four optimization strategies presented in the last section are now compared on the Branin-Hoo function, which is a classical test case in global optimization.

![](../../img/parallel_tpe_search_branin.PNG)

The recommended values of a, b, c, r, s and t are: a = 1, b = 5.1 ⁄ (4π²), c = 5 ⁄ π, r = 6, s = 10 and t = 1 ⁄ (8π). This function has three global minimizers: (-3.14, 12.27), (3.14, 2.27), (9.42, 2.47).

Next is the comparison of the q-EI associated with the first q points (q ∈ [1,10]) given by the constant liar strategies (min and max), 2000 q-point designs uniformly drawn for every q, and 2000 q-point LHS designs taken at random for every q.

![](../../img/parallel_tpe_search_result.PNG)
...

@@ -41,11 +41,11 @@ There are 10 types to express your search space as follows:
* `@nni.variable(nni.uniform(low, high),name=variable)`
  Which means the variable value is sampled uniformly between low and high.
* `@nni.variable(nni.quniform(low, high, q),name=variable)`
  Which means the variable value is a value like clip(round(uniform(low, high) / q) * q, low, high), where the clip operation is used to constrain the generated value within the bounds.
* `@nni.variable(nni.loguniform(low, high),name=variable)`
  Which means the variable value is drawn according to exp(uniform(low, high)) so that the logarithm of the return value is uniformly distributed.
* `@nni.variable(nni.qloguniform(low, high, q),name=variable)`
  Which means the variable value is a value like clip(round(loguniform(low, high) / q) * q, low, high), where the clip operation is used to constrain the generated value within the bounds.
* `@nni.variable(nni.normal(mu, sigma),name=variable)`
  Which means the variable value is a real value that is normally distributed with mean mu and standard deviation sigma.
* `@nni.variable(nni.qnormal(mu, sigma, q),name=variable)`
@@ -84,10 +84,10 @@ h_pooling = max_pool(hidden_layer, pool_size)

`'''@nni.report_intermediate_result(metrics)'''`

`@nni.report_intermediate_result` is used to report an intermediate result; its usage is the same as `nni.report_intermediate_result` in the doc [Write a trial run on NNI](../TrialExample/Trials.md).

### 4. Annotate final result

`'''@nni.report_final_result(metrics)'''`

`@nni.report_final_result` is used to report the final result of the current trial; its usage is the same as `nni.report_final_result` in the doc [Write a trial run on NNI](../TrialExample/Trials.md).
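To see how these annotations fit together in a trial script, here is a tiny hypothetical sketch; the variable names, search ranges, and the dummy training loop are made up for illustration, while the annotation forms follow the ones documented above.

```python
# Minimal annotated trial sketch: the quoted '''@nni...''' lines are NNI annotations,
# which NNI rewrites at run time; without NNI the script still runs with the defaults.
import random

'''@nni.variable(nni.quniform(16, 128, 16), name=batch_size)'''
batch_size = 32
'''@nni.variable(nni.loguniform(0.0001, 0.1), name=learning_rate)'''
learning_rate = 0.01

accuracy = 0.0
for epoch in range(10):
    # Stand-in for one epoch of real training and evaluation.
    accuracy = min(1.0, accuracy + random.uniform(0.0, 0.1))
    '''@nni.report_intermediate_result(accuracy)'''

'''@nni.report_final_result(accuracy)'''
```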
@@ -45,7 +45,8 @@ All types of sampling strategies and their parameters are listed here:
  * When optimizing, this variable is constrained to a two-sided interval.
* {"_type":"quniform","_value":[low, high, q]}
  * Which means the variable value is a value like clip(round(uniform(low, high) / q) * q, low, high), where the clip operation is used to constrain the generated value within the bounds. For example, for _value specified as [0, 10, 2.5], possible values are [0, 2.5, 5.0, 7.5, 10.0]; for _value specified as [2, 10, 5], possible values are [2, 5, 10].
  * Suitable for a discrete value with respect to which the objective is still somewhat "smooth", but which should be bounded both above and below. If you want to uniformly choose an integer from a range [low, high], you can write `_value` like this: `[low, high, 1]`.
* {"_type":"loguniform","_value":[low, high]}
@@ -53,7 +54,7 @@ All types of sampling strategies and their parameters are listed here:
  * When optimizing, this variable is constrained to be positive.
* {"_type":"qloguniform","_value":[low, high, q]}
  * Which means the variable value is a value like clip(round(loguniform(low, high) / q) * q, low, high), where the clip operation is used to constrain the generated value within the bounds.
  * Suitable for a discrete variable with respect to which the objective is "smooth" and gets smoother with the size of the value, but which should be bounded both above and below.
* {"_type":"normal","_value":[mu, sigma]}
...
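The clip-and-round rule above translates directly into code. The following is a small illustrative sketch of how such values could be generated; the helper name is hypothetical and this is not NNI's internal implementation.

```python
# Sketch of the quniform rule described above: sample uniformly, quantize to a
# multiple of q, then clip back into [low, high]. qloguniform is analogous,
# with the initial draw taken from the loguniform distribution instead.
import random


def quniform(low, high, q):
    value = round(random.uniform(low, high) / q) * q
    return min(max(value, low), high)  # clip(value, low, high)


# e.g. with _value [0, 10, 2.5] the possible outcomes are 0, 2.5, 5.0, 7.5, 10.0
print(sorted({quniform(0, 10, 2.5) for _ in range(1000)}))
```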
@@ -4,22 +4,32 @@

Click the tab "Overview".
* See the experiment trial profile, the search space, and the trials with good performance.
![](../../img/webui-img/over1.png)
![](../../img/webui-img/over2.png)
* If your experiment has many trials, you can change the refresh interval here.
![](../../img/webui-img/refresh-interval.png)
* You can review and download the experiment results and the nni-manager/dispatcher log files from the download menu.
![](../../img/webui-img/download.png)
* If the experiment's status is "error", you can click "Learn about" in the error box to see the experiment log messages.
![](../../img/webui-img/log-error.png)
![](../../img/webui-img/review-log.png)
* You can click "Feedback" to report it if you have any questions.
## View job default metric

* Click the tab "Default Metric" to see the point graph of all trials. Hover over a point to see its default metric and its search space values.

![](../../img/webui-img/default-metric.png)
* Click the switch named "optimization curve" to see the experiment's optimization curve.
![](../../img/webui-img/best-curve.png)
## View hyper parameter

@@ -29,24 +39,26 @@ Click the tab "Hyper Parameter" to see the parallel graph.

* Choose two axes to swap their positions

![](../../img/hyperPara.png)
![](../../img/hyperPara.png) ![](../../img/hyperPara.png)
## View Trial Duration ## View Trial Duration
Click the tab "Trial Duration" to see the bar graph. Click the tab "Trial Duration" to see the bar graph.
![](../../img/trial_duration.png) ![](../../img/trial_duration.png)
## View Trial Intermediate Result Graph ## View Trial Intermediate Result Graph
Click the tab "Intermediate Result" to see the lines graph. Click the tab "Intermediate Result" to see the lines graph.
![](../../img/webui-img/trials_intermeidate.png) ![](../../img/webui-img/trials_intermeidate.png)
The intermediate result graph has a filter because trials may report many intermediate results during training. To use the filter and see the trend of certain trials, you need to provide two inputs.

In the first input, enter the intermediate count you care about, i.e. the point at which trials start to become better or worse. After selecting the intermediate count, enter the range (min and max) of the metric at that count in the second input. For example, in the picture below, the chosen intermediate count is 9 and the metric range is 60-80, so only the trials whose metric at that point falls within the range remain in the graph.
![](../../img/webui-img/filter-intermediate.png)
## View trials status

Click the tab "Trials Detail" to see the status of all the trials. Specifically:

@@ -54,26 +66,27 @@ Click the tab "Trials Detail" to see the status of all the trials. Specifically:

* Trial detail: trial's id, trial's duration, start time, end time, status, accuracy and search space file.

![](../../img/webui-img/detail-local.png)
* The button named "Add column" can select which column to show in the table. If you run an experiment that final result is dict, you can see other keys in the table. You can choose the column "Intermediate count" to watch the trial's progress.
* The button named "Add column" can select which column to show in the table. If you run an experiment that final result is dict, you can see other keys in the table.
![](../../img/webui-img/addColumn.png)

* If you want to compare some trials, you can select them and then click "Compare" to see the results.
![](../../img/webui-img/select-trial.png)
![](../../img/webui-img/compare.png)
* You can search for a specific trial by its id, status, Trial No., and parameters.
![](../../img/webui-img/search-trial.png)
* You can use the button named "Copy as python" to copy the trial's parameters.

![](../../img/webui-img/copyParameter.png)

* If you run on the OpenPAI or Kubeflow platform, you can also see the hdfsLog.

![](../../img/webui-img/detail-pai.png)
* Intermediate Result Graph: you can see the default and other keys in this graph by clicking the button in the operation column.
![](../../img/webui-img/intermediate-btn.png)
![](../../img/webui-img/intermediate.png)
* Kill: you can kill a job whose status is running.

![](../../img/webui-img/kill-running.png)
![](../../img/webui-img/canceled.png)
Updated binary images: docs/img/webui-img/addColumn.png, compare.png, copyParameter.png, detail-local.png, detail-pai.png.