"results/scale_load.py" did not exist on "90b2ec87ea64eb93f3cec0fb31e8d39a74a1a10f"
Unverified commit 39782f12, authored by xuehui, committed by GitHub

Update all doc structure (#1285)

* update readme in ga_squad

* update readme

* fix typo

* Update README.md

* Update README.md

* Update README.md

* update readme

* update

* fix path

* update reference

* fix bug in config file

* update nni_arch_overview.png

* update

* update

* update

* update home page

* update default value of metis tuner

* fix broken link in CommunitySharings

* update docs about nested search space

* update docs

* rename cascading to nested

* fix broken link

* update

* update issue link

* fix typo

* update evaluate parameters from GMM

* refine code

* fix optimized mode bug

* update import warning

* update warning

* update optimized mode

* first commit for update doc structure

* move the localmode and remotemode docs to trainingservice

* update
parent b7cd20e6
@@ -3,6 +3,6 @@ Batch Tuner on NNI
## Batch Tuner
-Batch tuner allows users to simply provide several configurations (i.e., choices of hyper-parameters) for their trial code. After finishing all the configurations, the experiment is done. Batch tuner only supports the `choice` type in [search space spec](SearchSpaceSpec.md).
+Batch tuner allows users to simply provide several configurations (i.e., choices of hyper-parameters) for their trial code. After finishing all the configurations, the experiment is done. Batch tuner only supports the `choice` type in [search space spec](../Tutorial/SearchSpaceSpec.md).
Suggested scenario: If the configurations you want to try have been decided, you can list them in the SearchSpace file (using `choice`) and run them with the batch tuner.
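For illustration, a minimal sketch of a batch-tuner search space, assuming the `choice`-typed `combine_params` layout described in the tuner summary later in this commit; the hyper-parameter names and values are made up:

```python
# A minimal sketch of a batch-tuner search space (assumption: the
# `combine_params`/`choice` layout from the BuiltinTuner section;
# hyper-parameter names and values are made up).
import json

search_space = {
    "combine_params": {
        "_type": "choice",
        "_value": [
            {"optimizer": "Adam", "learning_rate": 0.001},
            {"optimizer": "SGD", "learning_rate": 0.01},
            {"optimizer": "SGD", "learning_rate": 0.1},
        ],
    }
}

# Each trial receives exactly one of the listed configurations; the
# experiment finishes after all of them have run.
with open("search_space.json", "w") as f:
    json.dump(search_space, f, indent=2)
```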
@@ -12,7 +12,7 @@ Below we divide introduction of the BOHB process into two parts:
We follow Hyperband’s way of choosing the budgets and continue to use SuccessiveHalving; for more details, you can refer to the [Hyperband in NNI](HyperbandAdvisor.md) and the [reference paper of Hyperband](https://arxiv.org/abs/1603.06560). This procedure is summarized by the pseudocode below.
-![](../img/bohb_1.png)
+![](../../img/bohb_1.png)
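For illustration, a runnable sketch of the SuccessiveHalving budget geometry the pseudocode describes; this is a simplification (real Hyperband also adjusts the number of starting configurations per bracket), not NNI's implementation:

```python
# A sketch of one SuccessiveHalving bracket as used by Hyperband/BOHB
# (simplified: plain eta-geometric rule only).
import math

def successive_halving_schedule(min_budget, max_budget, eta):
    """Yield (num_configs, budget_per_config) for each rung of one bracket."""
    # floor of log_eta(max/min), with a small tolerance for float error
    s_max = int(math.log(max_budget / min_budget, eta) + 1e-9)
    for i in range(s_max + 1):
        num_configs = eta ** (s_max - i)   # keep the top 1/eta each rung
        budget = min_budget * eta ** i     # budget grows by a factor of eta
        yield num_configs, budget

# With max_budget=9, min_budget=1, eta=3 (the workflow example below):
# [(9, 1), (3, 3), (1, 9)] -- i.e. s_max = 2.
print(list(successive_halving_schedule(1, 9, 3)))
```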
### BO (Bayesian Optimization)
@@ -20,11 +20,11 @@ The BO part of BOHB closely resembles TPE, with one major difference: we opted f
Tree Parzen Estimator (TPE): uses a KDE (kernel density estimator) to model the densities.
-![](../img/bohb_2.png)
+![](../../img/bohb_2.png)
To fit useful KDEs, we require a minimum number of data points Nmin; this is set to d + 1 for our experiments, where d is the number of hyperparameters. To build a model as early as possible, we do not wait until Nb = |Db|, the number of observations for budget b, is large enough to satisfy q · Nb ≥ Nmin. Instead, after initializing with Nmin + 2 random configurations, we choose the
-![](../img/bohb_3.png)
+![](../../img/bohb_3.png)
best and worst configurations, respectively, to model the two densities.
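For illustration, a hedged sketch of this good/bad density split, using `scipy.stats.gaussian_kde` as a stand-in for BOHB's multivariate KDE; the split fraction and the toy data are assumptions:

```python
# A sketch of the TPE-style split: fit one KDE on the best observations
# (l) and one on the rest (g), then score candidates by l(x)/g(x).
# Illustrative only -- BOHB's real KDEs are budget-dependent.
import numpy as np
from scipy.stats import gaussian_kde

def split_and_fit(configs, losses, top_fraction=0.15):
    configs = np.asarray(configs, dtype=float)
    order = np.argsort(losses)                   # smaller loss = better
    n_good = max(int(len(order) * top_fraction), 3)
    l = gaussian_kde(configs[order[:n_good]].T)  # density of good configs
    g = gaussian_kde(configs[order[n_good:]].T)  # density of bad configs
    return l, g

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(30, 2))          # 30 observations, d = 2
y = ((X - 0.5) ** 2).sum(axis=1)                 # toy loss surface
l, g = split_and_fit(X, y)
candidate = rng.uniform(0.0, 1.0, size=(2, 1))
print((l(candidate) / g(candidate)).item())      # higher ratio = more promising
```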
@@ -32,14 +32,14 @@ Note that we also sample a constant fraction named **random fraction** of the co
## 2. Workflow
-![](../img/bohb_6.jpg)
+![](../../img/bohb_6.jpg)
This image shows the workflow of BOHB. Here we set max_budget = 9, min_budget = 1, eta = 3, and others as default. In this case, s_max = 2, so we will continuously run the {s=2, s=1, s=0, s=2, s=1, s=0, ...} cycle. In each stage of SuccessiveHalving (the orange box), we pick the top 1/eta configurations and run them again with more budget, repeating the SuccessiveHalving stages until the end of this iteration. At the same time, we collect the configurations, budgets and final metrics of each trial, and use these to build a multidimensional KDE model with the key "budget".
Multidimensional KDE is used to guide the selection of configurations for the next iteration.
The sampling procedure (using the multidimensional KDE to guide the selection) is summarized by the pseudocode below.
-![](../img/bohb_4.png)
+![](../../img/bohb_4.png)
## 3. Usage
@@ -96,6 +96,6 @@ code implementation: [examples/trials/mnist-advisor](https://github.com/Microsof
We chose BOHB to build a CNN on the MNIST dataset. The following are our final experimental results:
-![](../img/bohb_5.png)
+![](../../img/bohb_5.png)
More experimental results can be found in the [reference paper](https://arxiv.org/abs/1807.01774); we can see that BOHB makes good use of previous results and strikes a balanced trade-off between exploration and exploitation.
\ No newline at end of file
@@ -2,7 +2,7 @@
NNI provides state-of-the-art tuning algorithms as built-in tuners and makes them easy to use. Below is a brief summary of NNI's current built-in tuners:
-Note: Click the **Tuner's name** to get the Tuner's installation requirements, suggested scenario and usage example. The link for a detailed description of the algorithm is at the end of the suggested scenario of each tuner. Here is an [article](./CommunitySharings/HpoComparision.md) about the comparison of different Tuners on several problems.
+Note: Click the **Tuner's name** to get the Tuner's installation requirements, suggested scenario and usage example. The link for a detailed description of the algorithm is at the end of the suggested scenario of each tuner. Here is an [article](../CommunitySharings/HpoComparision.md) about the comparison of different Tuners on several problems.
Currently we support the following algorithms:
@@ -211,7 +211,7 @@ The search space file includes the high-level key `combine_params`. The type of
**Suggested scenario**
-Note that the only acceptable types of search space are `choice`, `quniform`, `qloguniform`. **The number `q` in `quniform` and `qloguniform` has special meaning (different from the spec in [search space spec](./SearchSpaceSpec.md)). It means the number of values that will be sampled evenly between `low` and `high`.**
+Note that the only acceptable types of search space are `choice`, `quniform`, `qloguniform`. **The number `q` in `quniform` and `qloguniform` has special meaning (different from the spec in [search space spec](../Tutorial/SearchSpaceSpec.md)). It means the number of values that will be sampled evenly between `low` and `high`.**
It is suggested when the search space is small; it is then feasible to exhaustively sweep the whole search space. [Detailed Description](./GridsearchTuner.md)
...
@@ -3,4 +3,4 @@ Grid Search on NNI
## Grid Search
-Grid Search performs an exhaustive search through a manually specified subset of the hyperparameter space defined in the searchspace file. Note that the only acceptable types of search space are `choice`, `quniform`, `qloguniform`. **The number `q` in `quniform` and `qloguniform` has special meaning (different from the spec in [search space spec](SearchSpaceSpec.md)). It means the number of values that will be sampled evenly between `low` and `high`.**
+Grid Search performs an exhaustive search through a manually specified subset of the hyperparameter space defined in the searchspace file. Note that the only acceptable types of search space are `choice`, `quniform`, `qloguniform`. **The number `q` in `quniform` and `qloguniform` has special meaning (different from the spec in [search space spec](../Tutorial/SearchSpaceSpec.md)). It means the number of values that will be sampled evenly between `low` and `high`.**
\ No newline at end of file
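For illustration, a sketch of one plausible reading of this `q` semantics, assuming "sampled evenly" means linear spacing for `quniform` and log spacing for `qloguniform`; this is an interpretation, not NNI's exact code:

```python
# A sketch of grid search's special `q`: the number of values sampled
# evenly between `low` and `high` (assumed inclusive; log-spaced for
# qloguniform).
import numpy as np

def grid_values(spec_type, low, high, q):
    if spec_type == "quniform":
        return np.linspace(low, high, num=q)
    if spec_type == "qloguniform":
        return np.exp(np.linspace(np.log(low), np.log(high), num=q))
    raise ValueError(f"grid search does not support {spec_type!r} here")

print(grid_values("quniform", 0.0, 1.0, 5))       # [0.   0.25 0.5  0.75 1.  ]
print(grid_values("qloguniform", 1e-4, 1e-1, 4))  # ~[1e-4 1e-3 1e-2 1e-1]
```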
@@ -5,4 +5,4 @@ SMAC Tuner on NNI
[SMAC](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) is based on Sequential Model-Based Optimization (SMBO). It adapts the most prominent previously used model class (Gaussian stochastic process models) and introduces the model class of random forests to SMBO in order to handle categorical parameters. The SMAC supported by NNI is a wrapper around [the SMAC3 github repo](https://github.com/automl/SMAC3).
-Note that SMAC on NNI only supports a subset of the types in [search space spec](SearchSpaceSpec.md), including `choice`, `randint`, `uniform`, `loguniform`, `quniform(q=1)`.
+Note that SMAC on NNI only supports a subset of the types in [search space spec](../Tutorial/SearchSpaceSpec.md), including `choice`, `randint`, `uniform`, `loguniform`, `quniform(q=1)`.
\ No newline at end of file
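For illustration, a search-space sketch restricted to the types listed above; the parameter names, ranges, and the exact `_value` layouts are assumptions to check against the search space spec:

```python
# A sketch of a search space using only the types listed above as
# supported by SMAC on NNI (hypothetical parameters and ranges; verify
# the `_value` layouts against the search space spec).
import json

search_space = {
    "optimizer":     {"_type": "choice",     "_value": ["SGD", "Adam"]},
    "hidden_size":   {"_type": "randint",    "_value": [16, 256]},
    "dropout_rate":  {"_type": "uniform",    "_value": [0.1, 0.9]},
    "learning_rate": {"_type": "loguniform", "_value": [1e-5, 1e-1]},
    "batch_size":    {"_type": "quniform",   "_value": [16, 128, 1]},  # q = 1
}
print(json.dumps(search_space, indent=2))
```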
@@ -84,10 +84,10 @@ h_pooling = max_pool(hidden_layer, pool_size)
`'''@nni.report_intermediate_result(metrics)'''`
-`@nni.report_intermediate_result` is used to report an intermediate result; its usage is the same as `nni.report_intermediate_result` in [Trials.md](Trials.md)
+`@nni.report_intermediate_result` is used to report an intermediate result; its usage is the same as `nni.report_intermediate_result` in [Trials.md](../TrialExample/Trials.md)
### 4. Annotate final result
`'''@nni.report_final_result(metrics)'''`
-`@nni.report_final_result` is used to report the final result of the current trial; its usage is the same as `nni.report_final_result` in [Trials.md](Trials.md)
+`@nni.report_final_result` is used to report the final result of the current trial; its usage is the same as `nni.report_final_result` in [Trials.md](../TrialExample/Trials.md)
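For illustration, a minimal trial script using both annotations; since the annotations are plain string literals, the file also runs as ordinary Python. `train_one_epoch` is a placeholder, not an NNI API:

```python
# A sketch of NNI annotations in a trial script. The triple-quoted
# annotations are inert string literals, so this file runs unmodified
# even without NNI. `train_one_epoch` stands in for real training code.

def train_one_epoch(lr):
    return 1.0 - lr  # fake "accuracy" for illustration

'''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)'''
learning_rate = 0.01

accuracy = 0.0
for epoch in range(10):
    accuracy = train_one_epoch(learning_rate)
    '''@nni.report_intermediate_result(accuracy)'''

'''@nni.report_final_result(accuracy)'''
```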
@@ -6,7 +6,7 @@ Firstly, if you are unsure or afraid of anything, just ask or submit the issue o
However, for those individuals who want a bit more guidance on the best way to contribute to the project, read on. This document covers all the points we're looking for in your contributions, raising the chances that your contributions are quickly merged or addressed.
-Looking for a quickstart? Get acquainted with our [Get Started](./QuickStart.md) guide.
+Looking for a quickstart? Get acquainted with our [Get Started](QuickStart.md) guide.
There are a few simple guidelines that you need to follow before providing your hacks.
...
@@ -188,9 +188,9 @@ machineList:
* __remote__ submit trial jobs to remote Ubuntu machines, and the __machineList__ field should be filled in to set up the SSH connection to the remote machine.
-* __pai__ submit trial jobs to [OpenPai](https://github.com/Microsoft/pai) of Microsoft. For more details of pai configuration, please refer to the [PaiModeDoc](./PaiMode.md)
+* __pai__ submit trial jobs to [OpenPai](https://github.com/Microsoft/pai) of Microsoft. For more details of pai configuration, please refer to the [PaiModeDoc](../TrainingService/PaiMode.md)
-* __kubeflow__ submit trial jobs to [kubeflow](https://www.kubeflow.org/docs/about/kubeflow/). NNI supports kubeflow based on normal kubernetes and [azure kubernetes](https://azure.microsoft.com/en-us/services/kubernetes-service/).
+* __kubeflow__ submit trial jobs to [kubeflow](https://www.kubeflow.org/docs/about/kubeflow/). NNI supports kubeflow based on normal kubernetes and [azure kubernetes](https://azure.microsoft.com/en-us/services/kubernetes-service/). For details, please refer to the [KubeflowDoc](../TrainingService/KubeflowMode.md)
* __searchSpacePath__
  * Description
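For illustration, a sketch of a remote-mode experiment config built as a Python dict and dumped to YAML; the field names follow this section's keys (`trainingServicePlatform`, `searchSpacePath`, `multiPhase`, `machineList`), but every value and any field not shown in the hunk above is an assumption:

```python
# A hedged sketch of a remote-mode experiment config (values are made up;
# verify field names against the ExperimentConfig doc this commit edits).
import yaml

config = {
    "authorName": "default",
    "experimentName": "example_remote",
    "trialConcurrency": 2,
    "maxExecDuration": "1h",
    "maxTrialNum": 10,
    "trainingServicePlatform": "remote",   # local | remote | pai | kubeflow
    "searchSpacePath": "search_space.json",
    "useAnnotation": False,
    "multiPhase": False,
    "tuner": {
        "builtinTunerName": "TPE",
        "classArgs": {"optimize_mode": "maximize"},
    },
    "trial": {"command": "python3 mnist.py", "codeDir": ".", "gpuNum": 0},
    "machineList": [  # required when trainingServicePlatform is "remote"
        {"ip": "10.1.1.1", "port": 22, "username": "bob", "passwd": "bob123"},
    ],
}

with open("config_remote.yml", "w") as f:
    yaml.safe_dump(config, f, default_flow_style=False, sort_keys=False)
```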
@@ -209,7 +209,7 @@ machineList:
* __multiPhase__
  * Description
-__multiPhase__ enables [multi-phase experiments](./MultiPhase.md).
+__multiPhase__ enables [multi-phase experiments](../AdvancedFeature/MultiPhase.md).
* __multiThread__
  * Description
...
@@ -57,7 +57,7 @@ Dispatcher fails. Usually, for some new users of NNI, it means that tuner fails.
Take the latter situation as an example. If you write a customized tuner whose \_\_init\_\_ function has an argument called `optimize_mode` which you do not provide in your configuration file, NNI will fail to run your tuner, so the experiment fails. You can see errors in the webUI like:
-![](../img/dispatcher_error.jpg)
+![](../../img/dispatcher_error.jpg)
Here we can see it is a dispatcher error. So we can check the dispatcher's log, which might look like:
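For illustration, a sketch of the failure mode just described: a customized tuner whose `__init__` requires `optimize_mode`. Constructing it without that argument (because `classArgs` omits it) raises a `TypeError`, which surfaces as the dispatcher error shown above. Method bodies are stubs:

```python
# A sketch of a customized tuner whose __init__ requires `optimize_mode`.
# Constructing it without that argument (e.g. because classArgs in the
# experiment config omits it) raises TypeError, which shows up in the
# webUI as a dispatcher error.
from nni.tuner import Tuner

class MyCustomTuner(Tuner):
    def __init__(self, optimize_mode):
        self.optimize_mode = optimize_mode

    def generate_parameters(self, parameter_id, **kwargs):
        return {"learning_rate": 0.01}   # stub suggestion

    def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
        pass                             # stub: ignore results

    def update_search_space(self, search_space):
        pass                             # stub: ignore search space

# MyCustomTuner()  # TypeError: missing required argument 'optimize_mode'
```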
@@ -82,7 +82,7 @@ It means your trial code (which is run by NNI) fails. This kind of error is stro
A common example of this would be running the mnist example without installing tensorflow. Surely there is an import error (that is, trying to import tensorflow in your trial code without installing it), and thus every trial fails.
-![](../img/trial_error.jpg)
+![](../../img/trial_error.jpg)
As it shows, every trial has a log path where you can find the trial's log and stderr.
@@ -47,7 +47,7 @@ After you prepare NNI's environment, you could start a new experiment using `nni
## Using docker in remote platform
-NNI supports starting experiments in [remoteTrainingService](RemoteMachineMode.md) and running trial jobs on remote machines. Since docker can start an independent Ubuntu system as an SSH server, a docker container can be used as the remote machine in NNI's remote mode.
+NNI supports starting experiments in [remoteTrainingService](../TrainingService/RemoteMachineMode.md) and running trial jobs on remote machines. Since docker can start an independent Ubuntu system as an SSH server, a docker container can be used as the remote machine in NNI's remote mode.
### Step 1: Setting docker environment
@@ -78,7 +78,7 @@ If you use your own docker image as remote server, please make sure that this im
### Step 3: Run NNI experiments
-You could set your config file to the remote platform and set the `machineList` configuration to connect to your docker SSH server ([refer](RemoteMachineMode.md)). Note that you should set the correct `port`, `username`, and `passwd` or `sshKeyPath` of your host machine.
+You could set your config file to the remote platform and set the `machineList` configuration to connect to your docker SSH server ([refer](../TrainingService/RemoteMachineMode.md)). Note that you should set the correct `port`, `username`, and `passwd` or `sshKeyPath` of your host machine.
`port:` The host machine's port, mapped to docker's SSH port.
...
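For illustration, a sketch of a `machineList` entry for a docker-based remote machine, assuming the container's SSH port 22 was mapped to host port 2222 (e.g. `docker run -p 2222:22 ...`); all values are hypothetical:

```python
# A sketch of a machineList entry for a docker-based "remote" machine.
# Assumption: the container was started with its SSH port mapped, e.g.
#   docker run -d -p 2222:22 my-nni-ssh-image
# All names and values here are hypothetical.
machine_list = [
    {
        "ip": "192.168.1.10",  # the host running the container
        "port": 2222,          # host port mapped to the container's port 22
        "username": "root",
        "passwd": "example",   # or use "sshKeyPath" instead of "passwd"
    }
]
print(machine_list)
```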
@@ -93,8 +93,8 @@ Below are the minimum system requirements for NNI on Windows, Windows 10.1809 is
* [Use NNIBoard](WebUI.md)
* [Define search space](SearchSpaceSpec.md)
* [Config an experiment](ExperimentConfig.md)
-* [How to run an experiment on local (with multiple GPUs)?](LocalMode.md)
-* [How to run an experiment on multiple machines?](RemoteMachineMode.md)
-* [How to run an experiment on OpenPAI?](PaiMode.md)
-* [How to run an experiment on Kubernetes through Kubeflow?](KubeflowMode.md)
-* [How to run an experiment on Kubernetes through FrameworkController?](FrameworkControllerMode.md)
+* [How to run an experiment on local (with multiple GPUs)?](../TrainingService/LocalMode.md)
+* [How to run an experiment on multiple machines?](../TrainingService/RemoteMachineMode.md)
+* [How to run an experiment on OpenPAI?](../TrainingService/PaiMode.md)
+* [How to run an experiment on Kubernetes through Kubeflow?](../TrainingService/KubeflowMode.md)
+* [How to run an experiment on Kubernetes through FrameworkController?](../TrainingService/FrameworkControllerMode.md)