Unverified commit 4539b4d3, authored by Yan Ni, committed by GitHub

fix warning for homepage build (#680)

* fix lex format for code

* fix doc links

* hide api reference

* delete orphan files

* fix deadlink in FAQ

* remove email address

* fix deadlinks for deleted orphan files
parent d76d379b
@@ -6,9 +6,9 @@
#### New tuner and assessor supports
-* Support [Metis tuner](./HowToChooseTuner.md#MetisTuner) as a new NNI tuner. The Metis algorithm has been proven to perform well for **online** hyper-parameter tuning.
+* Support [Metis tuner](./Builtin_Tuner.md#MetisTuner) as a new NNI tuner. The Metis algorithm has been proven to perform well for **online** hyper-parameter tuning.
* Support [ENAS customized tuner](https://github.com/countif/enas_nni), contributed by a GitHub community user. It is an algorithm for neural architecture search that learns network architectures via reinforcement learning and performs better than NAS.
-* Support [Curve fitting assessor](./HowToChooseTuner.md#Curvefitting) for early stopping using learning-curve extrapolation.
+* Support [Curve fitting assessor](./Builtin_Tuner.md#Curvefitting) for early stopping using learning-curve extrapolation.
* Advanced support of [Weight Sharing](./AdvancedNAS.md): enable weight sharing for NAS tuners, currently through NFS.
#### Training Service Enhancement
@@ -31,7 +31,7 @@
#### New tuner supports
-* Support [network morphism](./HowToChooseTuner.md#NetworkMorphism) as a new tuner
+* Support [network morphism](./Builtin_Tuner.md#NetworkMorphism) as a new tuner
#### Training Service improvements
@@ -64,9 +64,9 @@
* [Kubeflow Training service](./KubeflowMode.md)
  * Support tf-operator
-  * [Distributed trial example](../examples/trials/mnist-distributed/dist_mnist.py) on Kubeflow
+  * [Distributed trial example](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-distributed/dist_mnist.py) on Kubeflow
-* [Grid search tuner](../src/sdk/pynni/nni/README.md#Grid)
+* [Grid search tuner](https://github.com/Microsoft/nni/tree/master/src/sdk/pynni/nni/README.md#Grid)
-* [Hyperband tuner](../src/sdk/pynni/nni/README.md#Hyperband)
+* [Hyperband tuner](https://github.com/Microsoft/nni/tree/master/src/sdk/pynni/nni/README.md#Hyperband)
* Support launching NNI experiments on macOS
* WebUI
  * UI support for the Hyperband tuner
@@ -149,7 +149,7 @@
* Support [OpenPAI](https://github.com/Microsoft/pai) (aka pai) Training Service (see [here](./PAIMode.md) for instructions on how to submit an NNI job in pai mode)
  * Support training services on pai mode. NNI trials will be scheduled to run on the OpenPAI cluster
  * NNI trials' output (including logs and model files) will be copied to OpenPAI HDFS for further debugging and checking
-* Support [SMAC](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) tuner (see [here](HowToChooseTuner.md) for instructions on how to use the SMAC tuner)
+* Support [SMAC](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) tuner (see [here](Builtin_Tuner.md) for instructions on how to use the SMAC tuner)
* [SMAC](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) is based on Sequential Model-Based Optimization (SMBO). It adapts the most prominent previously used model class (Gaussian stochastic process models) and introduces the model class of random forests to SMBO to handle categorical parameters. The SMAC supported by NNI is a wrapper on [SMAC3](https://github.com/automl/SMAC3)
* Support NNI installation in [conda](https://conda.io/docs/index.html) and python virtual environments
* Others
...
@@ -5,7 +5,6 @@ References
:maxdepth: 3
Command Line <NNICTLDOC>
-Python API <sdk_reference>
Annotation <AnnotationSpec>
Configuration <ExperimentConfig>
Search Space <SearchSpaceSpec>
\ No newline at end of file
@@ -12,7 +12,7 @@ e.g. Three machines and you login in with account `bob` (Note: the account is no
## Setup NNI environment
-Install NNI on each of your machines following the install guide [here](GetStarted.md).
+Install NNI on each of your machines following the install guide [here](QuickStart.md).
## Run an experiment
@@ -20,7 +20,7 @@ Install NNI on another machine which has network accessibility to those three ma
We use `examples/trials/mnist-annotation` as an example here. Run `cat ~/nni/examples/trials/mnist-annotation/config_remote.yml` to see the detailed configuration file:
-```yml
+```yaml
authorName: default
experimentName: example_mnist
trialConcurrency: 1
...
@@ -13,7 +13,7 @@ We conclude the search space as follow:
6. ADD-SKIP (Identity between random layers).
7. REMOVE-SKIP (Removes random skip).
-![](../examples/trials/ga_squad/ga_squad.png)
+![](https://github.com/Microsoft/nni/tree/master/examples/trials/ga_squad/ga_squad.png)
### New version
We also have another version that costs less time and performs better; it will be released soon.
@@ -99,7 +99,7 @@ useAnnotation: false
#Your nni_manager ip
nniManagerIp: 10.10.10.10
tuner:
-  codeDir: ../../tuners/ga_customer_tuner
+  codeDir: https://github.com/Microsoft/nni/tree/master/examples/tuners/ga_customer_tuner
  classFileName: customer_tuner.py
  className: CustomerTuner
  classArgs:
...
How to start an experiment
===
## 1. Introduction
There are a few steps to start a new NNI experiment; here is the process.
<img src="./img/experiment_process.jpg" width="50%" height="50%" />
## 2. Details
### 2.1 Check environment
1. Check whether an old experiment is still running.
2. Check whether the port of the RESTful server is free.
3. Validate the content of the config YAML file.
4. Prepare a config file to record the information of this experiment.
### 2.2 Start RESTful server
Start a RESTful server process to manage the NNI experiment; the default port is 8080.
### 2.3 Check RESTful server
Check whether the RESTful server process started successfully and responds when a message is sent to it.
### 2.4 Set experiment config
Call the RESTful server to set the experiment config before starting an experiment; the experiment config includes the values from the config YAML file.
### 2.5 Check experiment config
Check the response content of the RESTful server; if the status code of the response is 200, the config was set successfully.
### 2.6 Start experiment
Call the RESTful server process to set up an experiment.
### 2.7 Check experiment
1. Check the response of the RESTful server.
2. Handle error information.
3. Print success or error information to the screen.
4. Save configuration information to the config file of nnictl.
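For illustration, step 2.3 could be implemented like the minimal sketch below. It assumes the default port 8080 and a status endpoint at `/api/v1/nni/check-status`; the exact route may differ between NNI versions.

```python
import time

import requests

def wait_for_rest_server(port=8080, retries=10):
    """Poll the RESTful server until it responds (step 2.3)."""
    url = 'http://localhost:{}/api/v1/nni/check-status'.format(port)
    for _ in range(retries):
        try:
            if requests.get(url).status_code == 200:
                return True   # server is up and responding
        except requests.ConnectionError:
            pass              # server not ready yet
        time.sleep(1)
    return False
```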
@@ -30,7 +30,7 @@ Refer to [SearchSpaceSpec.md](./SearchSpaceSpec.md) to learn more about search s
- Get configuration from Tuner
-```json
+```python
RECEIVED_PARAMS = nni.get_next_parameter()
```
`RECEIVED_PARAMS` is an object, for example:
@@ -38,30 +38,30 @@ RECEIVED_PARAMS = nni.get_next_parameter()
- Report metric data periodically (optional)
-```json
+```python
nni.report_intermediate_result(metrics)
```
`metrics` could be any python object. If users use an NNI built-in tuner/assessor, `metrics` can only have two formats: 1) a number (e.g., float, int); 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to the [assessor](Builtin_Assessors.md). Usually, `metrics` is a periodically evaluated loss or accuracy.
- Report performance of the configuration
-```json
+```python
nni.report_final_result(metrics)
```
-`metrics` could also be any python object. If users use an NNI built-in tuner/assessor, `metrics` follows the same format rule as in `report_intermediate_result`; the number indicates the model's performance, for example the model's accuracy or loss. This `metrics` is reported to the [tuner](Builtin-Tuner.md).
+`metrics` could also be any python object. If users use an NNI built-in tuner/assessor, `metrics` follows the same format rule as in `report_intermediate_result`; the number indicates the model's performance, for example the model's accuracy or loss. This `metrics` is reported to the [tuner](tuners.md).
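For instance, both of the following calls satisfy the built-in tuner/assessor format rule described above (the extra `loss` key is a hypothetical illustration):

```python
nni.report_final_result(0.93)                             # a plain number
nni.report_final_result({'default': 0.93, 'loss': 0.21})  # a dict with a numeric 'default' key
```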
### Step 3 - Enable NNI API
To enable NNI API mode, you need to set useAnnotation to *false* and provide the path of the SearchSpace file (the one you just defined in step 1):
-```json
+```yaml
useAnnotation: false
searchSpacePath: /path/to/your/search_space.json
```
You can refer to [here](ExperimentConfig.md) for more information about how to set up experiment configurations.
-*Please refer to [here]() for more APIs (e.g., `nni.get_sequence_id()`) provided by NNI.
+*Please refer to [here](sdk_reference.md) for more APIs (e.g., `nni.get_sequence_id()`) provided by NNI.
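Putting the three API calls together, a complete trial script roughly follows the pattern below. This is a minimal sketch: `toy_accuracy` is a stand-in for real training and evaluation, the parameter names match the example search space from step 1, and it assumes the script is launched by NNI so that `nni.get_next_parameter()` returns a dict.

```python
import nni

def toy_accuracy(params, epoch):
    # stand-in for real training/evaluation: improves over epochs and
    # peaks near dropout_rate=0.3, learning_rate=0.01
    return epoch / 10.0 - abs(params['dropout_rate'] - 0.3) - abs(params['learning_rate'] - 0.01)

if __name__ == '__main__':
    params = nni.get_next_parameter()
    accuracy = 0.0
    for epoch in range(10):
        accuracy = toy_accuracy(params, epoch)
        nni.report_intermediate_result(accuracy)  # periodic metric, goes to the assessor
    nni.report_final_result(accuracy)             # final metric, goes to the tuner
```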
<a name="nni-annotation"></a>
@@ -115,7 +115,7 @@ with tf.Session() as sess:
- `@nni.variable` will take effect on the line that follows it, which must be an assignment statement whose left-hand value is specified by the keyword `name` in `@nni.variable`.
- `@nni.report_intermediate_result`/`@nni.report_final_result` will send the data to the assessor/tuner at that line.
-For more information about annotation syntax and its usage, please refer to [Annotation README](../tools/nni_annotation/README.md).
+For more information about annotation syntax and its usage, please refer to [Annotation](AnnotationSpec.md).
### Step 2 - Enable NNI Annotation
...
@@ -77,7 +77,7 @@ We are ready for the experiment, let's now **run the config.yml file from your c
[2]: https://pytorch.org/
[3]: https://www.cs.toronto.edu/~kriz/cifar.html
[4]: https://github.com/Microsoft/nni/tree/master/examples/trials/cifar10_pytorch
-[5]: https://github.com/Microsoft/nni/blob/master/docs/howto_1_WriteTrial.md
+[5]: https://github.com/Microsoft/nni/blob/master/docs/Trials.md
[6]: https://github.com/Microsoft/nni/blob/master/examples/trials/cifar10_pytorch/config.yml
[7]: https://github.com/Microsoft/nni/blob/master/examples/trials/cifar10_pytorch/config_pai.yml
[8]: https://github.com/Microsoft/nni/blob/master/examples/trials/cifar10_pytorch/search_space.json
...
@@ -16,8 +16,8 @@
# import sys
# sys.path.insert(0, os.path.abspath('.'))
-import recommonmark
from recommonmark.parser import CommonMarkParser
+from recommonmark.transform import AutoStructify

# -- Project information ---------------------------------------------------
@@ -95,7 +95,7 @@ html_theme_options = {
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
+# html_static_path = ['_static']

# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
@@ -185,3 +185,10 @@ epub_exclude_files = ['search.html']
# -- Extension configuration -------------------------------------------------
+github_doc_root = 'https://github.com/Microsoft/nni/tree/master/doc/'
+def setup(app):
+    app.add_config_value('recommonmark_config', {
+        'url_resolver': lambda url: github_doc_root + url if url.startswith('..') else url,
+        'enable_auto_toc_tree': False,
+    }, True)
+    app.add_transform(AutoStructify)
@@ -155,7 +155,7 @@ In the config file, you could set some settings including:
* Algorithm setting: select the `tuner` algorithm, `tuner optimize_mode`, etc.
A config.yml example is as follows:
-```yml
+```yaml
authorName: default
experimentName: example_auto-gbdt
trialConcurrency: 1
...
# Write a Trial to Run on NNI

A **Trial** in NNI is an individual attempt at applying a set of parameters to a model.

To define an NNI trial, you first need to define the set of parameters and then update the model. NNI provides two approaches for you to define a trial: `NNI API` and `NNI Python annotation`.

## NNI API
>Step 1 - Prepare a SearchSpace parameters file.

An example is shown below:
```json
{
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
    "conv_size": {"_type": "choice", "_value": [2, 3, 5, 7]},
    "hidden_size": {"_type": "choice", "_value": [124, 512, 1024]},
    "learning_rate": {"_type": "uniform", "_value": [0.0001, 0.1]}
}
```
Refer to [SearchSpaceSpec.md](./SearchSpaceSpec.md) to learn more about search spaces.
>Step 2 - Update model codes

2.1 Declare NNI API: include `import nni` in your trial code to use NNI APIs.

2.2 Get predefined parameters: use the following code snippet

```python
RECEIVED_PARAMS = nni.get_next_parameter()
```

to get the hyper-parameter values assigned by the tuner. `RECEIVED_PARAMS` is an object, for example:

```json
{"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}
```

2.3 Report NNI results: use the API `nni.report_intermediate_result(accuracy)` to send `accuracy` to the assessor, and the API `nni.report_final_result(accuracy)` to send `accuracy` to the tuner.
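As a small illustration of steps 2.1 to 2.3 combined; the default values and the zero accuracy below are hypothetical placeholders, not part of the original example:

```python
import nni

# 2.1: declare the NNI API
# 2.2: hypothetical defaults, overridden by the tuner's assignment
params = {'conv_size': 5, 'hidden_size': 512,
          'learning_rate': 0.001, 'dropout_rate': 0.5}
params.update(nni.get_next_parameter())

# ... train a model with params and evaluate it ...
accuracy = 0.0  # placeholder for a real evaluation result

# 2.3: report results
nni.report_intermediate_result(accuracy)  # to the assessor
nni.report_final_result(accuracy)         # to the tuner
```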
**NOTE**:
- `accuracy` - could be any python object, but if you use an NNI built-in tuner/assessor, `accuracy` should be a numerical variable (e.g. float, int).
- assessor - decides which trial should stop early, based on the trial's history performance (the intermediate results of one trial).
- tuner - generates the next parameters/architecture, based on the exploration history (the final results of all trials).
>Step 3 - Enable NNI API

To enable NNI API mode, you need to set useAnnotation to *false* and provide the path of the SearchSpace file (the one you just defined in step 1):
```yaml
useAnnotation: false
searchSpacePath: /path/to/your/search_space.json
```
You can refer to [here](./ExperimentConfig.md) for more information about how to set up experiment configurations.

You can refer to [here](../examples/trials/README.md) for more information about how to write trial code using NNI APIs.
## NNI Python Annotation
An alternative way to write a trial is to use NNI's annotation syntax for Python. NNI annotations work like comments in your code; you don't have to restructure or make any other big changes to your existing code. With a few lines of NNI annotation, you will be able to:
* annotate the variables you want to tune
* specify the range in which you want to tune the variables
* annotate which variable you want to report as an intermediate result to the `assessor`
* annotate which variable you want to report as the final result (e.g. model accuracy) to the `tuner`.

Again, taking MNIST as an example, it only requires two steps to write a trial with NNI annotation.

>Step 1 - Update codes with annotations

Please refer to the following tensorflow code snippet for NNI annotation; the highlighted four lines are annotations that help you to: (1) tune batch\_size and (2) dropout\_rate, (3) report test\_acc every 100 steps, and (4) at last report test\_acc as the final result.

>What is noteworthy: as these newly added lines are annotations, they do not actually change your previous code logic; you can still run your code as usual in environments without NNI installed.
```diff
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
+   """@nni.variable(nni.choice(50, 250, 500), name=batch_size)"""
    batch_size = 128
    for i in range(10000):
        batch = mnist.train.next_batch(batch_size)
+       """@nni.variable(nni.choice(1, 5), name=dropout_rate)"""
        dropout_rate = 0.5
        mnist_network.train_step.run(feed_dict={mnist_network.images: batch[0],
                                                mnist_network.labels: batch[1],
                                                mnist_network.keep_prob: dropout_rate})
        if i % 100 == 0:
            test_acc = mnist_network.accuracy.eval(
                feed_dict={mnist_network.images: mnist.test.images,
                           mnist_network.labels: mnist.test.labels,
                           mnist_network.keep_prob: 1.0})
+           """@nni.report_intermediate_result(test_acc)"""

    test_acc = mnist_network.accuracy.eval(
        feed_dict={mnist_network.images: mnist.test.images,
                   mnist_network.labels: mnist.test.labels,
                   mnist_network.keep_prob: 1.0})
+   """@nni.report_final_result(test_acc)"""
```
>NOTE
>>`@nni.variable` will take effect on its following line
>>
>>`@nni.report_intermediate_result`/`@nni.report_final_result` will send the data to assessor/tuner at that line.
>>
>>Please refer to [Annotation README](../tools/nni_annotation/README.md) for more information about annotation syntax and its usage.
>Step 2 - Enable NNI Annotation
In the YAML configure file, you need to set *useAnnotation* to true to enable NNI annotation:
```yaml
useAnnotation: true
```
## More Trial Examples
* [Automatic Model Architecture Search for Reading Comprehension.](../examples/trials/ga_squad/README.md)
# **How To** - Customize Your Own Tuner

*A Tuner receives results from Trials as a metric to evaluate the performance of a specific parameter/architecture configuration, and sends the next hyper-parameter or architecture configuration to the Trials.*

So, if you want to implement a customized Tuner, you only need to:

1. Inherit from the base Tuner class
1. Implement the receive_trial_result and generate_parameters functions
1. Configure your customized tuner in the experiment YAML config file

Here is an example:

**1) Inherit from the base Tuner class**
```python
from nni.tuner import Tuner

class CustomizedTuner(Tuner):
    def __init__(self, ...):
        ...
```
**2) Implement the receive_trial_result and generate_parameters functions**
```python
from nni.tuner import Tuner

class CustomizedTuner(Tuner):
    def __init__(self, ...):
        ...

    def receive_trial_result(self, parameter_id, parameters, value):
        '''
        Record an observation of the objective function and train
        parameter_id: int
        parameters: object created by 'generate_parameters()'
        value: final metrics of the trial, including reward
        '''
        # your code implements here.
        ...

    def generate_parameters(self, parameter_id):
        '''
        Returns a set of trial (hyper-)parameters, as a serializable object
        parameter_id: int
        '''
        # your code implements here.
        return your_parameters
    ...
```
`receive_trial_result` receives `parameter_id`, `parameters`, and `value` as input. The `value` object your Tuner receives is exactly the same value that the Trial sent.

The `your_parameters` returned from the `generate_parameters` function will be packaged as a JSON object by the NNI SDK. The NNI SDK will then unpack the JSON object, so the Trial receives exactly the same `your_parameters` from the Tuner.

For example, if you implement `generate_parameters` like this:
```python
def generate_parameters(self, parameter_id):
    '''
    Returns a set of trial (hyper-)parameters, as a serializable object
    parameter_id: int
    '''
    # your code implements here.
    return {"dropout": 0.3, "learning_rate": 0.4}
```
It means your Tuner will always generate the parameters `{"dropout": 0.3, "learning_rate": 0.4}`. The Trial will then receive `{"dropout": 0.3, "learning_rate": 0.4}` by calling the API `nni.get_next_parameter()`. Once the trial ends with a result (normally some kind of metric), it can send the result to the Tuner by calling the API `nni.report_final_result()`, for example `nni.report_final_result(0.93)`. Then your Tuner's `receive_trial_result` function will receive the result like:
```python
parameter_id = 82347
parameters = {"dropout": 0.3, "learning_rate": 0.4}
value = 0.93
```
**Note that** if you want to access a file (e.g., `data.txt`) in the directory of your own tuner, you cannot use `open('data.txt', 'r')`. Instead, you should use the following:
```python
_pwd = os.path.dirname(__file__)
_fd = open(os.path.join(_pwd, 'data.txt'), 'r')
```
This is because your tuner is not executed in the directory of your tuner (i.e., `pwd` is not the directory of your own tuner).
**3) Configure your customized tuner in experiment YAML config file**
NNI needs to locate your customized tuner class and instantiate the class, so you need to specify the location of the customized tuner class and pass literal values as parameters to the \_\_init__ constructor.
```yaml
tuner:
  codeDir: /home/abc/mytuner
  classFileName: my_customized_tuner.py
  className: CustomizedTuner
  # Any parameter that needs to be passed to your tuner class __init__ constructor
  # can be specified in this optional classArgs field, for example
  classArgs:
    arg1: value1
```
For more detailed examples, see:
> * [evolution-tuner](../src/sdk/pynni/nni/evolution_tuner)
> * [hyperopt-tuner](../src/sdk/pynni/nni/hyperopt_tuner)
> * [evolution-based-customized-tuner](../examples/tuners/ga_customer_tuner)
## Write a more advanced automl algorithm

The information above is usually enough to write a general tuner. However, users may also want more information, such as intermediate results and trials' state (i.e., the information available to an assessor), in order to implement a more powerful automl algorithm. Therefore, we have another concept called `advisor`, which directly inherits from `MsgDispatcherBase` in [`src/sdk/pynni/nni/msg_dispatcher_base.py`](../src/sdk/pynni/nni/msg_dispatcher_base.py). Please refer to [here](./howto_3_CustomizedAdvisor.md) for how to write a customized advisor.
# **How To** - Customize Your Own Advisor

*Advisor targets the scenario where the automl algorithm needs the methods of both a tuner and an assessor. Advisor is similar to a tuner in that it receives trial parameter requests and final results, and generates trial parameters. It is also similar to an assessor in that it receives intermediate results and trials' end states, and can send commands to kill trials. Note that if you use an Advisor, you cannot use a tuner and an assessor at the same time.*

So, if you want to implement a customized Advisor, you only need to:

1. Define an Advisor inheriting from the MsgDispatcherBase class
1. Implement the methods with prefix `handle_`, except `handle_request`
1. Configure your customized Advisor in the experiment YAML config file

Here is an example:
**1) Define an Advisor inheriting from the MsgDispatcherBase class**
```python
from nni.msg_dispatcher_base import MsgDispatcherBase
class CustomizedAdvisor(MsgDispatcherBase):
def __init__(self, ...):
...
```
**2) Implement the methods with prefix `handle_`, except `handle_request`**

Please refer to the implementation of Hyperband ([src/sdk/pynni/nni/hyperband_advisor/hyperband_advisor.py](../src/sdk/pynni/nni/hyperband_advisor/hyperband_advisor.py)) for how to implement the methods; a rough skeleton is sketched below.
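The handler names below follow my reading of `MsgDispatcherBase` and the Hyperband advisor; treat them as an assumption about this NNI version's API rather than a guaranteed contract:

```python
from nni.msg_dispatcher_base import MsgDispatcherBase

class CustomizedAdvisor(MsgDispatcherBase):
    def handle_initialize(self, data):
        ...  # 'data' is the search space

    def handle_request_trial_jobs(self, data):
        ...  # 'data' is the number of trial jobs requested

    def handle_report_metric_data(self, data):
        ...  # intermediate/final metrics reported by a trial

    def handle_trial_end(self, data):
        ...  # a trial finished (succeeded, failed, or was killed)

    def handle_update_search_space(self, data):
        ...
```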
**3) Configure your customized Advisor in experiment YAML config file**

Similar to tuner and assessor, NNI needs to locate your customized Advisor class and instantiate the class, so you need to specify the location of the customized Advisor class and pass literal values as parameters to the \_\_init__ constructor.

```yaml
advisor:
  codeDir: /home/abc/myadvisor
  classFileName: my_customized_advisor.py
  className: CustomizedAdvisor
  # Any parameter that needs to be passed to your advisor class __init__ constructor
  # can be specified in this optional classArgs field, for example
  classArgs:
    arg1: value1
```
@@ -12,7 +12,7 @@ Contents
:titlesonly:
Overview
-GetStarted<QuickStart>
+QuickStart<QuickStart>
Tutorials
Examples
Reference
...
@@ -16,7 +16,7 @@ To use multi-phase experiment, please follow below steps:
1. Set `multiPhase` field to `true`, and configure your tuner implemented in step 1 as customized tuner in configuration file, for example:
-```yml
+```yaml
...
multiPhase: true
tuner:
...
@@ -19,18 +19,12 @@ API for tuners
.. autoclass:: nni.hyperopt_tuner.hyperopt_tuner.HyperoptTuner
   :members:
-.. autoclass:: nni.batch_tuner.batch_tuner.BatchTuner
-   :members:
.. autoclass:: nni.evolution_tuner.evolution_tuner.EvolutionTuner
   :members:
.. autoclass:: nni.gridsearch_tuner.gridsearch_tuner.GridSearchTuner
   :members:
-.. autoclass:: nni.networkmorphism_tuner.networkmorphism_tuner.NetworkMorphismTuner
-   :members:
.. autoclass:: nni.smac_tuner.smac_tuner.SMACTuner
   :members:
...
@@ -3,7 +3,7 @@
NNI supports many kinds of tuning algorithms to search for the best models and/or hyper-parameters for scikit-learn, and supports many kinds of environments like local machine, remote servers and cloud.
## 1. How to run the example
-To start using NNI, you should install the nni package, and use the command line tool `nnictl` to start an experiment. For more information about installation and preparing the environment, please refer [here](GetStarted.md).
+To start using NNI, you should install the nni package, and use the command line tool `nnictl` to start an experiment. For more information about installation and preparing the environment, please refer [here](QuickStart.md).
After you have installed NNI, you can enter the corresponding folder and start the experiment using the following commands:
```
nnictl create --config ./config.yml
...
@@ -83,16 +83,16 @@ Let's use a simple trial example, e.g. mnist, provided by NNI. After you install
    python ~/nni/examples/trials/mnist-annotation/mnist.py

-This command will be filled in the YAML configure file below. Please refer to [here](./howto_1_WriteTrial.md) for how to write your own trial.
+This command will be filled in the YAML configure file below. Please refer to [here](Trials.md) for how to write your own trial.

-**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm etc. Users can write their own tuner (refer to [here](./howto_2_CustomizedTuner.md)), but for simplicity, here we choose a tuner provided by NNI as below:
+**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm etc. Users can write their own tuner (refer to [here](Customize_Tuner.md)), but for simplicity, here we choose a tuner provided by NNI as below:

    tuner:
      builtinTunerName: TPE
      classArgs:
        optimize_mode: maximize

-*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments passed to the tuner (the spec of builtin tuners can be found [here]()), and *optimize_mode* indicates whether you want to maximize or minimize your trial's result.
+*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments passed to the tuner (the spec of builtin tuners can be found [here](Builtin_Tuner.md)), and *optimize_mode* indicates whether you want to maximize or minimize your trial's result.

**Prepare configure file**: Since you already know which trial code you are going to run and which tuner you are going to use, it is time to prepare the YAML configure file. NNI provides a demo configure file for each trial example; run `cat ~/nni/examples/trials/mnist-annotation/config.yml` to see it. Its content is basically as shown below:
@@ -124,7 +124,7 @@ trial:
  gpuNum: 0
```

-Here *useAnnotation* is true because this trial example uses our python annotation (refer to [here](../tools/annotation/README.md) for details). For trial, we should provide *trialCommand*, which is the command to run the trial, and *trialCodeDir*, where the trial code is; the command will be executed in this directory. We should also provide how many GPUs a trial requires.
+Here *useAnnotation* is true because this trial example uses our python annotation (refer to [here](AnnotationSpec.md) for details). For trial, we should provide *trialCommand*, which is the command to run the trial, and *trialCodeDir*, where the trial code is; the command will be executed in this directory. We should also provide how many GPUs a trial requires.

With all these steps done, we can run the experiment with the following command:
...
**Tutorial: Run an experiment on multiple machines**
===

NNI supports running an experiment on multiple machines through the SSH channel, called `remote` mode. NNI assumes that you have access to those machines and have already set up the environment for running deep learning training code.

e.g. Three machines, and you log in with account `bob` (Note: the account is not necessarily the same on different machines):

| IP       | Username | Password |
| -------- |----------|----------|
| 10.1.1.1 | bob      | bob123   |
| 10.1.1.2 | bob      | bob123   |
| 10.1.1.3 | bob      | bob123   |

## Setup NNI environment
Install NNI on each of your machines following the install guide [here](GetStarted.md).

For remote machines that are used only to run trials but not nnictl, you can just install the python SDK:

* __Install python SDK through pip__

      python3 -m pip install --user --upgrade nni-sdk

## Run an experiment
Install NNI on another machine which has network access to those three machines above, or just use any machine above to run the nnictl command line tool.

We use `examples/trials/mnist-annotation` as an example here. Run `cat ~/nni/examples/trials/mnist-annotation/config_remote.yml` to see the detailed configuration file:
```yaml
authorName: default
experimentName: example_mnist
trialConcurrency: 1
maxExecDuration: 1h
maxTrialNum: 10
#choice: local, remote, pai
trainingServicePlatform: remote
#choice: true, false
useAnnotation: true
tuner:
  #choice: TPE, Random, Anneal, Evolution, BatchTuner
  #SMAC (SMAC should be installed through nnictl)
  builtinTunerName: TPE
  classArgs:
    #choice: maximize, minimize
    optimize_mode: maximize
trial:
  command: python3 mnist.py
  codeDir: .
  gpuNum: 0
#machineList can be empty if the platform is local
machineList:
  - ip: 10.1.1.1
    username: bob
    passwd: bob123
    #port can be skipped if using the default ssh port 22
    #port: 22
  - ip: 10.1.1.2
    username: bob
    passwd: bob123
  - ip: 10.1.1.3
    username: bob
    passwd: bob123
```

Simply fill in the `machineList` section and then run:

```
nnictl create --config ~/nni/examples/trials/mnist-annotation/config_remote.yml
```

to start the experiment.
# Tutorial - Try different Tuners and Assessors

NNI provides an easy-to-adopt approach to set up parameter-tuning algorithms as well as early-stop policies; we call them **Tuners** and **Assessors**.

**Tuner** specifies the algorithm you use to generate hyperparameter sets for each trial. In NNI, we support two approaches to set the tuner (a config sketch for both follows the list below).

1. Directly use a tuner provided by the NNI sdk

    required fields: builtinTunerName and classArgs.

2. Customize your own tuner file

    required fields: codeDir, classFileName, className and classArgs.
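For illustration, the two approaches look roughly like this in the experiment config file (the customized-tuner paths and class names are hypothetical placeholders):

```yaml
# Approach 1: a builtin tuner from the NNI sdk
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize

# Approach 2: a customized tuner (hypothetical paths and names)
#tuner:
#  codeDir: /home/abc/mytuner
#  classFileName: my_customized_tuner.py
#  className: CustomizedTuner
#  classArgs:
#    arg1: value1
```

An assessor is configured the same way, with `builtinAssessorName` in place of `builtinTunerName`.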
### **Learn More about tuners**
* For the detailed definition and usage of the required fields, please refer to [Config an experiment](ExperimentConfig.md)
* [Tuners in the latest NNI release](HowToChooseTuner.md)
* [How to implement your own tuner](howto_2_CustomizedTuner.md)
**Assessor** specifies the algorithm you use to apply an early-stop policy. In NNI, there are two approaches to set the assessor.

1. Directly use an assessor provided by the NNI sdk

    required fields: builtinAssessorName and classArgs.

2. Customize your own assessor file

    required fields: codeDir, classFileName, className and classArgs.

### **Learn More about assessors**
* For the detailed definition and usage of the required fields, please refer to [Config an experiment](ExperimentConfig.md)
* Find more detailed instructions about [enabling an assessor](EnableAssessor.md)
* [How to implement your own assessor](../examples/assessors/README.md)
## **Learn More**
* [How to run an experiment on local (with multiple GPUs)?](tutorial_1_CR_exp_local_api.md)
* [How to run an experiment on multiple machines?](tutorial_2_RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](PAIMode.md)
@@ -20,7 +20,7 @@ pip install -r requirements.txt
Modify `examples/trials/network_morphism/cifar10/config.yml` to fit your own task, note that searchSpacePath is not required in our configuration. Here is the default configuration:
-```yml
+```yaml
authorName: default
experimentName: example_cifar10-network-morphism
trialConcurrency: 1
...