Unverified commit efa479b0 authored by Chi Song, committed by GitHub

Doc fix: formats and typo. (#582)

Fix document with formats and typos
parent 0405a426
@@ -67,3 +67,5 @@ typings/
__pycache__
build
*.egg-info
.vscode
@@ -134,4 +134,3 @@ We are in construction of the instruction for [How to Debug](docs/HowToDebug.md)
## **License**

The entire codebase is under [MIT license](https://github.com/Microsoft/nni/blob/master/LICENSE)
@@ -2,27 +2,35 @@
===

## **Installation**

* __Dependencies__
  ```bash
  python >= 3.5
  git
  wget
  ```
  python pip should also be correctly installed. You could use `python3 -m pip -V` to check the pip version on Linux.

  * Note: we don't support virtual environment in current releases.
* __Install NNI through pip__

  ```bash
  python3 -m pip install --user --upgrade nni
  ```
* __Install NNI through source code__

  ```bash
  git clone -b v0.4.1 https://github.com/Microsoft/nni.git
  cd nni
  source install.sh
  ```
## **Quick start: run a customized experiment**

An experiment runs multiple trial jobs; each trial job tries a configuration, which includes a specific neural architecture (or model) and hyper-parameter values. To run an experiment through NNI, you should:

* Provide a runnable trial
@@ -32,22 +40,26 @@ An experiment is to run multiple trial jobs, each trial job tries a configuratio
**Prepare trial**: Let's use a simple trial example, e.g. mnist, provided by NNI. After you install NNI, the NNI examples are put in ~/nni/examples; run `ls ~/nni/examples/trials` to see all the trial examples. You can simply execute the following command to run the NNI mnist example:

```bash
python3 ~/nni/examples/trials/mnist-annotation/mnist.py
```
This command will be filled in the YAML configuration file below. Please refer to [here](howto_1_WriteTrial.md) for how to write your own trial.

**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm, etc. Users can write their own tuner (refer to [here](howto_2_CustomizedTuner.md)), but for simplicity, here we choose a tuner provided by NNI as below:
```yaml
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize
```
*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments passed to the tuner, and *optimize_mode* indicates whether you want to maximize or minimize your trial's result.

**Prepare configure file**: Now that you know which trial code you are going to run and which tuner you are going to use, it is time to prepare the YAML configuration file. NNI provides a demo configuration file for each trial example; run `cat ~/nni/examples/trials/mnist-annotation/config.yml` to see it. Its content is basically shown below:
```yaml
authorName: your_name
experimentName: auto_mnist
```

@@ -73,7 +85,7 @@ trial:

```yaml
  command: python mnist.py
  codeDir: ~/nni/examples/trials/mnist-annotation
  gpuNum: 0
```
Here *useAnnotation* is true because this trial example uses our python annotation (refer to [here](../tools/annotation/README.md) for details). For the trial, we should provide *trialCommand*, which is the command to run the trial, and *trialCodeDir*, where the trial code is located. The command will be executed in this directory. We should also specify how many GPUs a trial requires.
@@ -87,6 +99,7 @@ You can refer to [here](NNICTLDOC.md) for more usage guide of *nnictl* command l
The experiment is now running. NNI provides a WebUI for you to view the experiment progress, control your experiment, and use some other appealing features. The WebUI is opened by default by `nnictl create`.
## Read more

* [Tuners supported in the latest NNI release](./HowToChooseTuner.md)
* [Overview](Overview.md)
* [Installation](Installation.md)
**How to Debug in NNI**
===

*Coming soon*
@@ -6,25 +6,31 @@ Currently we only support installation on Linux & Mac.

## **Installation**

* __Dependencies__
  ```bash
  python >= 3.5
  git
  wget
  ```
  python pip should also be correctly installed. You could use `python3 -m pip -V` to check the pip version.

* __Install NNI through pip__
  ```bash
  python3 -m pip install --user --upgrade nni
  ```
* __Install NNI through source code__

  ```bash
  git clone -b v0.4.1 https://github.com/Microsoft/nni.git
  cd nni
  source install.sh
  ```
* __Install NNI in docker image__

  You can also install NNI in a docker image. Please follow the instructions [here](../deployment/docker/README.md) to build the NNI docker image. The NNI docker image can also be retrieved from Docker Hub through the command `docker pull msranni/nni:latest`.
## **System requirements**

@@ -52,8 +58,8 @@ Below are the minimum system requirements for NNI on macOS. Due to potential pro
|**Internet**|Broadband internet connection|
|**Resolution**|1024 x 768 minimum display resolution|
## Further reading

* [Overview](Overview.md)
* [Use command line tool nnictl](NNICTLDOC.md)
* [Use NNIBoard](WebUI.md)
@@ -43,7 +43,7 @@ kubeflowConfig:
```
If users want to use tf-operator, they could set `ps` and `worker` in the trial config. If users want to use pytorch-operator, they could set `master` and `worker` in the trial config.
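As an illustration only, a tf-operator trial section with `ps` and `worker` roles might look like the sketch below; the field names and values here are assumptions for illustration and should be checked against the full Kubeflow mode config reference, not copied as-is:

```yaml
# Hypothetical sketch of a tf-operator trial section; values are illustrative.
trial:
  codeDir: ~/nni/examples/trials/mnist
  ps:
    replicas: 1
    command: python mnist.py
    gpuNum: 0
    cpuNum: 1
    memoryMB: 8192
    image: msranni/nni:latest
  worker:
    replicas: 2
    command: python mnist.py
    gpuNum: 1
    cpuNum: 1
    memoryMB: 8192
    image: msranni/nni:latest
```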
## Supported storage type

NNI supports NFS and Azure Storage to store the code and output files; users could set the storage type in the config file and set the corresponding config.

The settings for NFS storage are as follows:
```
@@ -197,4 +197,3 @@ Notice: In kubeflow mode, NNIManager will start a rest server and listen on a po
Once a trial job is completed, you can go to NNI WebUI's overview page (like http://localhost:8080/oview) to check the trial's information.

If you have any problems when using NNI in kubeflow mode, please create an issue on the [NNI github repo](https://github.com/Microsoft/nni), or send mail to nni@microsoft.com
nnictl
===

## Introduction

__nnictl__ is a command line tool, which can be used to control experiments, such as starting/stopping/resuming an experiment, starting/stopping NNIBoard, etc.

## Commands

nnictl supports the following commands:
```bash
nnictl create
nnictl stop
nnictl update
```

@@ -19,359 +24,403 @@ nnictl tensorboard

```bash
nnictl top
nnictl --version
```
### Manage an experiment
* __nnictl create__

  * Description

    You can use this command to create a new experiment, using the configuration specified in the config file. After this command is successfully done, the context will be set as this experiment, which means the following commands you issue are associated with this experiment, unless you explicitly change the context (not supported yet).

  * Usage

    ```bash
    nnictl create [OPTIONS]
    ```

    Options:

    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | --config, -c | True | | YAML configuration file of the experiment |
    | --port, -p | False | | the port of the restful server |
* __nnictl resume__

  * Description

    You can use this command to resume a stopped experiment.

  * Usage

    ```bash
    nnictl resume [OPTIONS]
    ```

    Options:

    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | id | False | | The id of the experiment you want to resume |
    | --port, -p | False | | Rest port of the experiment you want to resume |
* __nnictl stop__

  * Description

    You can use this command to stop a running experiment or multiple experiments.

  * Usage

    ```bash
    nnictl stop [id]
    ```

  * Detail

    1. If an id is specified and it matches a running experiment, nnictl will stop the corresponding experiment; otherwise it will print an error message.
    2. If no id is specified and there is an experiment running, nnictl will stop the running experiment; otherwise it will print an error message.
    3. If the id ends with *, nnictl will stop all experiments whose ids match the pattern.
    4. If the id does not exist but matches the prefix of exactly one experiment id, nnictl will stop the matched experiment.
    5. If the id does not exist but matches the prefix of multiple experiment ids, nnictl will print the id information.
    6. Users could use 'nnictl stop all' to stop all experiments.
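The matching rules above can be sketched in Python. This is only an illustrative model of the described behaviour, not NNI's actual implementation, and the experiment ids are made up:

```python
# Illustrative sketch (not NNI source) of the id-matching rules that
# `nnictl stop` applies, per the list above.
def select_experiments_to_stop(arg, running_ids):
    """Return the experiment ids that `nnictl stop <arg>` would stop."""
    if arg == "all":                      # rule 6: stop everything
        return list(running_ids)
    if arg is None:                       # rule 2: no id given
        return list(running_ids) if len(running_ids) == 1 else []
    if arg.endswith("*"):                 # rule 3: trailing * matches a prefix
        prefix = arg[:-1]
        return [i for i in running_ids if i.startswith(prefix)]
    if arg in running_ids:                # rule 1: exact match
        return [arg]
    matched = [i for i in running_ids if i.startswith(arg)]
    if len(matched) == 1:                 # rule 4: unique prefix match
        return matched
    return []                             # rule 5: ambiguous, only report ids
```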
* __nnictl update__

  * __nnictl update searchspace__

    * Description

      You can use this command to update an experiment's search space.

    * Usage

      ```bash
      nnictl update searchspace [OPTIONS]
      ```

      Options:

      | Name, shorthand | Required | Default | Description |
      | ------ | ------ | ------ | ------ |
      | id | False | | ID of the experiment you want to set |
      | --filename, -f | True | | the file storing your new search space |

  * __nnictl update concurrency__

    * Description

      You can use this command to update an experiment's concurrency.

    * Usage

      ```bash
      nnictl update concurrency [OPTIONS]
      ```

      Options:

      | Name, shorthand | Required | Default | Description |
      | ------ | ------ | ------ | ------ |
      | id | False | | ID of the experiment you want to set |
      | --value, -v | True | | the number of allowed concurrent trials |

  * __nnictl update duration__

    * Description

      You can use this command to update an experiment's duration.

    * Usage

      ```bash
      nnictl update duration [OPTIONS]
      ```

      Options:

      | Name, shorthand | Required | Default | Description |
      | ------ | ------ | ------ | ------ |
      | id | False | | ID of the experiment you want to set |
      | --value, -v | True | | the experiment duration will be NUMBER seconds. SUFFIX may be 's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days. |

  * __nnictl update trialnum__

    * Description

      You can use this command to update an experiment's maxtrialnum.

    * Usage

      ```bash
      nnictl update trialnum [OPTIONS]
      ```

      Options:

      | Name, shorthand | Required | Default | Description |
      | ------ | ------ | ------ | ------ |
      | id | False | | ID of the experiment you want to set |
      | --value, -v | True | | the new number of maxtrialnum you want to set |
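The duration format described for `nnictl update duration` (a number with an optional 's'/'m'/'h'/'d' suffix, seconds by default) can be illustrated with a small helper. This is a hypothetical sketch, not part of nnictl:

```python
# Hypothetical helper (not part of nnictl) interpreting the duration
# format described above: NUMBER with optional SUFFIX 's'/'m'/'h'/'d'.
def duration_to_seconds(value: str) -> int:
    units = {"s": 1, "m": 60, "h": 3600, "d": 86400}
    if value and value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)  # no suffix: seconds by default
```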
* __nnictl trial__

  * __nnictl trial ls__

    * Description

      You can use this command to show trial information.

    * Usage

      ```bash
      nnictl trial ls
      ```

      Options:

      | Name, shorthand | Required | Default | Description |
      | ------ | ------ | ------ | ------ |
      | id | False | | ID of the experiment you want to set |

  * __nnictl trial kill__

    * Description

      You can use this command to kill a trial job.

    * Usage

      ```bash
      nnictl trial kill [OPTIONS]
      ```

      Options:

      | Name, shorthand | Required | Default | Description |
      | ------ | ------ | ------ | ------ |
      | id | False | | ID of the experiment you want to set |
      | --trialid, -t | True | | ID of the trial you want to kill |
* __nnictl top__

  * Description

    Monitor all running experiments.

  * Usage

    ```bash
    nnictl top
    ```

    Options:

    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | id | False | | ID of the experiment you want to set |
    | --time, -t | False | | The interval to update the experiment status; the unit is seconds, and the default value is 3 seconds |
### Manage experiment information
* __nnictl experiment show__

  * Description

    Show the information of the experiment.

  * Usage

    ```bash
    nnictl experiment show
    ```

    Options:

    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | id | False | | ID of the experiment you want to set |
* __nnictl experiment status__

  * Description

    Show the status of the experiment.

  * Usage

    ```bash
    nnictl experiment status
    ```

    Options:

    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | id | False | | ID of the experiment you want to set |
* __nnictl experiment list__

  * Description

    Show the information of all the (running) experiments.

  * Usage

    ```bash
    nnictl experiment list
    ```

    Options:

    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | all | False | False | Show all experiments, including stopped ones |
* __nnictl config show__

  * Description

    Display the current context information.

  * Usage

    ```bash
    nnictl config show
    ```
### Manage log
* __nnictl log stdout__

  * Description

    Show the stdout log content.

  * Usage

    ```bash
    nnictl log stdout [options]
    ```

    Options:

    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | id | False | | ID of the experiment you want to set |
    | --head, -h | False | | show head lines of stdout |
    | --tail, -t | False | | show tail lines of stdout |
    | --path, -p | False | | show the path of the stdout file |
* __nnictl log stderr__

  * Description

    Show the stderr log content.

  * Usage

    ```bash
    nnictl log stderr [options]
    ```

    Options:

    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | id | False | | ID of the experiment you want to set |
    | --head, -h | False | | show head lines of stderr |
    | --tail, -t | False | | show tail lines of stderr |
    | --path, -p | False | | show the path of the stderr file |
* __nnictl log trial__

  * Description

    Show the trial log path.

  * Usage

    ```bash
    nnictl log trial [options]
    ```

    Options:

    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | id | False | | the id of the trial |
### Manage webui
* __nnictl webui url__

  * Description

    Show the urls of the experiment.

  * Usage

    ```bash
    nnictl webui url
    ```

    Options:

    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | id | False | | ID of the experiment you want to set |
### Manage tensorboard
* __nnictl tensorboard start__

  * Description

    Start the tensorboard process.

  * Usage

    ```bash
    nnictl tensorboard start
    ```

    Options:

    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | id | False | | ID of the experiment you want to set |
    | --trialid | False | | ID of the trial |
    | --port | False | 6006 | The port of the tensorboard process |

  * Detail

    1. NNICTL supports the tensorboard function on local and remote platforms for the moment; other platforms will be supported later.
    2. If you want to use tensorboard, you need to write your tensorboard log data to the path given by the environment variable [NNI_OUTPUT_DIR].
    3. In local mode, nnictl will set --logdir=[NNI_OUTPUT_DIR] directly and start a tensorboard process.
    4. In remote mode, nnictl will first create an ssh client to copy log data from the remote machine to a local temp directory, and then start a tensorboard process on your local machine. Note that nnictl only copies the log data once when you run the command; if you want to see later tensorboard results, you should execute the nnictl tensorboard command again.
    5. If there is only one trial job, you don't need to set trialid. If there are multiple trial jobs running, you should set the trialid, or you could use [nnictl tensorboard start --trialid all] to map --logdir to all trial log paths.
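Point 2 above can be sketched as follows: the trial resolves its tensorboard log directory from `NNI_OUTPUT_DIR`. This is an illustrative snippet, not NNI code; the fallback path is an assumption so the snippet also runs outside of an NNI trial:

```python
import os

def tensorboard_logdir(default="./logs"):
    """Return the directory where tensorboard logs should be written.

    NNI sets NNI_OUTPUT_DIR in the trial's environment; the `default`
    fallback is only for running this sketch outside of NNI.
    """
    logdir = os.environ.get("NNI_OUTPUT_DIR", default)
    os.makedirs(logdir, exist_ok=True)  # tensorboard writers expect the dir
    return logdir
```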
* __nnictl tensorboard stop__

  * Description

    Stop all of the tensorboard processes.

  * Usage

    ```bash
    nnictl tensorboard stop
    ```

    Options:

    | Name, shorthand | Required | Default | Description |
    | ------ | ------ | ------ | ------ |
    | id | False | | ID of the experiment you want to set |
### Check nni version

* __nnictl --version__

  * Description

    Describe the current version of nni installed.

  * Usage

    ```bash
    nnictl --version
    ```
@@ -22,7 +22,7 @@ After user submits the experiment through a command line tool [nnictl](../tools/

Users can use nnictl and/or a visualized Web UI nniboard to monitor and debug a given experiment.

NNI provides a set of examples in the package to get you familiar with the above process.
## Key Concepts

@@ -46,4 +46,4 @@ NNI provides a set of examples in the package to get you familiar with the above
### **Tutorials**

* [How to run an experiment on local (with multiple GPUs)?](tutorial_1_CR_exp_local_api.md)
* [How to run an experiment on multiple machines?](tutorial_2_RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](PAIMode.md)
**Run an Experiment on OpenPAI**
===

NNI supports running an experiment on [OpenPAI](https://github.com/Microsoft/pai) (aka pai), called pai mode. Before starting to use NNI pai mode, you should have an account to access an [OpenPAI](https://github.com/Microsoft/pai) cluster. See [here](https://github.com/Microsoft/pai#how-to-deploy) if you don't have any OpenPAI account and want to deploy an OpenPAI cluster. In pai mode, your trial program will run in pai's container created by Docker.
## Setup environment

To install NNI, follow the install guide [here](GetStarted.md).

## Run an experiment

Use `examples/trials/mnist-annotation` as an example. The NNI config YAML file's content is as follows:
```
```yaml
authorName: your_name authorName: your_name
experimentName: auto_mnist experimentName: auto_mnist
# how many trials could be concurrently running # how many trials could be concurrently running
paiConfig:
passWord: your_pai_password
host: 10.1.1.1
```
Note: You should set `trainingServicePlatform: pai` in the nni config yaml file if you want to start an experiment in pai mode.
Compared with LocalMode and [RemoteMachineMode](RemoteMachineMode.md), trial configuration in pai mode has five additional keys:
* cpuNum
Once you complete the nni experiment config file and save it (for example, as `exp_pai.yaml`), run the command
```bash
nnictl create --config exp_pai.yaml
```
to start the experiment in pai mode. NNI will create an OpenPAI job for each trial, and the job name format is something like `nni_exp_{experiment_id}_trial_{trial_id}`.
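The job name format above can be illustrated with a short snippet (the concrete id values below are made up; real ids are generated by NNI):

```python
# Hypothetical ids, for illustration only -- real ids are generated by NNI.
experiment_id = "GExpm9Rc"
trial_id = "tOpkx"
job_name = f"nni_exp_{experiment_id}_trial_{trial_id}"
print(job_name)  # nni_exp_GExpm9Rc_trial_tOpkx
```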
You can see the pai jobs created by NNI in your OpenPAI cluster's web portal, like:
![](./img/nni_pai_joblist.jpg)
You can see there are three files in the output folder: stderr, stdout, and trial.log.
If you also want to save your trial's other output into HDFS, like model files, you can use the environment variable `NNI_OUTPUT_DIR` in your trial code to save your own output files, and NNI SDK will copy all the files in `NNI_OUTPUT_DIR` from the trial's container to HDFS.
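A minimal sketch of that pattern follows. The local fallback directory and the file name are made-up illustrations so the snippet also runs outside NNI; in a real trial you would use your framework's own save call:

```python
import os

# NNI sets NNI_OUTPUT_DIR inside the trial's container in pai mode; the
# fallback here only lets this sketch run outside NNI as well.
output_dir = os.environ.get("NNI_OUTPUT_DIR", "./nni_output")
os.makedirs(output_dir, exist_ok=True)

# 'model.txt' is a placeholder name; replace the write below with a real
# model-saving call from your framework.
model_path = os.path.join(output_dir, "model.txt")
with open(model_path, "w") as f:
    f.write("model weights placeholder")
```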
For any problems when using NNI in pai mode, please create an issue on the [NNI github repo](https://github.com/Microsoft/nni), or send mail to nni@microsoft.com
Initial release of Neural Network Intelligence (NNI).
## Known Issues
[Known Issues in release 0.1.0](https://github.com/Microsoft/nni/labels/nni010knownissues).
**Run an Experiment on Multiple Machines**
===
NNI supports running an experiment on multiple machines through the SSH channel, called `remote` mode. NNI assumes that you have access to those machines and have already set up the environment for running deep learning training code.

e.g. three machines, where you log in with account `bob` (note: the account is not necessarily the same on different machines):
| IP | Username| Password |
| -------- |---------|-------|
| 10.1.1.3 | bob | bob123 |
## Setup NNI environment
Install NNI on each of your machines following the install guide [here](GetStarted.md).
For remote machines that are used only to run trials but not nnictl, you can just install the python SDK:
* __Install python SDK through pip__
```bash
python3 -m pip install --user --upgrade nni-sdk
```
## Run an experiment
Install NNI on another machine which has network access to the three machines above, or you can just use any machine above to run the nnictl command line tool.
We use `examples/trials/mnist-annotation` as an example here. Run `cat ~/nni/examples/trials/mnist-annotation/config_remote.yml` to see the detailed configuration file:
```yaml
authorName: default
experimentName: example_mnist
trialConcurrency: 1
machineList:
username: bob
passwd: bob123
```
Simply fill in the `machineList` section and then run:
```bash
nnictl create --config ~/nni/examples/trials/mnist-annotation/config_remote.yml
```
to start the experiment.
```json
{
    "dropout_rate": { "_type": "uniform", "_value": [0.1, 0.5] }
}
```
The example defines `dropout_rate` as a variable whose prior distribution is the uniform distribution, and whose value ranges from `0.1` to `0.5`.
The tuner will sample parameters/architecture by understanding the search space first.
Users should define the name of a variable, its type, and its candidate values.
The candidate types and values for variables are listed here:
Note that SMAC only supports a subset of the types above, including `choice`, `randint`, `uniform`, `loguniform`, `quniform(q=1)`. In the current version, SMAC does not support cascaded search space (i.e., conditional variables in SMAC).
Note that the GridSearch Tuner only supports a subset of the types above, including `choice`, `quniform` and `qloguniform`, where q specifies the number of values that will be sampled. Details about the last two types are as follows:
* Type 'quniform' will receive three values [low, high, q], where [low, high] specifies a range and 'q' specifies the number of values that will be sampled evenly. Note that q should be at least 2. It will be sampled in a way that the first sampled value is 'low', and each of the following values is (high-low)/q larger than the value in front of it.
* Type 'qloguniform' behaves like 'quniform' except that it will first change the range to [log(low), log(high)], sample, and then change the sampled value back.
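The sampling rules described above can be sketched as follows. This is a hedged illustration of the description, not NNI's actual grid search code:

```python
import math

def quniform_grid(low, high, q):
    # q evenly spaced values: the first is low, and each following value
    # is (high - low) / q larger than the one before it.
    step = (high - low) / q
    return [low + i * step for i in range(q)]

def qloguniform_grid(low, high, q):
    # move the range to [log(low), log(high)], sample, then map back
    return [math.exp(v) for v in quniform_grid(math.log(low), math.log(high), q)]

print(quniform_grid(0.0, 1.0, 4))       # [0.0, 0.25, 0.5, 0.75]
print(qloguniform_grid(1.0, 100.0, 2))  # roughly [1.0, 10.0]
```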
**Set up NNI developer environment**
===
## Best practice for debugging NNI source code
To debug NNI source code, your development environment should be an Ubuntu 16.04 (or above) system with python 3 and pip 3 installed; then follow the steps below.
**1. Clone the source code**

Run the command
```
git clone https://github.com/Microsoft/nni.git
```
to clone the source code
**2. Prepare the debug environment and install dependencies**

Change directory to the source code folder, then run the command
```
make install-dependencies
```
to install the dependent tools for the environment
**3. Build source code**

Run the command
```
make build
```
to build the source code
**4. Install NNI to development environment**

Run the command
```
make dev-install
```
to install the distribution content to the development environment and create cli scripts
**5. Check if the environment is ready**

Now, you can try to start an experiment to check if your environment is ready.
For example, run the command
```
nnictl create --config ~/nni/examples/trials/mnist/config.yml
```
Then open the WebUI to check if everything is OK.
**6. Redeploy**
How to start an experiment
===
## 1. Introduction
There are a few steps to start a new experiment with nni; here is the process.
<img src="./img/experiment_process.jpg" width="50%" height="50%" />
## 2. Details
### 2.1 Check environment
1. Check if there is an old experiment running
Check whether the restful server process has started successfully and a response can be received from it.
Call the restful server to set the experiment config before starting an experiment; the experiment config includes the config values in the config yaml file.
### 2.5 Check experiment config
Check the response content of the restful server; if the status code of the response is 200, the config is successfully set.
### 2.6 Start Experiment
Call the restful server process to set up an experiment.
Click the tab "Trials Detail" to see the status of all trials. Specifically:
* Support to search for a specific trial.
* Intermediate Result Graph.
![](./img/intermediate.png)
A **Trial** in NNI is an individual attempt at applying a set of parameters on a model.
To define an NNI trial, you need to first define the set of parameters and then update the model. NNI provides two approaches for you to define a trial: `NNI API` and `NNI Python annotation`.
## NNI API
>Step 1 - Prepare a SearchSpace parameters file.

An example is shown below:
```json
{
"dropout_rate":{"_type":"uniform","_value":[0.1,0.5]},
"conv_size":{"_type":"choice","_value":[2,3,5,7]},
"learning_rate":{"_type":"uniform","_value":[0.0001, 0.1]}
}
```
Refer to [SearchSpaceSpec.md](./SearchSpaceSpec.md) to learn more about search space.
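To make the mapping from a search space to concrete parameters tangible, here is a hedged sketch of how a tuner might randomly sample this space. This is not NNI's actual sampler, and the `hidden_size` entry is omitted for brevity:

```python
import json
import random

search_space = json.loads('''
{
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
    "conv_size": {"_type": "choice", "_value": [2, 3, 5, 7]},
    "learning_rate": {"_type": "uniform", "_value": [0.0001, 0.1]}
}
''')

def sample(space):
    # draw one concrete configuration from the search space
    params = {}
    for name, spec in space.items():
        kind, values = spec["_type"], spec["_value"]
        if kind == "choice":
            params[name] = random.choice(values)
        elif kind == "uniform":
            params[name] = random.uniform(values[0], values[1])
    return params

print(sample(search_space))
```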
>Step 2 - Update model codes
~~~~
2.1 Declare NNI API
Include `import nni` in your trial code to use NNI APIs.
{"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}
2.3 Report NNI results
Use the API:

`nni.report_intermediate_result(accuracy)`

to send `accuracy` to assessor.

Use the API:

`nni.report_final_result(accuracy)`

to send `accuracy` to tuner.
~~~~

**NOTE**:
~~~~
accuracy - The `accuracy` could be any python object, but if you use NNI built-in tuner/assessor, `accuracy` should be a numerical variable (e.g. float, int).
assessor - The assessor will decide which trial should early stop based on the trial's performance history (the intermediate results of one trial).
~~~~

```yaml
useAnnotation: false
searchSpacePath: /path/to/your/search_space.json
```
You can refer to [here](./ExperimentConfig.md) for more information about how to set up experiment configurations.
You can refer to [here](../examples/trials/README.md) for more information about how to write trial code using NNI APIs.
## NNI Python Annotation
An alternative way to write a trial is to use NNI's syntax for python. Like any annotation, NNI annotation works like comments in your code. You don't have to make structural or any other big changes to your existing code. With a few lines of NNI annotation, you will be able to:
* annotate the variables you want to tune
* specify in which range you want to tune the variables
* annotate which variable you want to report as intermediate result to `assessor`
* annotate which variable you want to report as the final result (e.g. model accuracy) to `tuner`.
Again, take MNIST as an example, it only requires 2 steps to write a trial with NNI Annotation.
>>
>>`@nni.report_intermediate_result`/`@nni.report_final_result` will send the data to assessor/tuner at that line.
>>
>>Please refer to [Annotation README](../tools/nni_annotation/README.md) for more information about annotation syntax and its usage.
>Step 2 - Enable NNI Annotation

In the yaml config file, you need to set *useAnnotation* to true to enable NNI annotation:
```yaml
useAnnotation: true
```
## More Trial Example
* [Automatic Model Architecture Search for Reading Comprehension.](../examples/trials/ga_squad/README.md)
So, if users want to implement a customized Tuner, they only need to:
1. Inherit a tuner of a base Tuner class
1. Implement the receive_trial_result and generate_parameters functions
1. Configure your customized tuner in the experiment yaml config file
Here is an example:
**1) Inherit a tuner of a base Tuner class**
```python
from nni.tuner import Tuner

class CustomizedTuner(Tuner):
    ...
```
**2) Implement the receive_trial_result and generate_parameters functions**
```python
from nni.tuner import Tuner
class CustomizedTuner(Tuner):
    def __init__(self, ...):
        ...

    def receive_trial_result(self, parameter_id, parameters, value):
        '''
        Record an observation of the objective function and Train
        ...
        '''
        # your code implements here.
        ...

    def generate_parameters(self, parameter_id):
        '''
        Returns a set of trial (hyper-)parameters, as a serializable object
        ...
        '''
        # your code implements here.
        return your_parameters
```
`receive_trial_result` will receive `parameter_id, parameters, value` as its input. The `value` object the Tuner receives is exactly the same value that the Trial sends.

The `your_parameters` returned from the `generate_parameters` function will be packaged as a json object by NNI SDK. NNI SDK will unpack the json object, so the Trial will receive the exact same `your_parameters` from the Tuner.
For example:
If you implement `generate_parameters` like this:
```python
def generate_parameters(self, parameter_id):
    '''
    ...
    '''
    # your code implements here.
    return {"dropout": 0.3, "learning_rate": 0.4}
```
It means your Tuner will always generate parameters `{"dropout": 0.3, "learning_rate": 0.4}`. Then Trial will receive `{"dropout": 0.3, "learning_rate": 0.4}` by calling API `nni.get_next_parameter()`. Once the trial ends with a result (normally some kind of metric), it can send the result to Tuner by calling API `nni.report_final_result()`, for example `nni.report_final_result(0.93)`. Then your Tuner's `receive_trial_result` function will receive the result like:

```python
parameter_id = 82347
parameters = {"dropout": 0.3, "learning_rate": 0.4}
value = 0.93
```
**Note that** if you want to access a file (e.g., `data.txt`) in the directory of your own tuner, you cannot use `open('data.txt', 'r')`. Instead, you should use the following:
```python
import os

_pwd = os.path.dirname(__file__)
_fd = open(os.path.join(_pwd, 'data.txt'), 'r')
```
This is because your tuner is not executed in the directory of your tuner (i.e., `pwd` is not the directory of your own tuner).
**3) Configure your customized tuner in experiment yaml config file**

NNI needs to locate your customized tuner class and instantiate the class, so you need to specify the location of the customized tuner class and pass literal values as parameters to the \_\_init__ constructor.
```yaml
tuner:
  codeDir: /home/abc/mytuner
```
For more detailed examples, you could see:
> * [evolution-tuner](../src/sdk/pynni/nni/evolution_tuner)
> * [hyperopt-tuner](../src/sdk/pynni/nni/hyperopt_tuner)
> * [evolution-based-customized-tuner](../examples/tuners/ga_customer_tuner)
## Write a more advanced automl algorithm
The information above is usually enough to write a general tuner. However, users may also want more information, for example, intermediate results and trials' state (e.g., the information in assessor), in order to have a more powerful automl algorithm. Therefore, we have another concept called `advisor`, which directly inherits from `MsgDispatcherBase` in [`src/sdk/pynni/nni/msg_dispatcher_base.py`](../src/sdk/pynni/nni/msg_dispatcher_base.py). Please refer to [here](./howto_3_CustomizedAdvisor.md) for how to write a customized advisor.
# **How To** - Customize Your Own Advisor
*Advisor targets the scenario where the automl algorithm wants the methods of both tuner and assessor. Advisor is similar to tuner in that it receives trial parameters requests and final results, and generates trial parameters. Also, it is similar to assessor in that it receives intermediate results and a trial's end state, and can send a trial kill command. Note that, if you use Advisor, tuner and assessor are not allowed to be used at the same time.*
So, if users want to implement a customized Advisor, they only need to:
1. Define an Advisor inheriting from the MsgDispatcherBase class
1. Implement the methods with prefix `handle_` except `handle_request`
1. Configure your customized Advisor in the experiment yaml config file
Here is an example:
**1) Define an Advisor inheriting from the MsgDispatcherBase class**
```python
from nni.msg_dispatcher_base import MsgDispatcherBase
...
```
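A hedged sketch of that shape follows, again with a stand-in base class so it runs without NNI installed. The `handle_` method names and signatures below follow the prefix convention described, but they are illustrative assumptions about the real interface:

```python
class MsgDispatcherBase:
    """Stand-in for nni.msg_dispatcher_base.MsgDispatcherBase, illustration only."""

class CustomizedAdvisor(MsgDispatcherBase):
    # Methods with prefix `handle_` (except handle_request) should be
    # implemented; these names and signatures are illustrative assumptions.
    def handle_initialize(self, data):
        self.search_space = data

    def handle_report_metric_data(self, data):
        self.last_metric = data

    def handle_trial_end(self, data):
        self.ended = True

advisor = CustomizedAdvisor()
advisor.handle_initialize({"dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]}})
advisor.handle_report_metric_data(0.93)
advisor.handle_trial_end({"trial_job_id": "tOpkx"})
```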
In this tutorial, we will use the example in [~/examples/trials/mnist] to explain how to write a trial with the NNI API.
>Before you start

You have an implementation of an MNIST classifier using convolutional layers; the Python code is in `mnist_before.py`.
>Step 1 - Update model codes

To enable NNI API, make the following changes:
~~~~
1.1 Declare NNI API
Include `import nni` in your trial code to use NNI APIs.

1.2 Get predefined parameters
Use the following code snippet:

RECEIVED_PARAMS = nni.get_next_parameter()
~~~~

Let's use a simple trial example, e.g. mnist, provided by NNI. After you install NNI, you can run its trial command:
python ~/nni/examples/trials/mnist-annotation/mnist.py
This command will be filled in the yaml config file below. Please refer to [here](./howto_1_WriteTrial.md) for how to write your own trial.
**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm, etc. Users can write their own tuner (refer to [here](./howto_2_CustomizedTuner.md)), but for simplicity, here we choose a tuner provided by NNI as below:
tuner:
  builtinTunerName: TPE
With all these steps done, we can run the experiment with the following command:
You can refer to [here](NNICTLDOC.md) for more usage guidance of the *nnictl* command line tool.
## View experiment results
The experiment is running now. Other than *nnictl*, NNI also provides a WebUI for you to view experiment progress, to control your experiment, and some other appealing features.
## Using multiple local GPUs to speed up search
The following steps assume that you have 4 NVIDIA GPUs installed locally and [tensorflow with GPU support](https://www.tensorflow.org/install/gpu). The demo enables 4 concurrent trial jobs, and each trial job uses 1 GPU.
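A sketch of the relevant experiment configuration for this setup, assuming the standard NNI YAML fields (`trialConcurrency`, `trial.gpuNum`); paths and values are illustrative:

```yaml
trialConcurrency: 4          # run 4 trial jobs at the same time
trial:
  command: python ~/nni/examples/trials/mnist-annotation/mnist.py
  codeDir: ~/nni/examples/trials/mnist-annotation
  gpuNum: 1                  # each trial job uses 1 GPU
```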
...
...@@ -12,12 +12,12 @@ NNI provides an easy to adopt approach to set up parameter tuning algorithms as
required fields: codeDirectory, classFileName, className and classArgs.
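A hedged sketch of a customized tuner section using the required fields listed above (the path, file, and class names here are hypothetical, and the exact field spelling may vary by NNI version):

```yaml
tuner:
  codeDirectory: /path/to/your/tuner   # hypothetical directory containing your tuner
  classFileName: my_tuner.py           # hypothetical file name
  className: MyTuner                   # hypothetical tuner class
  classArgs:
    optimize_mode: maximize            # arguments passed to the class constructor
```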
### **Learn More about tuners**
* For the detailed definition and usage of the required fields, please refer to [Config an experiment](ExperimentConfig.md)
* [Tuners in the latest NNI release](HowToChooseTuner.md)
* [How to implement your own tuner](howto_2_CustomizedTuner.md)
**Assessor** specifies the algorithm you use to apply an early stopping policy. In NNI, there are two approaches to set the assessor.
1. Directly use an assessor provided by the NNI SDK
required fields: builtinAssessorName and classArgs.
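For example, a built-in assessor can be configured with just the two required fields above; this sketch uses the Medianstop assessor as an illustration (check the assessor names available in your NNI version):

```yaml
assessor:
  builtinAssessorName: Medianstop   # a built-in early-stopping assessor
  classArgs:
    optimize_mode: maximize         # stop trials whose intermediate results look unpromising
```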
...