"profiler/vscode:/vscode.git/clone" did not exist on "b79df7712e1917d0e697fac7b701338af5c08814"
Unverified Commit 1d6db235 authored by fishyds, committed by GitHub

Merge branch 'master' into v0.5

parents 3c1ff6cc efa479b0
@@ -67,3 +67,5 @@ typings/
__pycache__
build
*.egg-info
.vscode
@@ -134,4 +134,3 @@ We are in construction of the instruction for [How to Debug](docs/HowToDebug.md)
## **License**
The entire codebase is under [MIT license](https://github.com/Microsoft/nni/blob/master/LICENSE)
@@ -2,11 +2,14 @@
===
## **Installation**
* __Dependencies__
```bash
python >= 3.5
git
wget
```
Python pip should also be correctly installed. You can use `python3 -m pip -v` to check it on Linux.
@@ -14,15 +17,20 @@
* __Install NNI through pip__
```bash
python3 -m pip install --user --upgrade nni
```
* __Install NNI through source code__
```bash
git clone -b v0.5 https://github.com/Microsoft/nni.git
cd nni
source install.sh
```
## **Quick start: run a customized experiment**
An experiment runs multiple trial jobs; each trial job tries a configuration that includes a specific neural architecture (or model) and hyper-parameter values. To run an experiment through NNI, you should:
* Provide a runnable trial
@@ -32,22 +40,26 @@ An experiment is to run multiple trial jobs, each trial job tries a configuratio
**Prepare trial**: Let's use a simple trial example, e.g. mnist, provided by NNI. After you install NNI, the examples are placed in ~/nni/examples; run `ls ~/nni/examples/trials` to see all the trial examples. You can simply execute the following command to run the NNI mnist example:
```bash
python3 ~/nni/examples/trials/mnist-annotation/mnist.py
```
This command will be filled in the yaml configure file below. Please refer to [here](howto_1_WriteTrial.md) for how to write your own trial.
**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm etc. Users can write their own tuner (refer to [here](howto_2_CustomizedTuner.md)), but for simplicity, here we choose a tuner provided by NNI as below:
```yaml
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize
```
*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments passed to the tuner, and *optimize_mode* indicates whether you want to maximize or minimize your trial's result.
**Prepare configure file**: Since you already know which trial code you are going to run and which tuner you are going to use, it is time to prepare the yaml configure file. NNI provides a demo configure file for each trial example; `cat ~/nni/examples/trials/mnist-annotation/config.yml` to see it. Its content is basically shown below:
```yaml
authorName: your_name
experimentName: auto_mnist
@@ -87,6 +99,7 @@ You can refer to [here](NNICTLDOC.md) for more usage guide of *nnictl* command l
The experiment is now running. NNI provides a WebUI for you to view experiment progress, control your experiment, and use some other appealing features. The WebUI is opened by default by `nnictl create`.
## Read more
* [Tuners supported in the latest NNI release](./HowToChooseTuner.md)
* [Overview](Overview.md)
* [Installation](Installation.md)
@@ -6,21 +6,27 @@ Currently we only support installation on Linux & Mac.
## **Installation**
* __Dependencies__
```bash
python >= 3.5
git
wget
```
Python pip should also be correctly installed. You can use `python3 -m pip -v` to check the pip version.
* __Install NNI through pip__
```bash
python3 -m pip install --user --upgrade nni
```
* __Install NNI through source code__
```bash
git clone -b v0.5 https://github.com/Microsoft/nni.git
cd nni
source install.sh
```
* __Install NNI in docker image__
@@ -52,8 +58,8 @@ Below are the minimum system requirements for NNI on macOS. Due to potential pro
|**Internet**|Broadband internet connection|
|**Resolution**|1024 x 768 minimum display resolution|
## Further reading
* [Overview](Overview.md)
* [Use command line tool nnictl](NNICTLDOC.md)
* [Use NNIBoard](WebUI.md)
@@ -43,7 +43,7 @@ kubeflowConfig:
```
If users want to use tf-operator, they could set `ps` and `worker` in the trial config; if they want to use pytorch-operator, they could set `master` and `worker` in the trial config, as sketched below.
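A rough sketch of what such a trial section could look like for tf-operator (the nested fields shown here are assumptions for illustration, not the authoritative schema; see the full kubeflow-mode reference for the exact field names):
```yaml
trial:
  codeDir: .
  ps:
    replicas: 1
    command: python3 mnist.py
  worker:
    replicas: 2
    command: python3 mnist.py
```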
## Supported storage type
NNI supports NFS and Azure Storage to store the code and output files; users could set the storage type in the config file and fill in the corresponding settings.
The settings for NFS storage are as follows:
```
@@ -197,4 +197,3 @@ Notice: In kubeflow mode, NNIManager will start a rest server and listen on a po
Once a trial job is completed, you can go to NNI WebUI's overview page (like http://localhost:8080/oview) to check the trial's information.
If you have any problems when using NNI in kubeflow mode, please create issues on [NNI github repo](https://github.com/Microsoft/nni), or send mail to nni@microsoft.com
nnictl
===
## Introduction
__nnictl__ is a command line tool, which can be used to control experiments, such as start/stop/resume an experiment, start/stop NNIBoard, etc.
## Commands
nnictl supports the following commands:
```bash
nnictl create
nnictl stop
nnictl update
@@ -19,21 +24,22 @@ nnictl tensorboard
nnictl top
nnictl --version
```
### Manage an experiment
* __nnictl create__
* Description
You can use this command to create a new experiment, using the configuration specified in the config file. After this command completes successfully, the context will be set to this experiment, which means the following commands you issue are associated with this experiment, unless you explicitly change the context (not supported yet).
* Usage
```bash
nnictl create [OPTIONS]
```
Options:
| Name, shorthand | Required | Default | Description |
| ------ | ------ | ------ | ------ |
| --config, -c | True | | yaml configure file of the experiment |
@@ -47,7 +53,10 @@ nnictl --version
* Usage
```bash
nnictl resume [OPTIONS]
```
Options:
| Name, shorthand | Required | Default | Description |
@@ -55,9 +64,6 @@ nnictl --version
| id | False | | The id of the experiment you want to resume |
| --port, -p | False | | Rest port of the experiment you want to resume |
* __nnictl stop__
* Description
@@ -65,16 +71,23 @@ nnictl --version
* Usage
```bash
nnictl stop [id]
```
* Detail
1. If an id is specified and it matches a running experiment, nnictl will stop the corresponding experiment; otherwise it will print an error message.
2. If no id is specified and there is an experiment running, nnictl will stop the running experiment; otherwise it will print an error message.
3. If the id ends with *, nnictl will stop all experiments whose ids match the pattern.
4. If the id does not exist but matches the prefix of an experiment id, nnictl will stop the matched experiment.
5. If the id does not exist but matches the prefixes of multiple experiment ids, nnictl will print the id information.
6. Users could use `nnictl stop all` to stop all experiments (example commands are sketched after this list).
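A minimal sketch of these cases (the experiment id shown is hypothetical):
```bash
nnictl stop              # stop the single running experiment
nnictl stop GjLxUqKb     # stop the experiment whose id (or unique id prefix) matches
nnictl stop GjL*         # stop every experiment whose id matches the pattern
nnictl stop all          # stop all experiments
```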
* __nnictl update__
@@ -85,7 +98,9 @@ nnictl --version
* Usage
```bash
nnictl update searchspace [OPTIONS]
```
Options:
@@ -101,7 +116,9 @@ nnictl --version
* Usage
```bash
nnictl update concurrency [OPTIONS]
```
Options:
@@ -113,11 +130,13 @@ nnictl --version
* __nnictl update duration__
* Description
You can use this command to update an experiment's duration.
* Usage
```bash
nnictl update duration [OPTIONS]
```
Options:
@@ -133,8 +152,9 @@ nnictl --version
* Usage
```bash
nnictl update trialnum [OPTIONS]
```
Options:
| Name, shorthand | Required | Default | Description |
@@ -142,17 +162,19 @@ nnictl --version
| id | False | | ID of the experiment you want to set |
| --value, -v | True | | the new number of maxtrialnum you want to set |
* __nnictl trial__
* __nnictl trial ls__
* Description
You can use this command to show the trials' information.
* Usage
```bash
nnictl trial ls
```
Options:
@@ -164,10 +186,12 @@ nnictl --version
* Description
You can use this command to kill a trial job.
* Usage
```bash
nnictl trial kill [OPTIONS]
```
Options:
| Name, shorthand | Required | Default | Description |
@@ -183,7 +207,9 @@ nnictl --version
* Usage
```bash
nnictl top
```
Options:
@@ -192,17 +218,19 @@ nnictl --version
| id | False | | ID of the experiment you want to set |
| --time, -t | False | | The interval to update the experiment status; the unit is seconds, and the default value is 3 seconds. |
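For example, to refresh the status view every 10 seconds:
```bash
nnictl top --time 10
```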
### Manage experiment information
* __nnictl experiment show__
* Description
Show the information of the experiment.
* Usage
```bash
nnictl experiment show
```
Options:
@@ -210,14 +238,17 @@ nnictl --version
| ------ | ------ | ------ | ------ |
| id | False | | ID of the experiment you want to set |
* __nnictl experiment status__
* Description
Show the status of the experiment.
* Usage
```bash
nnictl experiment status
```
Options:
@@ -225,14 +256,16 @@ nnictl --version
| ------ | ------ | ------ | ------ |
| id | False | | ID of the experiment you want to set |
* __nnictl experiment list__
* Description
Show the information of all the (running) experiments.
* Usage
```bash
nnictl experiment list
```
Options:
@@ -240,8 +273,6 @@ nnictl --version
| ------ | ------ | ------ | ------ |
| all | False | False | Show all of the experiments, including stopped experiments. |
* __nnictl config show__
* Description
@@ -249,18 +280,23 @@ nnictl --version
* Usage
```bash
nnictl config show
```
### Manage log
* __nnictl log stdout__
* Description
Show the stdout log content.
* Usage
```bash
nnictl log stdout [options]
```
Options:
@@ -271,7 +307,6 @@ nnictl --version
| --tail, -t | False | | show tail lines of stdout |
| --path, -p | False | | show the path of stdout file |
* __nnictl log stderr__
* Description
@@ -279,7 +314,9 @@ nnictl --version
* Usage
```bash
nnictl log stderr [options]
```
Options:
@@ -290,7 +327,6 @@ nnictl --version
| --tail, -t | False | | show tail lines of stderr |
| --path, -p | False | | show the path of stderr file |
* __nnictl log trial__
* Description
@@ -298,7 +334,9 @@ nnictl --version
* Usage
```bash
nnictl log trial [options]
```
Options:
@@ -306,16 +344,19 @@ nnictl --version
| ------ | ------ | ------ | ------ |
| id | False | | the id of trial |
### Manage webui
* __nnictl webui url__
* Description
Show the urls of the experiment.
* Usage
```bash
nnictl webui url
```
Options:
@@ -323,16 +364,19 @@ nnictl --version
| ------ | ------ | ------ | ------ |
| id | False | | ID of the experiment you want to set |
### Manage tensorboard
* __nnictl tensorboard start__
* Description
Start the tensorboard process.
* Usage
```bash
nnictl tensorboard start
```
Options:
@@ -357,7 +401,9 @@ nnictl --version
* Usage
```bash
nnictl tensorboard stop
```
Options:
@@ -368,10 +414,13 @@ nnictl --version
### Check nni version
* __nnictl --version__
* Description
Describe the current version of nni installed.
* Usage
```bash
nnictl --version
```
@@ -22,7 +22,7 @@ After user submits the experiment through a command line tool [nnictl](../tools/
Users can use nnictl and/or the visualized Web UI nniboard to monitor and debug a given experiment.
NNI provides a set of examples in the package to get you familiar with the above process.
## Key Concepts
@@ -7,7 +7,8 @@ Install NNI, follow the install guide [here](GetStarted.md).
## Run an experiment
Use `examples/trials/mnist-annotation` as an example. The nni config yaml file's content is as follows:
```yaml
authorName: your_name
experimentName: auto_mnist
# how many trials could be concurrently running
@@ -39,6 +40,7 @@ paiConfig:
passWord: your_pai_password
host: 10.1.1.1
```
Note: You should set `trainingServicePlatform: pai` in the nni config yaml file if you want to start the experiment in pai mode.
Compared with LocalMode and [RemoteMachineMode](RemoteMachineMode.md), the trial configuration in pai mode has five additional keys:
@@ -58,7 +60,7 @@ Once complete to fill nni experiment config file and save (for example, save as
```
nnictl create --config exp_pai.yaml
```
to start the experiment in pai mode. NNI will create an OpenPAI job for each trial, and the job name format is something like `nni_exp_{experiment_id}_trial_{trial_id}`.
You can see the pai jobs created by NNI in your OpenPAI cluster's web portal, like:
![](./img/nni_pai_joblist.jpg)
@@ -77,4 +79,3 @@ You can see there're three fils in output folder: stderr, stdout, and trial.log
If you also want to save the trial's other output into HDFS, like model files, you can use the environment variable `NNI_OUTPUT_DIR` in your trial code to save your own output files, and the NNI SDK will copy all the files in `NNI_OUTPUT_DIR` from the trial's container to HDFS.
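A minimal sketch of this in trial code (the file name written here is illustrative):
```python
import os

# NNI sets NNI_OUTPUT_DIR for each trial; files written here are copied to HDFS.
output_dir = os.environ.get('NNI_OUTPUT_DIR', '.')
with open(os.path.join(output_dir, 'extra_output.txt'), 'w') as f:
    f.write('anything written here ends up in HDFS alongside stderr, stdout and trial.log')
```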
If you have any problems when using NNI in pai mode, please create issues on [NNI github repo](https://github.com/Microsoft/nni), or send mail to nni@microsoft.com
**Run an Experiment on Multiple Machines**
===
NNI supports running an experiment on multiple machines through the SSH channel, called `remote` mode. NNI assumes that you have access to those machines and have already set up the environment for running deep learning training code.
e.g. three machines that you log in to with the account `bob` (Note: the account is not necessarily the same on different machines):
@@ -11,19 +13,24 @@ e.g. Three machines and you login in with account `bob` (Note: the account is no
| 10.1.1.3 | bob | bob123 |
## Setup NNI environment
Install NNI on each of your machines following the install guide [here](GetStarted.md).
For remote machines that are used only to run trials but not the nnictl, you can just install the python SDK:
* __Install python SDK through pip__
```bash
python3 -m pip install --user --upgrade nni-sdk
```
## Run an experiment
Install NNI on another machine which has network access to the three machines above, or just use any machine above to run the nnictl command line tool.
We use `examples/trials/mnist-annotation` as an example here. `cat ~/nni/examples/trials/mnist-annotation/config_remote.yml` to see the detailed configuration file:
```yaml
authorName: default
experimentName: example_mnist
trialConcurrency: 1
@@ -58,8 +65,11 @@ machineList:
username: bob
passwd: bob123
```
Simply fill in the `machineList` section and then run:
```bash
nnictl create --config ~/nni/examples/trials/mnist-annotation/config_remote.yml
```
to start the experiment.
@@ -15,7 +15,7 @@
```
The example defines `dropout_rate` as a variable whose prior distribution is the uniform distribution, with values ranging from `0.1` to `0.5`.
The tuner will sample parameters/architecture by understanding the search space first.
The user should define the name, type and candidate values of each variable.
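A sketch of such a definition, in the search-space JSON format used elsewhere in these docs:
```json
{
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]}
}
```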
@@ -69,6 +69,6 @@ The candidate type and value for variable is here:
Note that SMAC only supports a subset of the types above, including `choice`, `randint`, `uniform`, `loguniform`, `quniform(q=1)`. In the current version, SMAC does not support cascaded search space (i.e., conditional variables in SMAC).
Note that the GridSearch Tuner only supports a subset of the types above, including `choice`, `quniform` and `qloguniform`, where q here specifies the number of values that will be sampled. Details about the last two types are as follows:
* Type 'quniform' will receive three values [low, high, q], where [low, high] specifies a range and 'q' specifies the number of values that will be sampled evenly. Note that q should be at least 2. It will be sampled in a way that the first sampled value is 'low', and each of the following values is (high-low)/q larger than the value before it.
* Type 'qloguniform' behaves like 'quniform' except that it will first change the range to [log(low), log(high)], sample, and then change the sampled value back.
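A sketch of how these two types appear in a search-space file (the parameter names are illustrative); under the sampling rule described above, the `quniform` entry below would yield five evenly spaced values starting at 100:
```json
{
    "hidden_size": {"_type": "quniform", "_value": [100, 500, 5]},
    "learning_rate": {"_type": "qloguniform", "_value": [0.0001, 0.1, 4]}
}
```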
**Set up NNI developer environment**
===
## Best practice for debugging NNI source code
To debug NNI source code, your development environment should be an Ubuntu 16.04 (or above) system with Python 3 and pip 3 installed; then follow the steps below.
@@ -7,42 +9,52 @@ For debugging NNI source code, your development environment should be under Ubun
**1. Clone the source code**
Run the command
```
git clone https://github.com/Microsoft/nni.git
```
to clone the source code
**2. Prepare the debug environment and install dependencies**
Change directory to the source code folder, then run the command
```
make install-dependencies
```
to install the dependent tools for the environment
**3. Build source code**
Run the command
```
make build
```
to build the source code
**4. Install NNI to development environment**
Run the command
```
make dev-install
```
to install the distribution content to the development environment, and create cli scripts
**5. Check if the environment is ready**
Now, you can try to start an experiment to check if your environment is ready.
For example, run the command
```
nnictl create --config ~/nni/examples/trials/mnist/config.yml
```
And open WebUI to check if everything is OK
**6. Redeploy**
@@ -2,7 +2,9 @@ How to start an experiment
===
## 1. Introduction
There are a few steps to start a new experiment of nni; here is the process.
<img src="./img/experiment_process.jpg" width="50%" height="50%" />
## 2. Details
### 2.1 Check environment
1. Check if there is an old experiment running
@@ -20,7 +22,7 @@ Check whether restful server process is successfully started and could get a res
Call the restful server to set the experiment config before starting an experiment; the experiment config includes the config values in the config yaml file.
### 2.5 Check experiment config
Check the response content of the restful server; if the status code of the response is 200, the config is successfully set.
### 2.6 Start Experiment
Call the restful server process to set up an experiment.
@@ -6,10 +6,12 @@ A **Trial** in NNI is an individual attempt at applying a set of parameters on a
To define an NNI trial, you need to first define the set of parameters and then update the model. NNI provides two approaches for you to define a trial: `NNI API` and `NNI Python annotation`.
## NNI API
>Step 1 - Prepare a SearchSpace parameters file.
An example is shown below:
```json
{
    "dropout_rate":{"_type":"uniform","_value":[0.1,0.5]},
    "conv_size":{"_type":"choice","_value":[2,3,5,7]},
@@ -17,9 +19,11 @@ An example is shown below:
    "learning_rate":{"_type":"uniform","_value":[0.0001, 0.1]}
}
```
Refer to [SearchSpaceSpec.md](./SearchSpaceSpec.md) to learn more about search space.
>Step 2 - Update model codes
~~~~
2.1 Declare NNI API
Include `import nni` in your trial code to use NNI APIs.
@@ -48,6 +52,7 @@ Refer to [SearchSpaceSpec.md](SearchSpaceSpec.md) to learn more about search spa
~~~~
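A rough, self-contained sketch of a trial using the NNI APIs referenced in these docs (`nni.get_next_parameter()` and `nni.report_final_result()`); the `train` helper and its return value are illustrative, not part of NNI:
```python
import nni

def train(params):
    """Illustrative placeholder: build and train a model with the given hyper-parameters."""
    return 0.9  # pretend this is the resulting accuracy

if __name__ == '__main__':
    params = nni.get_next_parameter()   # fetch a set of hyper-parameters from the tuner
    accuracy = train(params)            # run your training code with them
    nni.report_final_result(accuracy)   # report the final metric back to NNI
```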
**NOTE**:
~~~~
accuracy - The `accuracy` could be any python object, but if you use NNI built-in tuner/assessor, `accuracy` should be a numerical variable (e.g. float, int).
assessor - The assessor will decide which trial should stop early based on the historical performance of the trial (the intermediate results of one trial).
@@ -63,11 +68,12 @@ useAnnotation: false
searchSpacePath: /path/to/your/search_space.json
```
You can refer to [here](./ExperimentConfig.md) for more information about how to set up experiment configurations.
You can refer to [here](../examples/trials/README.md) for more information about how to write trial code using NNI APIs.
## NNI Python Annotation
An alternative way to write a trial is to use NNI's syntax for Python. As simple as any annotation, NNI annotation works like comments in your code. You don't have to make structural or any other big changes to your existing code. With a few lines of NNI annotation, you will be able to:
* annotate the variables you want to tune
* specify the range in which you want to tune the variables
@@ -118,9 +124,11 @@ with tf.Session() as sess:
>Step 2 - Enable NNI Annotation
In the yaml configure file, you need to set *useAnnotation* to true to enable NNI annotation:
```yaml
useAnnotation: true
```
## More Trial Examples
* [Automatic Model Architecture Search for Reading Comprehension.](../examples/trials/ga_squad/README.md)
@@ -4,13 +4,14 @@
So, if a user wants to implement a customized Tuner, she/he only needs to:
1. Inherit from the base Tuner class
1. Implement the receive_trial_result and generate_parameters functions
1. Configure your customized tuner in the experiment yaml config file
Here is an example:
**1) Inherit from the base Tuner class**
```python
from nni.tuner import Tuner
@@ -20,6 +21,7 @@ class CustomizedTuner(Tuner):
```
**2) Implement the receive_trial_result and generate_parameters functions**
```python
from nni.tuner import Tuner
@@ -46,12 +48,14 @@ class CustomizedTuner(Tuner):
return your_parameters
...
```
`receive_trial_result` receives `parameter_id, parameters, value` as input. The `value` object the Tuner receives is exactly the same value that the Trial sent.
The `your_parameters` returned from the `generate_parameters` function will be packaged as a json object by the NNI SDK. The NNI SDK will unpack the json object so the Trial will receive the exact same `your_parameters` from the Tuner.
For example:
If you implement `generate_parameters` like this:
```python
def generate_parameters(self, parameter_id):
'''
@@ -61,23 +65,28 @@ If the you implement the ```generate_parameters``` like this:
# your code implements here.
return {"dropout": 0.3, "learning_rate": 0.4}
```
It means your Tuner will always generate parameters `{"dropout": 0.3, "learning_rate": 0.4}`. Then the Trial will receive `{"dropout": 0.3, "learning_rate": 0.4}` by calling the API `nni.get_next_parameter()`. Once the trial ends with a result (normally some kind of metric), it can send the result to the Tuner by calling the API `nni.report_final_result()`, for example `nni.report_final_result(0.93)`. Then your Tuner's `receive_trial_result` function will receive the result like:
```python
parameter_id = 82347
parameters = {"dropout": 0.3, "learning_rate": 0.4}
value = 0.93
```
**Note that** if you want to access a file (e.g., `data.txt`) in the directory of your own tuner, you cannot use `open('data.txt', 'r')`. Instead, you should use the following:
```python
_pwd = os.path.dirname(__file__)
_fd = open(os.path.join(_pwd, 'data.txt'), 'r')
```
This is because your tuner is not executed in the directory of your tuner (i.e., `pwd` is not the directory of your own tuner).
**3) Configure your customized tuner in experiment yaml config file**
NNI needs to locate your customized tuner class and instantiate the class, so you need to specify the location of the customized tuner class and pass literal values as parameters to the \_\_init__ constructor.
```yaml
tuner:
  codeDir: /home/abc/mytuner
@@ -90,9 +99,11 @@ tuner:
```
For more detailed examples, you could see:
> * [evolution-tuner](../src/sdk/pynni/nni/evolution_tuner)
> * [hyperopt-tuner](../src/sdk/pynni/nni/hyperopt_tuner)
> * [evolution-based-customized-tuner](../examples/tuners/ga_customer_tuner)
## Write a more advanced automl algorithm
The information above is usually enough to write a general tuner. However, users may also want more information, for example, intermediate results and trials' states (e.g., the information in the assessor), in order to have a more powerful automl algorithm. Therefore, we have another concept called `advisor`, which directly inherits from `MsgDispatcherBase` in [`src/sdk/pynni/nni/msg_dispatcher_base.py`](../src/sdk/pynni/nni/msg_dispatcher_base.py). Please refer to [here](./howto_3_CustomizedAdvisor.md) for how to write a customized advisor.
# **How To** - Customize Your Own Advisor
*Advisor targets the scenario where the automl algorithm wants the methods of both tuner and assessor. Advisor is similar to the tuner in that it receives trial parameters requests and final results, and generates trial parameters. It is also similar to the assessor in that it receives intermediate results and trials' end states, and can send trial kill commands. Note that if you use Advisor, tuner and assessor are not allowed to be used at the same time.*
So, if a user wants to implement a customized Advisor, she/he only needs to:
1. Define an Advisor inheriting from the MsgDispatcherBase class
1. Implement the methods with prefix `handle_` except `handle_request`
1. Configure your customized Advisor in the experiment yaml config file
Here is an example:
**1) Define an Advisor inheriting from the MsgDispatcherBase class**
```python
from nni.msg_dispatcher_base import MsgDispatcherBase
@@ -83,9 +83,9 @@ Let's use a simple trial example, e.g. mnist, provided by NNI. After you install
python ~/nni/examples/trials/mnist-annotation/mnist.py
This command will be filled in the yaml configure file below. Please refer to [here](./howto_1_WriteTrial.md) for how to write your own trial.
**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm etc. Users can write their own tuner (refer to [here](./howto_2_CustomizedTuner.md)), but for simplicity, here we choose a tuner provided by NNI as below:
tuner:
  builtinTunerName: TPE
@@ -133,7 +133,7 @@ With all these steps done, we can run the experiment with the following command:
You can refer to [here](NNICTLDOC.md) for more usage guidance on the *nnictl* command line tool.
## View experiment results
The experiment is now running. Other than *nnictl*, NNI also provides a WebUI for you to view experiment progress, control your experiment, and use some other appealing features.
## Using multiple local GPUs to speed up search
The following steps assume that you have 4 NVIDIA GPUs installed locally and [tensorflow with GPU support](https://www.tensorflow.org/install/gpu). The demo enables 4 concurrent trial jobs and each trial job uses 1 GPU.
@@ -12,12 +12,12 @@ NNI provides an easy to adopt approach to set up parameter tuning algorithms as
required fields: codeDirectory, classFileName, className and classArgs.
### **Learn More about tuners**
* For detailed definition and usage of the required fields, please refer to [Config an experiment](ExperimentConfig.md)
* [Tuners in the latest NNI release](HowToChooseTuner.md)
* [How to implement your own tuner](howto_2_CustomizedTuner.md)
**Assessor** specifies the algorithm you use to apply the early stopping policy. In NNI, there are two approaches to set the assessor.
1. Directly use an assessor provided by the nni sdk
required fields: builtinAssessorName and classArgs.
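A sketch of such a configuration (the builtin assessor name and its classArgs shown here are only an example; check the current NNI release for the available builtin assessors and their arguments):
```yaml
assessor:
  builtinAssessorName: Medianstop
  classArgs:
    optimize_mode: maximize
```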
@@ -35,7 +35,7 @@ class CustomizedAssessor(Assessor):
```python
import argparse
import CustomizedAssessor
def main():
parser = argparse.ArgumentParser(description='parse command line parameters.')
@@ -49,9 +49,9 @@ def main():
main()
```
Please note, in 2), that the object `trial_history` is exactly the object that the Trial sends to the Assessor by using the SDK `report_intermediate_result` function.
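A minimal sketch of the trial side of this (the `train_one_epoch` helper is illustrative, not part of NNI):
```python
import nni

def train_one_epoch():
    return 0.9  # illustrative placeholder for this epoch's accuracy

# Each reported intermediate result is appended to the history that the
# Assessor later receives as `trial_history`.
for epoch in range(10):
    nni.report_intermediate_result(train_one_epoch())
```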
Also, users could override the `run` function in the Assessor to control the process logic.
For a more detailed example, you could see:
> * [Base-Assessor](https://msrasrg.visualstudio.com/NeuralNetworkIntelligenceOpenSource/_git/Default?_a=contents&path=%2Fsrc%2Fsdk%2Fpynni%2Fnni%2Fassessor.py&version=GBadd_readme)