Doc refactor (#258)
NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning experiments.
The tool dispatches and runs trial jobs generated by tuning algorithms to search for the best neural architecture and/or hyper-parameters in different environments (e.g. local machine, remote servers and cloud).
<p align="center">
<img src="./docs/img/nni_arch_overview.png" alt="drawing" width="800"/>
</p>
## **Who should consider using NNI**
* As a researcher and data scientist, you want to implement your own AutoML algorithms and compare with other algorithms * As a researcher and data scientist, you want to implement your own AutoML algorithms and compare with other algorithms
* As an ML platform owner, you want to support AutoML in your platform
## **Install & Verify**
**Prerequisites**
* Linux (Ubuntu 16.04 or newer has been well tested)
* python >= 3.5
* git, wget
**pip install**
* We only support Linux at the current stage; Ubuntu 16.04 or higher is tested and supported. Simply run the following `pip install` in an environment that has `python >= 3.5`, `git` and `wget`.
```
python3 -m pip install -v --user git+https://github.com/Microsoft/nni.git@v0.2
source ~/.bashrc
```
**verify install**
* The following example is an experiment built on TensorFlow. Make sure you have TensorFlow installed before running it.
Try it out:
```bash
nnictl create --config ~/nni/examples/trials/mnist/config.yml
```
* In the command terminal, wait for the message `Info: Start experiment success!`, which indicates your experiment has been successfully started. You can then explore the experiment using the `Web UI url`.
```diff
Info: Checking experiment...
...
Info: Starting experiment...
Info: Checking web ui...
Info: Starting web ui...
Info: Starting web ui success!
+ Info: Web UI url: http://127.0.0.1:8080 http://10.172.141.6:8080
+ Info: Start experiment success! The experiment id is LrNK4hae, and the restful server post is 51188.
```
## **Documentation**
* [Overview](docs/Overview.md)
* [Get started](docs/GetStarted.md)
## **How to**
* [Installation](docs/InstallNNI_Ubuntu.md)
* [Use command line tool nnictl](docs/NNICTLDOC.md)
* [Use NNIBoard](docs/WebUI.md)
* [Define search space](docs/SearchSpaceSpec.md)
* [Use NNI sdk] - *coming soon*
* [Config an experiment](docs/ExperimentConfig.md)
* [Use annotation] - *coming soon*
* [Debug](docs/HowToDebug.md)
## **Tutorials**
* [How to run an experiment on local (with multiple GPUs)?](docs/tutorial_1_CR_exp_local_api.md)
* [How to run an experiment on multiple machines?](docs/tutorial_2_RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](docs/PAIMode.md)
* [Try different tuners and assessors] - *coming soon*
* [How to run an experiment on K8S services?] - *coming soon*
* [Implement a customized tuner] - *coming soon*
* [Implement a customized assessor] - *coming soon*
* [Implement a customized weight sharing algorithm] - *coming soon*
* [How to integrate NNI with your own customized training service] - *coming soon*
### **Best practice**
* [Compare different AutoML algorithms] - *coming soon*
* [Serve NNI as a capability of a ML Platform] - *coming soon*
## **Contribute**
This project welcomes contributions and suggestions. We are working on the contribution guidelines; stay tuned =).
We use [GitHub issues](https://github.com/Microsoft/nni/issues) for tracking requests and bugs.
## **License**
The entire codebase is under [MIT license](https://github.com/Microsoft/nni/blob/master/LICENSE)
theme: jekyll-theme-dinky
# Introduction
For a good user experience and to reduce user effort, we need to design a good annotation grammar.
With the NNI annotation, users only need to:
1. Annotate a variable in code as:
`@nni.variable(nni.choice(2,3,5,7),name=self.conv_size)`
2. Annotate an intermediate result in code as:
`@nni.report_intermediate_result(test_acc)`
3. Annotate the output in code as:
`@nni.report_final_result(test_acc)`
4. Annotate `function_choice` in code as:
`@nni.function_choice(max_pool(h_conv1, self.pool_size),avg_pool(h_conv1, self.pool_size),name=max_pool)`
In this way, they can easily realize automatic tuning on NNI.
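For illustration, the annotations might be embedded in a trial script like the sketch below. Because the annotations are plain string literals, the script still runs unchanged as ordinary Python when NNI is absent, with the assigned values acting as defaults (the training loop here is a hypothetical stand-in, not real model code):

```python
# Sketch of an annotated trial script. The annotations are string literals,
# so the file runs as ordinary Python without NNI; under NNI, the annotated
# assignments are rewritten with tuned values.

"""@nni.variable(nni.choice(2, 3, 5, 7), name=conv_size)"""
conv_size = 5          # default used when NNI does not rewrite it

"""@nni.variable(nni.uniform(0.0001, 0.1), name=learning_rate)"""
learning_rate = 0.001  # default learning rate

test_acc = 0.0
for epoch in range(3):
    test_acc += 0.3    # stand-in for real training and evaluation

"""@nni.report_intermediate_result(test_acc)"""
"""@nni.report_final_result(test_acc)"""
```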
For `@nni.variable`, `nni.choice` is one type of search space; there are 10 types to express your search space, as follows:
1. `@nni.variable(nni.choice(option1,option2,...,optionN),name=variable)`
   The variable value is one of the options, which should be a list. The elements of options can themselves be stochastic expressions.
2. `@nni.variable(nni.randint(upper),name=variable)`
   The variable value is a random integer in the range [0, upper).
3. `@nni.variable(nni.uniform(low, high),name=variable)`
   The variable value is uniformly distributed between low and high.
4. `@nni.variable(nni.quniform(low, high, q),name=variable)`
   The variable value is round(uniform(low, high) / q) * q.
5. `@nni.variable(nni.loguniform(low, high),name=variable)`
   The variable value is drawn according to exp(uniform(low, high)), so that the logarithm of the value is uniformly distributed.
6. `@nni.variable(nni.qloguniform(low, high, q),name=variable)`
   The variable value is round(exp(uniform(low, high)) / q) * q.
7. `@nni.variable(nni.normal(label, mu, sigma),name=variable)`
   The variable value is a real value normally distributed with mean mu and standard deviation sigma.
8. `@nni.variable(nni.qnormal(label, mu, sigma, q),name=variable)`
   The variable value is round(normal(mu, sigma) / q) * q.
9. `@nni.variable(nni.lognormal(label, mu, sigma),name=variable)`
   The variable value is drawn according to exp(normal(mu, sigma)).
10. `@nni.variable(nni.qlognormal(label, mu, sigma, q),name=variable)`
   The variable value is round(exp(normal(mu, sigma)) / q) * q.
**Prepare trial**: Let's use a simple trial example, e.g. mnist, provided by NNI. After you install NNI, the examples are placed in ~/nni/examples; run `ls ~/nni/examples/trials` to see all the trial examples. You can simply execute the following command to run the NNI mnist example:
    python3 ~/nni/examples/trials/mnist-annotation/mnist.py
This command will be filled into the yaml configuration file below. Please refer to [here]() for how to write your own trial.
The experiment is now running. NNI provides a WebUI for you to view experiment progress, control your experiment, and use some other appealing features. The WebUI is opened by default by `nnictl create`.
## Further reading
* [Overview](Overview.md)
* [Installation](InstallNNI_Ubuntu.md)
* [Use command line tool nnictl](NNICTLDOC.md)
* [Use NNIBoard](WebUI.md)
* [Define search space](SearchSpaceSpec.md)
* [Config an experiment](ExperimentConfig.md)
* [How to run an experiment on local (with multiple GPUs)?](tutorial_1_CR_exp_local_api.md)
* [How to run an experiment on multiple machines?](tutorial_2_RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](PAIMode.md)
**How to Debug in NNI**
===
*Coming soon*
**Install NNI on Ubuntu**
===
## **Installation**
* __Dependencies__
python >= 3.5
git
wget
python pip should also be correctly installed. You can use `which pip` or `pip -V` to check in Linux.
* Note: we don't support virtual environment in current releases.
* __Install NNI through pip__
pip3 install -v --user git+https://github.com/Microsoft/nni.git@v0.1
source ~/.bashrc
* __Install NNI through source code__
git clone -b v0.1 https://github.com/Microsoft/nni.git
cd nni
chmod +x install.sh
source install.sh
## Further reading
* [Overview](Overview.md)
* [Use command line tool nnictl](NNICTLDOC.md)
* [Use NNIBoard](WebUI.md)
* [Define search space](SearchSpaceSpec.md)
* [Config an experiment](ExperimentConfig.md)
* [How to run an experiment on local (with multiple GPUs)?](tutorial_1_CR_exp_local_api.md)
* [How to run an experiment on multiple machines?](tutorial_2_RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](PAIMode.md)
# NNI Overview
NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning experiments. For each experiment, users only need to define a search space and update a few lines of code, and then leverage NNI's built-in algorithms and training services to search for the best hyper-parameters and/or neural architecture.
<p align="center">
<img src="./img/3_steps.jpg" alt="drawing"/>
</p>
After a user submits the experiment through the command line tool [nnictl](../tools/README.md), a daemon process (NNI manager) takes care of the search process. The NNI manager continuously receives search settings generated by tuning algorithms, then asks the training service component to dispatch and run trial jobs in a targeted training environment (e.g. local machine, remote servers and cloud). The results of trial jobs, such as model accuracy, are sent back to the tuning algorithms to generate more meaningful search settings. The NNI manager stops the search process after it finds the best models.
## Architecture Overview
<p align="center">
<img src="./img/nni_arch_overview.png" alt="drawing"/>
</p>
Users can use nnictl and/or the visualized Web UI (nniboard) to monitor and debug a given experiment.
<p align="center">
<img src="./img/overview.jpg" alt="drawing"/>
</p>
NNI provides a set of examples in the package to get you familiar with the above process. In the following example [/examples/trials/mnist], we have already set up the configuration and updated the training code for you. You can directly run the following command to start an experiment.
## Key Concepts
### **Experiment**
**Experiment** in NNI is a method for testing different assumptions (hypotheses) by Trials under conditions constructed and controlled by NNI. During the experiment, one or more conditions are allowed to change in an organized manner, and the effects of these changes on associated conditions are measured.
### **Trial**
**Trial** in NNI is an individual attempt at applying a set of parameters on a model.
### **Tuner**
**Tuner** in NNI is an implementation of Tuner API for a special tuning algorithm. [Read more about the Tuners supported in the latest NNI release](../src/sdk/pynni/nni/README.md)
### **Assessor**
**Assessor** in NNI is an implementation of the Assessor API for optimizing the execution of the experiment.
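As an illustrative sketch of the Tuner concept, its two core operations are proposing parameters and receiving trial results. The stand-alone class below only mimics the shape of the Tuner API with a naive random-search strategy; it is not the NNI implementation, and only the `choice` search-space type is handled:

```python
import random

class RandomTuner:
    """Minimal stand-in for an NNI Tuner: proposes hyper-parameter sets
    and records the final results reported by trials."""

    def __init__(self, search_space):
        self.search_space = search_space
        self.results = {}

    def generate_parameters(self, parameter_id):
        # Sample one value per hyper-parameter (only 'choice' handled here)
        return {name: random.choice(spec["_value"])
                for name, spec in self.search_space.items()}

    def receive_trial_result(self, parameter_id, parameters, value):
        # Record the final result so a smarter tuner could learn from it
        self.results[parameter_id] = (parameters, value)

space = {"conv_size": {"_type": "choice", "_value": [2, 3, 5, 7]}}
tuner = RandomTuner(space)
params = tuner.generate_parameters(0)   # tuner proposes a configuration
tuner.receive_trial_result(0, params, value=0.95)  # trial reports back
```

A real tuner would use `receive_trial_result` to bias future `generate_parameters` calls toward promising regions, which is exactly what algorithms such as TPE do.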
## Learn More
* [Get started](GetStarted.md)
### **How to**
* [Installation](InstallNNI_Ubuntu.md)
* [Use command line tool nnictl](NNICTLDOC.md)
* [Use NNIBoard](WebUI.md)
* [Define search space](SearchSpaceSpec.md)
* [Use NNI sdk] - *coming soon*
* [Config an experiment](ExperimentConfig.md)
* [Use annotation](AnnotationSpec.md)
* [Debug](HowToDebug.md)
### **Tutorials**
* [How to run an experiment on local (with multiple GPUs)?](tutorial_1_CR_exp_local_api.md)
* [How to run an experiment on multiple machines?](tutorial_2_RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](PAIMode.md)
* [Try different tuners and assessors] - *coming soon*
* [How to run an experiment on K8S services?] - *coming soon*
* [Implement a customized tuner] - *coming soon*
* [Implement a customized assessor] - *coming soon*
* [Implement a customized weight sharing algorithm] - *coming soon*
* [How to integrate NNI with your own customized training service] - *coming soon*
### **Best practice**
* [Compare different AutoML algorithms] - *coming soon*
* [Serve NNI as a capability of a ML Platform] - *coming soon*
**Tutorial: Create and Run an Experiment on local with NNI API**
===
In this tutorial, we will use the example in [~/examples/trials/mnist] to explain how to create and run an experiment on local with NNI API.
>Before you start
You have an implementation of an MNIST classifier using convolutional layers; the Python code is in `mnist_before.py`.
>Step 1 - Update model codes
To enable NNI API, make the following changes:
~~~~
1.1 Declare NNI API
Include `import nni` in your trial code to use NNI APIs.
1.2 Get predefined parameters
Use the following code snippet:
RECEIVED_PARAMS = nni.get_parameters()
to get hyper-parameters' values assigned by tuner. `RECEIVED_PARAMS` is an object, for example:
{"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}
1.3 Report NNI results
Use the API:
`nni.report_intermediate_result(accuracy)`
to send `accuracy` to assessor.
Use the API:
`nni.report_final_result(accuracy)`
to send `accuracy` to tuner.
~~~~
We have made the changes and saved them to `mnist.py`.
**NOTE**:
~~~~
accuracy - The `accuracy` could be any python object, but if you use NNI built-in tuner/assessor, `accuracy` should be a numerical variable (e.g. float, int).
assessor - The assessor will decide which trial should early stop based on the history performance of trial (intermediate result of one trial).
tuner - The tuner will generate next parameters/architecture based on the explore history (final result of all trials).
~~~~
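Putting steps 1.1 through 1.3 together, a minimal trial might look like the sketch below. In a real trial you would `import nni`; here a tiny stub with the same call names stands in so the sketch runs outside an NNI experiment, and `train_and_evaluate` is a hypothetical placeholder for your own model code:

```python
# Sketch of an API-mode trial combining steps 1.1-1.3. The `nni` class is a
# stub for illustration only; a real trial uses `import nni` instead.
class nni:
    @staticmethod
    def get_parameters():
        # In a real run, the tuner assigns these values
        return {"conv_size": 2, "hidden_size": 124,
                "learning_rate": 0.0307, "dropout_rate": 0.2029}

    @staticmethod
    def report_intermediate_result(metric):
        pass  # in a real run, sent to the assessor

    @staticmethod
    def report_final_result(metric):
        pass  # in a real run, sent to the tuner

def train_and_evaluate(params, epoch):
    # hypothetical placeholder: pretend accuracy improves each epoch
    return min(0.9, 0.5 + 0.1 * epoch)

params = nni.get_parameters()                 # 1.2 get tuned hyper-parameters
for epoch in range(5):
    accuracy = train_and_evaluate(params, epoch)
    nni.report_intermediate_result(accuracy)  # 1.3 per-epoch metric
nni.report_final_result(accuracy)             # 1.3 final metric
```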
>Step 2 - Define SearchSpace
The hyper-parameters used in `Step 1.2 - Get predefined parameters` are defined in a `search_space.json` file like below:
```
{
"dropout_rate":{"_type":"uniform","_value":[0.1,0.5]},
"conv_size":{"_type":"choice","_value":[2,3,5,7]},
"hidden_size":{"_type":"choice","_value":[124, 512, 1024]},
"learning_rate":{"_type":"uniform","_value":[0.0001, 0.1]}
}
```
Refer to [SearchSpaceSpec.md](SearchSpaceSpec.md) to learn more about search space.
>Step 3 - Define Experiment
>>3.1 enable NNI API mode
To enable NNI API mode, you need to set useAnnotation to *false* and provide the path of the SearchSpace file (the one you defined in Step 2):
```
useAnnotation: false
searchSpacePath: /path/to/your/search_space.json
```
To run an experiment in NNI, you only need to:
* Provide a runnable trial
* Provide or choose a tuner
* Provide a yaml experiment configure file
* (optional) Provide or choose an assessor
**Prepare trial**:
>A set of examples can be found in ~/nni/examples after your installation; run `ls ~/nni/examples/trials` to see all the trial examples.
Let's use a simple trial example, e.g. mnist, provided by NNI. You can simply execute the following command to run it:
    python3 ~/nni/examples/trials/mnist-annotation/mnist.py
This command will be filled into the yaml configuration file below. Please refer to [here](howto_1_WriteTrial) for how to write your own trial.
**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm etc. Users can write their own tuner (refer to [here](CustomizedTuner.md)), but for simplicity, here we choose a tuner provided by NNI as below:
    tuner:
      builtinTunerName: TPE
      classArgs:
        optimize_mode: maximize
*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments passed to the tuner (the spec of built-in tuners can be found [here]()), and *optimize_mode* indicates whether you want to maximize or minimize your trial's result.
**Prepare configuration file**: Since you already know which trial code you are going to run and which tuner you are going to use, it is time to prepare the yaml configuration file. NNI provides a demo configuration file for each trial example; `cat ~/nni/examples/trials/mnist-annotation/config.yml` to see it. Its content is basically shown below:
```
authorName: your_name
experimentName: auto_mnist
# how many trials could be concurrently running
trialConcurrency: 2
# maximum experiment running duration
maxExecDuration: 3h
# empty means never stop
maxTrialNum: 100
# choice: local, remote
trainingServicePlatform: local
# choice: true, false
useAnnotation: true
tuner:
builtinTunerName: TPE
classArgs:
optimize_mode: maximize
trial:
command: python mnist.py
codeDir: ~/nni/examples/trials/mnist-annotation
gpuNum: 0
```
Here *useAnnotation* is true because this trial example uses our python annotation (refer to [here](../tools/annotation/README.md) for details). For the trial, we should provide the *command* that runs the trial and the *codeDir* where the trial code is located; the command will be executed in this directory. We should also specify how many GPUs a trial requires.
With all these steps done, we can run the experiment with the following command:
nnictl create --config ~/nni/examples/trials/mnist-annotation/config.yml
You can refer to [here](NNICTLDOC.md) for more usage guide of *nnictl* command line tool.
## View experiment results
The experiment has been running now, NNI provides WebUI for you to view experiment progress, to control your experiment, and some other appealing features. The WebUI is opened by default by `nnictl create`.
**Tutorial: Run an experiment on multiple machines**
===
NNI supports running an experiment on multiple machines, called remote machine mode. Let's say you have multiple machines with the account `bob` (Note: the account is not necessarily the same on multiple machines):
| IP | Username| Password |
| -------- |---------|-------|
| 10.1.1.1 | bob | bob123 |
| 10.1.1.2 | bob | bob123 |
| 10.1.1.3 | bob | bob123 |
## Setup environment
Install NNI on each of your machines following the install guide [here](GetStarted.md).
For remote machines that are used only to run trials but not the nnictl, you can just install python SDK:
* __Install python SDK through pip__
python3 -m pip install --user git+https://github.com/Microsoft/NeuralNetworkIntelligence.git#subdirectory=src/sdk/pynni
* __Install python SDK through source code__
git clone https://github.com/Microsoft/NeuralNetworkIntelligence
cd src/sdk/pynni
python3 setup.py install
## Run an experiment
Still using `examples/trials/mnist-annotation` as an example here. The yaml file you need is shown below:
```
authorName: your_name
experimentName: auto_mnist
# how many trials could be concurrently running
trialConcurrency: 2
# maximum experiment running duration
maxExecDuration: 3h
# empty means never stop
maxTrialNum: 100
# choice: local, remote, pai
trainingServicePlatform: remote
# choice: true, false
useAnnotation: true
tuner:
builtinTunerName: TPE
classArgs:
optimize_mode: maximize
trial:
command: python mnist.py
codeDir: /usr/share/nni/examples/trials/mnist-annotation
gpuNum: 0
#machineList can be empty if the platform is local
machineList:
- ip: 10.1.1.1
username: bob
passwd: bob123
- ip: 10.1.1.2
username: bob
passwd: bob123
- ip: 10.1.1.3
username: bob
passwd: bob123
```
Simply fill in the `machineList` section. Save this yaml file as `exp_remote.yaml`, then run:
```
nnictl create --config exp_remote.yaml
```
to start the experiment. This command can be executed on one of the three machines above, or on any other machine that has NNI installed and network access to those three machines.