Unverified commit 45c6508e authored by Chi Song, committed by GitHub

fix format of doc, change nni to NNI, yaml to yml. (#660)

fix indents of doc
change nni to NNI
yaml to yml (in file names) and YAML (in doc text)
parent bc9eab33
......@@ -84,7 +84,7 @@ nnictl create --config ~/nni/examples/trials/ga_squad/config.yml
Due to the size limitation of the upload, we only upload the source code and complete the data download and training on OpenPAI. This experiment requires sufficient memory (`memoryMB >= 32G`), and the training may last for several hours.
### 3.1 Update configuration
Modify `nni/examples/trials/ga_squad/config_pai.yaml`, here is the default configuration:
Modify `nni/examples/trials/ga_squad/config_pai.yml`; here is the default configuration:
```
authorName: default
......
How to start an experiment
===
## 1. Introduction
There are few steps to start an new experiment of nni, here are the process.
There are a few steps to start a new experiment of NNI; here is the process.
<img src="./img/experiment_process.jpg" width="50%" height="50%" />
......@@ -9,17 +9,17 @@ There are few steps to start an new experiment of nni, here are the process.
### 2.1 Check environment
1. Check if there is an old experiment running
2. Check if the port of the restful server is free.
3. Validate the content of config yaml file.
3. Validate the content of config YAML file.
4. Prepare a config file to record the information of this experiment.
### 2.2 Start restful server
Start an restful server process to manage nni experiment, the default port is 8080.
Start a restful server process to manage the NNI experiment; the default port is 8080.
### 2.3 Check restful server
Check whether the restful server process has started successfully and responds when a message is sent to it.
### 2.4 Set experiment config
Call restful server to set experiment config before starting an experiment, experiment config includes the config values in config yaml file.
Call the restful server to set the experiment config before starting an experiment; the experiment config includes the values from the config YAML file.
### 2.5 Check experiment config
Check the response content of the restful server; if the status code of the response is 200, the config is successfully set.
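For illustration, a minimal sketch of such a check from Python (the status endpoint path here is an assumption and may differ between NNI versions):
```python
import requests

# Hypothetical status endpoint; the exact path is an assumption and may
# differ between NNI versions. The default port is 8080.
STATUS_URL = 'http://localhost:8080/api/v1/nni/check-status'

try:
    response = requests.get(STATUS_URL, timeout=5)
    # A 200 status code means the restful server is up and the request succeeded.
    print('restful server reachable:', response.status_code == 200)
except requests.exceptions.ConnectionError:
    print('restful server is not reachable')
```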
......
......@@ -120,7 +120,7 @@ For more information about annotation syntax and its usage, please refer to [Ann
### Step 2 - Enable NNI Annotation
In the yaml configure file, you need to set *useAnnotation* to true to enable NNI annotation:
In the YAML config file, you need to set *useAnnotation* to true to enable NNI annotation:
```
useAnnotation: true
```
......
......@@ -120,11 +120,10 @@ with tf.Session() as sess:
>>
>>Please refer to [Annotation README](../tools/nni_annotation/README.md) for more information about annotation syntax and its usage.
>Step 2 - Enable NNI Annotation
In the yaml configure file, you need to set *useAnnotation* to true to enable NNI annotation:
In the YAML config file, you need to set *useAnnotation* to true to enable NNI annotation:
```yaml
```yml
useAnnotation: true
```
......
......@@ -6,7 +6,7 @@ So, if user want to implement a customized Tuner, she/he only need to:
1. Inherit a tuner from the base Tuner class
1. Implement the receive_trial_result and generate_parameter functions
1. Configure your customized tuner in experiment yaml config file
1. Configure your customized tuner in experiment YAML config file
Here is an example:
......@@ -83,11 +83,11 @@ _fd = open(os.path.join(_pwd, 'data.txt'), 'r')
This is because your tuner is not executed in the directory of your tuner (i.e., `pwd` is not the directory of your own tuner).
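A minimal sketch of the usual workaround is to build paths relative to the tuner's own file rather than the working directory (the `data.txt` name follows the snippet above):
```python
import os

# Resolve the directory that contains this tuner file, independent of
# the current working directory the tuner process was started from.
_pwd = os.path.dirname(os.path.realpath(__file__))

# Open data files relative to the tuner's own directory.
_fd = open(os.path.join(_pwd, 'data.txt'), 'r')
```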
**3) Configure your customized tuner in experiment yaml config file**
**3) Configure your customized tuner in experiment YAML config file**
NNI needs to locate your customized tuner class and instantiate the class, so you need to specify the location of the customized tuner class and pass literal values as parameters to the \_\_init__ constructor.
```yaml
```yml
tuner:
codeDir: /home/abc/mytuner
classFileName: my_customized_tuner.py
......
# **How To** - Customize Your Own Advisor
*Advisor targets the scenario where an AutoML algorithm needs the capabilities of both a tuner and an assessor. It is similar to a tuner in that it receives trial parameter requests and final results and generates trial parameters. It is also similar to an assessor in that it receives intermediate results and trials' end states and can send trial kill commands. Note that if you use an Advisor, a tuner and an assessor are not allowed to be used at the same time.*
So, if a user wants to implement a customized Advisor, she/he only needs to:
1. Define an Advisor inheriting from the MsgDispatcherBase class
1. Implement the methods with prefix `handle_` except `handle_request`
1. Configure your customized Advisor in experiment YAML config file
Here is an example:
**1) Define an Advisor inheriting from the MsgDispatcherBase class**
```python
from nni.msg_dispatcher_base import MsgDispatcherBase
class CustomizedAdvisor(MsgDispatcherBase):
def __init__(self, ...):
...
```
**2) Implement the methods with prefix `handle_` except `handle_request`**
Please refer to the implementation of Hyperband ([src/sdk/pynni/nni/hyperband_advisor/hyperband_advisor.py](../src/sdk/pynni/nni/hyperband_advisor/hyperband_advisor.py)) for how to implement the methods.
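A rough skeleton of step 2 might look like the following sketch (the particular `handle_` methods shown are illustrative; check MsgDispatcherBase in your NNI version for the exact set to override):
```python
from nni.msg_dispatcher_base import MsgDispatcherBase

class CustomizedAdvisor(MsgDispatcherBase):
    # Method names below are illustrative; MsgDispatcherBase defines the
    # exact set of handle_* methods your Advisor should override.
    def handle_initialize(self, data):
        # data: the search space; typically send the first trial parameters here
        ...

    def handle_request_trial_jobs(self, data):
        # data: number of trial jobs requested by the training service
        ...

    def handle_report_metric_data(self, data):
        # data: intermediate or final metrics reported by a trial
        ...

    def handle_trial_end(self, data):
        # data: information about a trial that has ended
        ...

    def handle_update_search_space(self, data):
        ...
```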
**3) Configure your customized Advisor in experiment YAML config file**
This is similar to tuner and assessor: NNI needs to locate your customized Advisor class and instantiate it, so you need to specify the location of the customized Advisor class and pass literal values as parameters to the \_\_init__ constructor.
```yml
advisor:
codeDir: /home/abc/myadvisor
classFileName: my_customized_advisor.py
className: CustomizedAdvisor
# Any parameter you need to pass to your advisor class's __init__ constructor
# can be specified in this optional classArgs field, for example
classArgs:
arg1: value1
```
......@@ -14,7 +14,7 @@ CNN MNIST classifier for deep learning is similar to `hello world` for programmi
<a name="mnist"></a>
**MNIST with NNI API**
This is a simple network which has two convolutional layers, two pooling layers and a fully connected layer. We tune hyperparameters, such as dropout rate, convolution size, hidden size, etc. It can be tuned with most NNI built-in tuners, such as TPE, SMAC, Random. We also provide an exmaple yaml file which enables assessor.
This is a simple network which has two convolutional layers, two pooling layers and a fully connected layer. We tune hyperparameters such as dropout rate, convolution size, hidden size, etc. It can be tuned with most NNI built-in tuners, such as TPE, SMAC, and Random. We also provide an example YAML file which enables the assessor.
`code directory: examples/trials/mnist/`
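As a quick reminder of the trial-side API this example relies on, here is a minimal, simplified sketch (the epoch loop and the placeholder accuracy are illustrative, not the actual example code):
```python
import nni

if __name__ == '__main__':
    # Ask the tuner for the next set of hyperparameters; with this example's
    # search space it is a dict such as {'dropout_rate': 0.5, ...}.
    params = nni.get_next_parameter()

    accuracy = 0.0
    for epoch in range(10):
        # ... train one epoch with `params`, then evaluate ...
        accuracy = 0.0  # placeholder for the real evaluation result
        # Intermediate results let an assessor stop unpromising trials early.
        nni.report_intermediate_result(accuracy)

    # The final result is what the tuner optimizes.
    nni.report_final_result(accuracy)
```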
......
## Create multi-phase experiment
Typically each trial job gets a single set of configuration (e.g. hyperparameters) from the tuner and does some kind of experiment, say, trains a model with those hyperparameters and reports the result to the tuner. Sometimes you may want to train multiple models within one trial job, to share information between models or to save system resources by creating fewer trial jobs, for example:
1. Train multiple models sequentially in one trial job, so that later models can leverage the weights or other information of prior models and may use different hyperparameters.
2. Train a large number of models on limited system resources; combining multiple models in one trial job saves the system resources needed to create a large number of trial jobs.
3. Any other scenario in which you would like to train multiple models with different hyperparameters in one trial job. Be aware that if you allocate multiple GPUs to a trial job and train multiple models concurrently within one trial job, your trial code needs to allocate GPU resources properly.
......@@ -13,25 +14,24 @@ To use multi-phase experiment, please follow below steps:
1. Implement nni.multi_phase.MultiPhaseTuner. For example, this [ENAS tuner](https://github.com/countif/enas_nni/blob/master/nni/examples/tuners/enas/nni_controller_ptb.py) is a multi-phase Tuner which implements nni.multi_phase.MultiPhaseTuner. While implementing your MultiPhaseTuner, you may want to use the trial_job_id parameter of the generate_parameters method to generate hyperparameters for each trial job.
2. Set ```multiPhase``` field to ```true```, and configure your tuner implemented in step 1 as customized tuner in configuration file, for example:
1. Set `multiPhase` field to `true`, and configure your tuner implemented in step 1 as customized tuner in configuration file, for example:
```yml
...
multiPhase: true
tuner:
```yml
...
multiPhase: true
tuner:
codeDir: tuners/enas
classFileName: nni_controller_ptb.py
className: ENASTuner
classArgs:
say_hello: "hello"
...
```
...
```
3. Invoke nni.get_next_parameter() API for multiple times as needed in a trial, for example:
1. Invoke the nni.get_next_parameter() API multiple times as needed in a trial, for example:
```python
for i in range(5):
```python
for i in range(5):
# get parameter from tuner
tuner_param = nni.get_next_parameter()
......@@ -40,4 +40,4 @@ for i in range(5):
# report final result somewhere for the parameter retrieved above
nni.report_final_result()
# ...
```
```
......@@ -73,7 +73,7 @@ To run an experiment in NNI, you only needed:
* Provide a runnable trial
* Provide or choose a tuner
* Provide a yaml experiment configure file
* Provide a YAML experiment config file
* (optional) Provide or choose an assessor
**Prepare trial**:
......@@ -83,7 +83,7 @@ Let's use a simple trial example, e.g. mnist, provided by NNI. After you install
python ~/nni/examples/trials/mnist-annotation/mnist.py
This command will be filled in the yaml configure file below. Please refer to [here](./howto_1_WriteTrial.md) for how to write your own trial.
This command will be filled in the YAML config file below. Please refer to [here](./howto_1_WriteTrial.md) for how to write your own trial.
**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm etc. Users can write their own tuner (refer to [here](./howto_2_CustomizedTuner.md)), but for simplicity, here we choose a tuner provided by NNI as below:
......@@ -94,7 +94,7 @@ This command will be filled in the yaml configure file below. Please refer to [h
*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments passed to the tuner (the spec of builtin tuners can be found [here]()), and *optimization_mode* indicates whether you want to maximize or minimize your trial's result.
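For illustration, such a tuner section might look roughly like the following sketch (the tuner name and argument value are illustrative; check the exact classArgs key for the optimization mode against your NNI version):
```yml
tuner:
  # choose one of the builtin tuners, e.g. TPE, Random, Anneal, Evolution
  builtinTunerName: TPE
  classArgs:
    # whether the tuner should maximize or minimize the reported result
    optimize_mode: maximize
```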
**Prepare configure file**: Since you have already known which trial code you are going to run and which tuner you are going to use, it is time to prepare the yaml configure file. NNI provides a demo configure file for each trial example, `cat ~/nni/examples/trials/mnist-annotation/config.yml` to see it. Its content is basically shown below:
**Prepare config file**: Since you already know which trial code you are going to run and which tuner you are going to use, it is time to prepare the YAML config file. NNI provides a demo config file for each trial example; run `cat ~/nni/examples/trials/mnist-annotation/config.yml` to see it. Its content is basically as shown below:
```
authorName: your_name
......
......@@ -3,7 +3,7 @@
NNI provides an easy-to-adopt approach to set up parameter tuning algorithms as well as early stopping policies; we call them **Tuners** and **Assessors**.
**Tuner** specifies the algorithm you use to generate hyperparameter sets for each trial. In NNI, we support two approaches to set the tuner.
1. Directly use tuner provided by nni sdk
1. Directly use a tuner provided by the NNI SDK
required fields: builtinTunerName and classArgs.
......@@ -18,7 +18,7 @@ NNI provides an easy to adopt approach to set up parameter tuning algorithms as
**Assessor** specifies the algorithm you use to apply an early stopping policy. In NNI, there are two approaches to set the assessor.
1. Directly use assessor provided by nni sdk
1. Directly use an assessor provided by the NNI SDK
required fields: builtinAssessorName and classArgs.
......
......@@ -88,7 +88,7 @@ nnictl create --config ~/nni/examples/trials/ga_squad/config.yml
Due to the size limitation of the upload, we only upload the source code and complete the data download and training on OpenPAI. This experiment requires sufficient memory (`memoryMB >= 32G`), and the training may last for several hours.
### Update configuration
Modify `nni/examples/trials/ga_squad/config_pai.yaml`, here is the default configuration:
Modify `nni/examples/trials/ga_squad/config_pai.yml`; here is the default configuration:
```
authorName: default
......@@ -114,11 +114,11 @@ trial:
gpuNum: 0
cpuNum: 1
memoryMB: 32869
#The docker image to run nni job on pai
#The docker image to run NNI job on pai
image: msranni/nni:latest
#The hdfs directory to store data on pai, format 'hdfs://host:port/directory'
dataDir: hdfs://10.10.10.10:9000/username/nni
#The hdfs directory to store output data generated by nni, format 'hdfs://host:port/directory'
#The hdfs directory to store output data generated by NNI, format 'hdfs://host:port/directory'
outputDir: hdfs://10.10.10.10:9000/username/nni
paiConfig:
#The username to login pai
......
......@@ -18,9 +18,9 @@ pip install -r requirements.txt
### 3. Update configuration
Modify `examples/trials/network_morphism/cifar10/config.yaml` to fit your own task, note that searchSpacePath is not required in our configuration. Here is the default configuration:
Modify `examples/trials/network_morphism/cifar10/config.yml` to fit your own task; note that searchSpacePath is not required in our configuration. Here is the default configuration:
```yaml
```yml
authorName: default
experimentName: example_cifar10-network-morphism
trialConcurrency: 1
......@@ -79,16 +79,16 @@ net = build_graph_from_json(RCV_CONFIG)
# training procedure
# ....
# report the final accuracy to nni
# report the final accuracy to NNI
nni.report_final_result(best_acc)
```
### 5. Submit this job
```bash
# You can use nni command tool "nnictl" to create the a job which submit to the nni
# finally you successfully commit a Network Morphism Job to nni
nnictl create --config config.yaml
# You can use the NNI command line tool "nnictl" to create a job and submit it to NNI
# finally you successfully submit a Network Morphism job to NNI
nnictl create --config config.yml
```
## Trial Examples
......@@ -99,10 +99,10 @@ The trial has some examples which can guide you which located in `examples/trial
`Fashion-MNIST` is a dataset of [Zalando](https://jobs.zalando.com/tech/)'s article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. It is a modern image classification dataset widely used to replace MNIST as a baseline dataset, because MNIST is too easy and overused.
There are two examples, [FashionMNIST-keras.py](./FashionMNIST/FashionMNIST_keras.py) and [FashionMNIST-pytorch.py](./FashionMNIST/FashionMNIST_pytorch.py). Attention, you should change the `input_width` to 28 and `input_channel` to 1 in `config.yaml ` for this dataset.
There are two examples, [FashionMNIST-keras.py](./FashionMNIST/FashionMNIST_keras.py) and [FashionMNIST-pytorch.py](./FashionMNIST/FashionMNIST_pytorch.py). Note that you should change `input_width` to 28 and `input_channel` to 1 in `config.yml` for this dataset.
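For illustration, the relevant part of the tuner spec in `config.yml` might look like the sketch below (in the network morphism examples these values are typically passed as the tuner's classArgs; the exact layout may differ in your NNI version):
```yml
tuner:
  builtinTunerName: NetworkMorphism
  classArgs:
    # Fashion-MNIST images are 28x28 grayscale, hence width 28 and 1 channel
    input_width: 28
    input_channel: 1
```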
### Cifar10
The `CIFAR-10` dataset [Canadian Institute For Advanced Research](https://www.cifar.ca/) is a collection of images that are commonly used to train machine learning and computer vision algorithms. It is one of the most widely used datasets for machine learning research. The CIFAR-10 dataset contains 60,000 32x32 color images in 10 different classes.
There are two examples, [cifar10-keras.py](./cifar10/cifar10_keras.py) and [cifar10-pytorch.py](./cifar10/cifar10_pytorch.py). The value `input_width` is 32 and the value `input_channel` is 3 in `config.yaml ` for this dataset.
There are two examples, [cifar10-keras.py](./cifar10/cifar10_keras.py) and [cifar10-pytorch.py](./cifar10/cifar10_pytorch.py). The value `input_width` is 32 and the value `input_channel` is 3 in `config.yml` for this dataset.
**Run ENAS in NNI**
===
Now we have an enas example [enas-nni](https://github.com/countif/enas_nni) run in nni from our contributors.
Now we have an ENAS example, [enas-nni](https://github.com/countif/enas_nni), from our contributors that runs in NNI.
Thanks to our lovely contributors.
And we welcome more and more people to join us!
......@@ -37,7 +37,7 @@ The figure below is the result of our algorithm on MNIST trial history data, whe
</p>
## 2. Usage
To use Curve Fitting Assessor, you should add the following spec in your experiment's yaml config file:
To use Curve Fitting Assessor, you should add the following spec in your experiment's YAML config file:
```
assessor:
......
Hyperband on nni
Hyperband on NNI
===
## 1. Introduction
......@@ -10,7 +10,7 @@ Frist, this is an example of how to write an automl algorithm based on MsgDispat
Second, this implementation fully leverages Hyperband's internal parallelism. More specifically, the next bucket is not started strictly after the current bucket; instead, it starts when there are available resources.
## 3. Usage
To use Hyperband, you should add the following spec in your experiment's yaml config file:
To use Hyperband, you should add the following spec in your experiment's YAML config file:
```
advisor:
......
......@@ -10,7 +10,7 @@ If you want to know about network morphism trial usage, please check [Readme.md]
To use Network Morphism, you should modify the following spec in your `config.yml` file:
```yaml
```yml
tuner:
#choice: NetworkMorphism
builtinTunerName: NetworkMorphism
......@@ -50,7 +50,7 @@ net = build_graph_from_json(RCV_CONFIG)
# training procedure
# ....
# report the final accuracy to nni
# report the final accuracy to NNI
nni.report_final_result(best_acc)
```
......
# Integration doc: SMAC on nni
\ No newline at end of file
# Integration doc: SMAC on NNI
\ No newline at end of file