Unverified commit a8158456, authored by QuanluZhang, committed by GitHub

update doc (#52)

* update doc

* update doc

* update doc

* update hyperopt installation

* update doc

* update doc

* update description in setup.py

* update setup.py

* modify encoding

* encoding

* add encoding

* remove pymc3

* update doc
parent 43e64c35
[![Build Status](https://travis-ci.org/Microsoft/nni.svg?branch=master)](https://travis-ci.org/Microsoft/nni)
NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning experiments.
The tool dispatches and runs trial jobs generated by tuning algorithms to search for the best neural architecture and/or hyper-parameters in different environments (e.g. local machine, remote servers, and cloud).
[architecture diagram: AutoML experiment / Training Services]
# Getting Started with NNI
## **Installation**
Install through python pip. (The current version only supports Linux; NNI on Ubuntu 16.04 or newer has been well tested.)
* requirements: python >= 3.5, git, wget
```
pip3 install -v --user git+https://github.com/Microsoft/nni.git@v0.1
source ~/.bashrc
```
## **Quick start: run an experiment locally**
Requirements:
* NNI installed on your local machine
* TensorFlow installed
Run the following command to create an experiment for [mnist]
```bash
...
```
# Customized Tuner for Experts
*A Tuner receives results from Trials as a metric that evaluates the performance of a specific parameter/architecture configuration, and sends the next hyper-parameter or architecture configuration to a Trial.*
So, if you want to implement a customized Tuner, you only need to:
1) Inherit from the base Tuner class
2) Implement the receive_trial_result and generate_parameters functions
3) Configure your customized tuner in the experiment yaml config file
Here is an example:
**1) Inherit from the base Tuner class**
```python
from nni.tuner import Tuner

class CustomizedTuner(Tuner):
    def __init__(self, *args, **kwargs):  # accept your own constructor arguments here
        ...
```
**2) Implement the receive_trial_result and generate_parameters functions**
```python
from nni.tuner import Tuner

class CustomizedTuner(Tuner):
    def __init__(self, *args, **kwargs):
        ...

    def receive_trial_result(self, parameter_id, parameters, reward):
        '''
        Record an observation of the objective function and train.
        parameter_id: int
        parameters: object created by 'generate_parameters()'
        reward: object reported by the trial
        '''
        # your code implements here.
        ...

    def generate_parameters(self, parameter_id):
        '''
        Returns a set of trial (hyper-)parameters, as a serializable object.
        parameter_id: int
        '''
        # your code implements here.
        return your_parameters
```
```receive_trial_result``` receives ```parameter_id, parameters, reward``` as input. The ```reward``` object the Tuner receives is exactly the same reward that the Trial sends.
The ```your_parameters``` object returned from the ```generate_parameters``` function will be packaged as a JSON object by the NNI SDK. The SDK then unpacks that JSON object, so the Trial receives exactly the same ```your_parameters``` from the Tuner.
For example:
If you implement ```generate_parameters``` like this:
```python
def generate_parameters(self, parameter_id):
    '''
    Returns a set of trial (hyper-)parameters, as a serializable object.
    parameter_id: int
    '''
    # your code implements here.
    return {"dropout": 0.3, "learning_rate": 0.4}
```
This means your Tuner will always generate the parameters ```{"dropout": 0.3, "learning_rate": 0.4}```. The Trial will receive this object by using the ```nni.get_parameters()``` API from the NNI SDK. After finishing its training, the Trial sends the result to the Tuner by calling ```nni.report_final_result(0.93)```. Then ```receive_trial_result``` will be called with parameters like:
```
parameter_id = 82347
parameters = {"dropout": 0.3, "learning_rate": 0.4}
reward = 0.93
```
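Putting these pieces together, below is a minimal sketch of a complete customized tuner: a toy random search over the two hyper-parameters from the example above. The class name, sampling ranges, and seed argument are illustrative assumptions, not part of NNI:
```python
import random

from nni.tuner import Tuner

class MyRandomTuner(Tuner):
    '''A toy tuner that samples dropout and learning_rate uniformly at random.'''

    def __init__(self, seed=0):
        self._rng = random.Random(seed)
        self._results = {}  # parameter_id -> (parameters, reward)

    def update_search_space(self, search_space):
        # A real tuner would read the sampling ranges from the search space here.
        self._search_space = search_space

    def generate_parameters(self, parameter_id):
        # Return a serializable object; the NNI SDK packages it as JSON for the Trial.
        return {
            'dropout': self._rng.uniform(0.1, 0.9),
            'learning_rate': self._rng.uniform(0.0001, 0.1),
        }

    def receive_trial_result(self, parameter_id, parameters, reward):
        # reward is exactly the value the Trial passed to nni.report_final_result().
        self._results[parameter_id] = (parameters, reward)
```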
**Note that** if you want to access a file (e.g., ```data.txt```) in the directory of your own tuner, you cannot use ```open('data.txt', 'r')```. Instead, you should use the following:
```
import os

_pwd = os.path.dirname(__file__)
_fd = open(os.path.join(_pwd, 'data.txt'), 'r')
```
This is because your tuner is not executed from its own directory (i.e., ```pwd``` is not the directory of your own tuner).
**3) Configure your customized tuner in experiment yaml config file**
NNI needs to locate your customized tuner class and instantiate the class, so you need to specify the location of the customized tuner class and pass literal values as parameters to the \_\_init__ constructor.
```yaml
tuner:
  codeDir: /home/abc/mytuner
  classFileName: my_customized_tuner.py
  className: CustomizedTuner
  # Any parameter that needs to be passed to your tuner class __init__ constructor
  # can be specified in this optional classArgs field, for example
  classArgs:
    arg1: value1
```
For more detailed examples, see:
> * [evolution-tuner](../src/sdk/pynni/nni/evolution_tuner)
> * [hyperopt-tuner](../src/sdk/pynni/nni/hyperopt_tuner)
> * [evolution-based-customized-tuner](../examples/tuners/ga_customer_tuner)
```yaml
...
trial:
  ...
  gpuNum: 0
```
You need to fill in `codeDir`, `classFileName`, and `className`, and pass parameters to the \_\_init__ constructor through the `classArgs` field if the \_\_init__ constructor of your assessor class has required parameters.
**Note that** if you want to access a file (e.g., ```data.txt```) in the directory of your own assessor, you cannot use ```open('data.txt', 'r')```. Instead, you should use the following:
```
import os

_pwd = os.path.dirname(__file__)
_fd = open(os.path.join(_pwd, 'data.txt'), 'r')
```
This is because your assessor is not executed from its own directory (i.e., ```pwd``` is not the directory of your own assessor).
* __Install NNI through pip__
```bash
pip3 install -v --user git+https://github.com/Microsoft/nni.git@v0.1
source ~/.bashrc
```
* __Install NNI through source code__
```bash
git clone -b v0.1 https://github.com/Microsoft/nni.git
cd nni
chmod +x install.sh
source install.sh
```
An experiment runs multiple trial jobs; each trial job tries one configuration.
This command will be filled into the yaml config file below. Please refer to [here]() for how to write your own trial.
**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), the Evolution algorithm, etc. Users can write their own tuner (refer to [here](CustomizedTuner.md)), but for simplicity we choose a built-in tuner provided by NNI:
```yaml
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize
```
*builtinTunerName* is used to specify a built-in tuner in NNI, *classArgs* are the arguments passed to the tuner (the spec of built-in tuners can be found [here]()), and *optimize_mode* indicates whether you want to maximize or minimize your trial's result.
**Prepare config file**: Since you already know which trial code you are going to run and which tuner you are going to use, it is time to prepare the yaml config file. NNI provides a demo config file for each trial example; run `cat ~/nni/examples/trials/mnist-annotation/config.yml` to see it. Its content is basically as shown below:
```yaml
...
trial:
  ...
  gpuNum: 0
```
Here *useAnnotation* is true because this trial example uses our python annotation (refer to [here](../tools/annotation/README.md) for details). For the trial, we should provide *trialCommand*, the command that runs the trial, and *trialCodeDir*, where the trial code is located. The command will be executed in this directory. We should also specify how many GPUs a trial requires.
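For instance, the relevant part of such a config file might look like the following sketch (the command and path are illustrative; check the demo config file above for the real values):
```yaml
useAnnotation: true
trial:
  trialCommand: python3 mnist.py
  trialCodeDir: ~/nni/examples/trials/mnist-annotation
  gpuNum: 0
```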
With all these steps done, we can run the experiment with the following command:
......
nnictl
===
## Introduction
__nnictl__ is a command line tool that can be used to control experiments, such as starting/stopping/resuming an experiment, starting/stopping NNIBoard, etc.
## Commands
nnictl supports the following commands:
```
nnictl create
nnictl stop
nnictl update
nnictl resume
nnictl trial
nnictl webui
nnictl experiment
nnictl config
nnictl log
```
### Manage an experiment
* __nnictl create__
* Description
You can use this command to create a new experiment, using the configuration specified in the config file.
After this command completes successfully, the context will be set to this experiment,
which means the following commands you issue are associated with this experiment,
unless you explicitly change the context (not supported yet).
* Usage
nnictl create [OPTIONS]
Options:
| Name, shorthand | Required | Default | Description |
| ------ | ------ | ------ | ------ |
| --config, -c | True | | yaml config file of the experiment |
| --webuiport, -w | False | 8080 | assign a port for the web UI |
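For example, to create an experiment from the demo config mentioned earlier (the path is illustrative) and serve the web UI on the default port:
```bash
nnictl create --config ~/nni/examples/trials/mnist-annotation/config.yml --webuiport 8080
```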
* __nnictl resume__
* Description
You can use this command to resume a stopped experiment.
* Usage
nnictl resume [OPTIONS]
Options:
| Name, shorthand | Required|Default | Description |
| ------ | ------ | ------ |------ |
| --experiment, -e| False| |ID of the experiment you want to resume|
* __nnictl stop__
* Description
You can use this command to stop a running experiment.
* Usage
nnictl stop
* __nnictl update__
* __nnictl update searchspace__
* Description
You can use this command to update an experiment's search space.
* Usage
nnictl update searchspace [OPTIONS]
Options:
| Name, shorthand | Required|Default | Description |
| ------ | ------ | ------ |------ |
| --filename, -f| True| |the file storing your new search space|
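For example, assuming your new search space is stored in a file named `search_space.json` (a hypothetical name):
```bash
nnictl update searchspace --filename search_space.json
```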
* __nnictl update concurrency__
* Description
You can use this command to update an experiment's concurrency.
* Usage
nnictl update concurrency [OPTIONS]
Options:
| Name, shorthand | Required|Default | Description |
| ------ | ------ | ------ |------ |
| --value, -v| True| |the number of allowed concurrent trials|
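For example, to allow 4 trials to run at the same time (an illustrative value):
```bash
nnictl update concurrency --value 4
```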
* __nnictl update duration__
* Description
You can use this command to update an experiment's maximum duration.
* Usage
nnictl update duration [OPTIONS]
Options:
| Name, shorthand | Required|Default | Description |
| ------ | ------ | ------ |------ |
| --value, -v | True | | the new experiment duration, written as NUMBER[SUFFIX]; SUFFIX may be 's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days |
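For example, to set the experiment duration to one hour (an illustrative value):
```bash
nnictl update duration --value 1h
```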
* __nnictl trial__
* __nnictl trial ls__
* Description
You can use this command to show the trial jobs' information.
* Usage
nnictl trial ls
* __nnictl trial kill__
* Description
You can use this command to kill a trial job.
* Usage
nnictl trial kill [OPTIONS]
Options:
| Name, shorthand | Required|Default | Description |
| ------ | ------ | ------ |------ |
| --trialid, -t| True| |ID of the trial you want to kill.|
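For example (where `<trial-id>` is a placeholder for an ID shown by `nnictl trial ls`):
```bash
nnictl trial kill --trialid <trial-id>
```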
### Manage WebUI
* __nnictl webui start__
* Description
Start the web UI for NNI. You will get a list of URLs; open any of them to see the NNI web page.
* Usage
nnictl webui start [OPTIONS]
Options:
| Name, shorthand | Required|Default | Description |
| ------ | ------ | ------ |------ |
| --port, -p| False| 8080|assign a port for webui|
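For example, to serve the web UI on a non-default port (8081 is an arbitrary choice):
```bash
nnictl webui start --port 8081
```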
* __nnictl webui stop__
* Description
Stop the web UI and release the occupied URL. If you want to start it again, use the 'nnictl webui start' command.
* Usage
nnictl webui stop
* __nnictl webui url__
* Description
Show the URLs of the web UI.
* Usage
nnictl webui url
### Manage experiment information
* __nnictl experiment show__
* Description
Show the experiment's information.
* Usage
nnictl experiment show
* __nnictl config show__
* Description
Display the current context information.
* Usage
nnictl config show
### Manage log
* __nnictl log stdout__
* Description
Show the stdout log content.
* Usage
nnictl log stdout [options]
Options:
| Name, shorthand | Required|Default | Description |
| ------ | ------ | ------ |------ |
| --head, -h| False| |show head lines of stdout|
| --tail, -t| False| |show tail lines of stdout|
| --path, -p| False| |show the path of stdout file|
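For example, assuming `--tail` takes a line count:
```bash
nnictl log stdout --tail 20
```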
* __nnictl log stderr__
* Description
Show the stderr log content.
* Usage
nnictl log stderr [options]
Options:
| Name, shorthand | Required|Default | Description |
| ------ | ------ | ------ |------ |
| --head, -h| False| |show head lines of stderr|
| --tail, -t| False| |show tail lines of stderr|
| --path, -p| False| |show the path of stderr file|
The candidate types and values for a variable are:
<br/>
* {"_type":"randint","_value":[upper]}
* Which means the variable value is a random integer in the range [0, upper). The semantics of this distribution is that there is no more correlation in the loss function between nearby integer values than between more distant integer values; it is an appropriate distribution for describing random seeds, for example. If the loss function is probably more correlated for nearby integer values, you should probably use one of the "quantized" continuous distributions, such as quniform, qloguniform, qnormal or qlognormal. Note that if you want to change the lower bound, you can use `quniform` for now.
<br/>
* {"_type":"uniform","_value":[low, high]}
* {"_type":"quniform","_value":[low, high, q]}
* Which means the variable value is a value like round(uniform(low, high) / q) * q
* Suitable for a discrete value with respect to which the objective is still somewhat "smooth", but which should be bounded both above and below. If you want to uniformly choose an integer from a range [low, high], you can write `_value` like this: `[low, high, 1]` (see the example after this list).
<br/>
* {"_type":"loguniform","_value":[low, high]}
......
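As an illustration, a search space file combining some of the types above might look like the following sketch (the variable names and ranges are made up for this example):
```json
{
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
    "batch_size": {"_type": "quniform", "_value": [16, 128, 16]},
    "random_seed": {"_type": "randint", "_value": [10]}
}
```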
```python
from setuptools.command.install import install
from subprocess import Popen

def read(fname):
    return open(os.path.join(os.path.dirname(__file__), fname), encoding='utf-8').read()

class CustomInstallCommand(install):
    '''a customized install class in pip module'''
    # ...

setup(
    author = 'Microsoft NNI Team',
    author_email = 'nni@microsoft.com',
    description = 'Neural Network Intelligence project',
    long_description = read('README.md'),
    license = 'MIT',
    url = 'https://github.com/Microsoft/nni',
    # ...
    python_requires = '>=3.5',
    install_requires = [
        'astor',
        'hyperopt',
        'json_tricks',
        'numpy',
        'psutil',
        'pyyaml',
        'requests',
        'scipy'
    ],
    cmdclass={
        # ...
```
```
json_tricks
# hyperopt tuner
numpy
hyperopt
```
```python
import os
import setuptools

def read(fname):
    return open(os.path.join(os.path.dirname(__file__), fname), encoding='utf-8').read()

setuptools.setup(
    name = 'nni',
    # ...
    python_requires = '>=3.5',
    install_requires = [
        'hyperopt',
        'json_tricks',
        'numpy',
        'scipy',
    ],
    # ...
    test_suite = 'tests',
```