Unverified commit 41a9a598 authored by SparkSnail, committed by GitHub

Merge pull request #137 from Microsoft/master

merge master
parents f09d51a7 33ad0f9d
@@ -17,7 +17,7 @@
NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning (AutoML) experiments.
The tool dispatches and runs trial jobs generated by tuning algorithms to search for the best neural architecture and/or hyper-parameters in different environments, such as local machines, remote servers, and the cloud.
### **NNI [v0.5.2](https://github.com/Microsoft/nni/releases) has been released!**
<p align="center">
  <a href="#nni-v05-has-been-released"><img src="docs/img/overview.svg" /></a>
</p>
@@ -115,7 +115,7 @@ Note:
* We support Linux (Ubuntu 16.04 or higher) and macOS (10.14.1) at the current stage.
* Run the following commands in an environment that has `python >= 3.5`, `git` and `wget`.
```bash
git clone -b v0.5.2 https://github.com/Microsoft/nni.git
cd nni
source install.sh
```
@@ -127,7 +127,7 @@ For the system requirements of NNI, please refer to [Install NNI](docs/en_US/Ins
The following example is an experiment built on TensorFlow. Make sure you have **TensorFlow installed** before running it.
* Download the examples by cloning the source code.
```bash
git clone -b v0.5.2 https://github.com/Microsoft/nni.git
```
* Run the MNIST example.
```bash
...
```
# NNI Annotation
## Overview
To improve the user experience and reduce user effort, we design an annotation grammar. Using NNI annotation, users can adapt their code to NNI just by adding some standalone annotation strings, which do not affect the execution of the original code.
@@ -32,7 +31,30 @@ In NNI, there are mainly four types of annotation:
- **sampling_algo**: The sampling algorithm that specifies a search space. Users should replace it with a built-in NNI sampling function whose name consists of the `nni.` prefix and a search space type specified in [SearchSpaceSpec](SearchSpaceSpec.md), such as `choice` or `uniform`.
- **name**: The name of the variable that the selected value will be assigned to. Note that this argument should be the same as the left-hand side of the following assignment statement.
There are 10 types to express your search space as follows:
* `@nni.variable(nni.choice(option1,option2,...,optionN),name=variable)`
  Which means the variable value is one of the options, which should be given as a list. The elements of options can themselves be stochastic expressions.
* `@nni.variable(nni.randint(upper),name=variable)`
  Which means the variable value is a random integer in the range [0, upper).
* `@nni.variable(nni.uniform(low, high),name=variable)`
  Which means the variable value is drawn uniformly between low and high.
* `@nni.variable(nni.quniform(low, high, q),name=variable)`
  Which means the variable value is a value like round(uniform(low, high) / q) * q.
* `@nni.variable(nni.loguniform(low, high),name=variable)`
  Which means the variable value is drawn according to exp(uniform(low, high)), so that the logarithm of the return value is uniformly distributed.
* `@nni.variable(nni.qloguniform(low, high, q),name=variable)`
  Which means the variable value is a value like round(exp(uniform(low, high)) / q) * q.
* `@nni.variable(nni.normal(mu, sigma),name=variable)`
  Which means the variable value is a real value that is normally distributed with mean mu and standard deviation sigma.
* `@nni.variable(nni.qnormal(mu, sigma, q),name=variable)`
  Which means the variable value is a value like round(normal(mu, sigma) / q) * q.
* `@nni.variable(nni.lognormal(mu, sigma),name=variable)`
  Which means the variable value is drawn according to exp(normal(mu, sigma)).
* `@nni.variable(nni.qlognormal(mu, sigma, q),name=variable)`
  Which means the variable value is a value like round(exp(normal(mu, sigma)) / q) * q.
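The three q-modified strategies above share one rounding pattern. A minimal plain-Python sketch of those formulas (for illustration only, not NNI's implementation):

```python
import math
import random

def quniform(low, high, q):
    # round(uniform(low, high) / q) * q: quantize a uniform draw to a multiple of q
    return round(random.uniform(low, high) / q) * q

def qloguniform(low, high, q):
    # round(exp(uniform(low, high)) / q) * q
    return round(math.exp(random.uniform(low, high)) / q) * q

def qnormal(mu, sigma, q):
    # round(normal(mu, sigma) / q) * q
    return round(random.normalvariate(mu, sigma) / q) * q

# every returned value is a multiple of q
print(quniform(0.0, 10.0, 0.5))
```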
Below is an example:
```python
'''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)'''
learning_rate = 0.1
```
@@ -47,7 +69,7 @@
**Arguments**
- **functions**: Several functions that are waiting to be selected from. Note that each should be a complete function call with arguments, such as `max_pool(hidden_layer, pool_size)`.
- **name**: The name of the function that will be replaced in the following assignment statement.
An example here is:
...
@@ -15,7 +15,7 @@ Currently we only support installation on Linux & Mac.
Prerequisite: `python >= 3.5`, `git`, `wget`
```bash
git clone -b v0.5.2 https://github.com/Microsoft/nni.git
cd nni
./install.sh
```
...
@@ -47,17 +47,17 @@ All types of sampling strategies and their parameters are listed here:
* Which means the variable value is a value like round(loguniform(low, high) / q) * q
* Suitable for a discrete variable with respect to which the objective is "smooth" and gets smoother with the size of the value, but which should be bounded both above and below.
* {"_type":"normal","_value":[mu, sigma]}
  * Which means the variable value is a real value that is normally distributed with mean mu and standard deviation sigma. When optimizing, this is an unconstrained variable.
* {"_type":"qnormal","_value":[mu, sigma, q]}
  * Which means the variable value is a value like round(normal(mu, sigma) / q) * q
  * Suitable for a discrete variable that probably takes a value around mu, but is fundamentally unbounded.
* {"_type":"lognormal","_value":[mu, sigma]}
  * Which means the variable value is a value drawn according to exp(normal(mu, sigma)), so that the logarithm of the return value is normally distributed. When optimizing, this variable is constrained to be positive.
* {"_type":"qlognormal","_value":[mu, sigma, q]}
  * Which means the variable value is a value like round(exp(normal(mu, sigma)) / q) * q
  * Suitable for a discrete variable with respect to which the objective is smooth and gets smoother with the size of the variable, which is bounded from one side.
...
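In a search-space file these strategies appear as JSON entries. A hypothetical interpreter (a sketch, not NNI's actual sampling code) shows how `_type` and `_value` could map to draws:

```python
import math
import random

# Hypothetical interpreter for search-space entries of the form
# {"_type": ..., "_value": [...]}; a sketch, not NNI's sampler.
def sample(spec):
    t, v = spec["_type"], spec["_value"]
    if t == "normal":
        mu, sigma = v
        return random.normalvariate(mu, sigma)
    if t == "qnormal":
        mu, sigma, q = v
        return round(random.normalvariate(mu, sigma) / q) * q
    if t == "lognormal":
        mu, sigma = v
        return math.exp(random.normalvariate(mu, sigma))
    if t == "qlognormal":
        mu, sigma, q = v
        return round(math.exp(random.normalvariate(mu, sigma)) / q) * q
    raise ValueError("unknown _type: %r" % t)

print(sample({"_type": "qnormal", "_value": [0.0, 1.0, 0.5]}))
```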
@@ -196,7 +196,7 @@ class MetisTuner(Tuner):
        -------
        result : dict
        """
        if len(self.samples_x) < self.cold_start_num:
            init_parameter = _rand_init(self.x_bounds, self.x_types, 1)[0]
            results = self._pack_output(init_parameter)
        else:
@@ -207,7 +207,7 @@ class MetisTuner(Tuner):
                minimize_starting_points=self.minimize_starting_points,
                minimize_constraints_fun=self.minimize_constraints_fun)
        logger.info("Generate parameters:\n" + str(results))
        return results
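The branch above follows a common cold-start pattern: return random parameters until `cold_start_num` samples have been observed, then switch to model-guided selection. A stripped-down sketch with hypothetical names (`ColdStartTuner` is not part of NNI):

```python
import random

class ColdStartTuner:
    """Returns random parameters until cold_start_num samples exist."""
    def __init__(self, bounds, cold_start_num=10):
        self.bounds = bounds            # {name: (low, high)}
        self.cold_start_num = cold_start_num
        self.samples_x = []             # observed parameter vectors

    def _rand_init(self):
        # uniform random point inside the bounds
        return {k: random.uniform(lo, hi) for k, (lo, hi) in self.bounds.items()}

    def generate_parameters(self):
        if len(self.samples_x) < self.cold_start_num:
            return self._rand_init()    # not enough data yet: explore randomly
        return self._model_based_choice()

    def _model_based_choice(self):
        # placeholder for the model-guided step (a Gaussian process in Metis)
        return self._rand_init()

tuner = ColdStartTuner({"lr": (0.001, 0.1)}, cold_start_num=2)
params = tuner.generate_parameters()
```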
@@ -226,8 +226,8 @@ class MetisTuner(Tuner):
            value = -value
        logger.info("Received trial result.")
        logger.info("value is: " + str(value))
        logger.info("parameter is: " + str(parameters))
        # parse parameter to sample_x
        sample_x = [0 for i in range(len(self.key_order))]
@@ -340,7 +340,7 @@ class MetisTuner(Tuner):
        results_outliers = gp_outlier_detection.outlierDetection_threaded(samples_x, samples_y_aggregation)
        if results_outliers is not None:
            #temp = len(candidates)
            for results_outlier in results_outliers:
                if _num_past_samples(samples_x[results_outlier['samples_idx']], samples_x, samples_y) < max_resampling_per_x:
@@ -370,12 +370,12 @@ class MetisTuner(Tuner):
            temp_improvement = threads_result['expected_lowest_mu'] - lm_current['expected_mu']
            if next_improvement > temp_improvement:
                # logger.info("DEBUG: \"next_candidate\" changed: \
                # lowest mu might reduce from %f (%s) to %f (%s), %s\n" %\
                # lm_current['expected_mu'], str(lm_current['hyperparameter']),\
                # threads_result['expected_lowest_mu'],\
                # str(threads_result['candidate']['hyperparameter']),\
                # threads_result['candidate']['reason'])
                next_improvement = temp_improvement
                next_candidate = threads_result['candidate']
...
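The loop above keeps whichever candidate promises the lowest expected mu. The selection logic, reduced to its core with hypothetical data (not the tuner's actual structures):

```python
def pick_next_candidate(current_mu, thread_results):
    """Keep the candidate whose expected_lowest_mu most reduces current_mu."""
    next_improvement = 0.0
    next_candidate = None
    for result in thread_results:
        temp_improvement = result["expected_lowest_mu"] - current_mu
        # more negative temp_improvement = bigger expected reduction
        if next_improvement > temp_improvement:
            next_improvement = temp_improvement
            next_candidate = result["candidate"]
    return next_candidate

results = [
    {"expected_lowest_mu": 0.9, "candidate": "a"},
    {"expected_lowest_mu": 0.4, "candidate": "b"},
    {"expected_lowest_mu": 0.6, "candidate": "c"},
]
print(pick_next_candidate(0.8, results))  # "b": largest expected reduction
```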
# NNI Annotation
## Overview
To improve the user experience and reduce user effort, we design an annotation grammar. Using NNI annotation, users can adapt their code to NNI just by adding some standalone annotation strings, which do not affect the execution of the original code.
Below is an example:
```python
'''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)'''
learning_rate = 0.1
```
The meaning of this example is that NNI will choose one of several values (0.1, 0.01, 0.001) to assign to the learning_rate variable. Specifically, the first line is an NNI annotation, which is a single string. The following line is an assignment statement. What NNI does here is to replace the right-hand value of this assignment statement according to the information provided by the annotation line.

In this way, users can either run the Python code directly or launch NNI to tune hyper-parameters in this code, without changing any code.

## Types of Annotation:
In NNI, there are mainly four types of annotation:

### 1. Annotate variables
`'''@nni.variable(sampling_algo, name)'''`

`@nni.variable` is used in NNI to annotate a variable.

**Arguments**

- **sampling_algo**: The sampling algorithm that specifies a search space. Users should replace it with a built-in NNI sampling function whose name consists of the `nni.` prefix and a search space type specified in [SearchSpaceSpec](https://nni.readthedocs.io/en/latest/SearchSpaceSpec.html), such as `choice` or `uniform`.
- **name**: The name of the variable that the selected value will be assigned to. Note that this argument should be the same as the left-hand side of the following assignment statement.

There are 10 types to express your search space as follows:

* `@nni.variable(nni.choice(option1,option2,...,optionN),name=variable)`
  Which means the variable value is one of the options, which should be given as a list. The elements of options can themselves be stochastic expressions.
* `@nni.variable(nni.randint(upper),name=variable)`
  Which means the variable value is a random integer in the range [0, upper).
* `@nni.variable(nni.uniform(low, high),name=variable)`
  Which means the variable value is drawn uniformly between low and high.
* `@nni.variable(nni.quniform(low, high, q),name=variable)`
  Which means the variable value is a value like round(uniform(low, high) / q) * q.
* `@nni.variable(nni.loguniform(low, high),name=variable)`
  Which means the variable value is drawn according to exp(uniform(low, high)), so that the logarithm of the return value is uniformly distributed.
* `@nni.variable(nni.qloguniform(low, high, q),name=variable)`
  Which means the variable value is a value like round(exp(uniform(low, high)) / q) * q.
* `@nni.variable(nni.normal(mu, sigma),name=variable)`
  Which means the variable value is a real value that is normally distributed with mean mu and standard deviation sigma.
* `@nni.variable(nni.qnormal(mu, sigma, q),name=variable)`
  Which means the variable value is a value like round(normal(mu, sigma) / q) * q.
* `@nni.variable(nni.lognormal(mu, sigma),name=variable)`
  Which means the variable value is drawn according to exp(normal(mu, sigma)).
* `@nni.variable(nni.qlognormal(mu, sigma, q),name=variable)`
  Which means the variable value is a value like round(exp(normal(mu, sigma)) / q) * q.
Below is an example:
```python
'''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)'''
learning_rate = 0.1
```
### 2. Annotate functions
`'''@nni.function_choice(*functions, name)'''`
`@nni.function_choice` is used to choose one from several functions.
**Arguments**
- **functions**: Several functions that are waiting to be selected from. Note that each should be a complete function call with arguments, such as `max_pool(hidden_layer, pool_size)`.
- **name**: The name of the function that will be replaced in the following assignment statement.
An example here is:
```python
"""@nni.function_choice(max_pool(hidden_layer, pool_size), avg_pool(hidden_layer, pool_size), name=max_pool)"""
h_pooling = max_pool(hidden_layer, pool_size)
```
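Conceptually, `@nni.function_choice` makes NNI pick one complete call from the listed alternatives. Outside NNI, the same choice can be sketched with a plain dispatch table (the pooling functions here are stand-ins for illustration):

```python
def max_pool(layer, pool_size):
    # stand-in pooling function for illustration
    return ("max", layer, pool_size)

def avg_pool(layer, pool_size):
    return ("avg", layer, pool_size)

# NNI decides which call to execute; here a name chosen by hand
# stands in for the tuner's decision.
chosen = "avg_pool"
calls = {
    "max_pool": lambda: max_pool("hidden_layer", 2),
    "avg_pool": lambda: avg_pool("hidden_layer", 2),
}
h_pooling = calls[chosen]()
print(h_pooling)  # ('avg', 'hidden_layer', 2)
```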
### 3. Annotate intermediate result
`'''@nni.report_intermediate_result(metrics)'''`
`@nni.report_intermediate_result` is used to report intermediate result, whose usage is the same as `nni.report_intermediate_result` in [Trials.md](https://nni.readthedocs.io/en/latest/Trials.html)
### 4. Annotate final result
`'''@nni.report_final_result(metrics)'''`
`@nni.report_final_result` is used to report the final result of the current trial, whose usage is the same as `nni.report_final_result` in [Trials.md](https://nni.readthedocs.io/en/latest/Trials.html)
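Because each annotation is a bare string literal, the annotated code also runs as ordinary Python with the default values left untouched; a minimal check of that claim:

```python
# The annotation is just an expression-statement string: plain Python
# evaluates and discards it, so the default assignment below survives.
'''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)'''
learning_rate = 0.1

'''@nni.report_final_result(learning_rate)'''
print(learning_rate)  # 0.1 -- the default is used when NNI is not in the loop
```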