"src/lib/vscode:/vscode.git/clone" did not exist on "95295061d0dbe09d0258516cdd780d0d64edb951"
Unverified Commit c7ca4510 authored by SparkSnail's avatar SparkSnail Committed by GitHub
Browse files

Merge pull request #171 from microsoft/master

merge master
parents a030505a af89df8c
@@ -6,7 +6,7 @@ In NNI, tuner will sample parameters/architecture according to the search space,
To define a search space, users should define the name of the variable, the type of sampling strategy and its parameters.
* An example of a search space definition is as follows:
```yaml
{
@@ -26,9 +26,18 @@ Take the first line as an example. `dropout_rate` is defined as a variable whose
All types of sampling strategies and their parameters are listed here:
* {"_type":"choice","_value":options}
* Which means the variable's value is one of the options. Here 'options' should be a list. Each element of options is a number or a string. It could also be a nested sub-search-space; this sub-search-space takes effect only when the corresponding element is chosen. The variables in this sub-search-space can be seen as conditional variables.
* A simple [example](../../examples/trials/mnist-cascading-search-space/search_space.json) of nested search space definition. If an element in the options list is a dict, it is a sub-search-space, and for our built-in tuners you have to add a key '_name' in this dict, which helps you to identify which element is chosen. Accordingly, here is a [sample](../../examples/trials/mnist-cascading-search-space/sample.json) which users can get from nni with a nested search space definition. The tuners that support nested search spaces are as follows:
- Random Search
- TPE
- Anneal
- Evolution
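To make the nested-choice idea concrete, here is a minimal sketch of what such a search space might look like, written as a Python dict (the on-disk format is JSON). The layer names and inner keys here are illustrative, not taken from a specific NNI example:

```python
import json

# A hypothetical nested search space: each option of the outer "choice" is a
# sub-search-space dict carrying a "_name" key, so built-in tuners can report
# which branch was chosen. Inner stochastic expressions only take effect when
# their branch is selected (conditional variables).
search_space = {
    "layer0": {
        "_type": "choice",
        "_value": [
            {"_name": "Empty"},
            {"_name": "Conv",
             "kernel_size": {"_type": "choice", "_value": [1, 2, 3, 5]}},
            {"_name": "Max_pool",
             "pooling_size": {"_type": "choice", "_value": [2, 3, 5]}},
        ],
    }
}

options = search_space["layer0"]["_value"]
# Every dict-valued option must carry "_name" for the built-in tuners.
assert all("_name" in opt for opt in options if isinstance(opt, dict))
print(json.dumps(search_space, indent=2))
```

The `_name` key is what lets a sampled configuration (like the `sample.json` linked above) record which branch was taken.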
* {"_type":"randint","_value":[upper]} * {"_type":"randint","_value":[upper]}
* Which means the variable value is a random integer in the range [0, upper). The semantics of this distribution is that there is no more correlation in the loss function between nearby integer values than between more distant integer values. This is an appropriate distribution for describing random seeds, for example. If the loss function is probably more correlated for nearby integer values, then you should probably use one of the "quantized" continuous distributions, such as quniform, qloguniform, qnormal or qlognormal. Note that if you want to change the lower bound, you can use `quniform` for now.
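As an illustration of these sampling semantics (a sketch, not NNI's internal code), `randint` draws uniformly from `[0, upper)`, while `quniform` with `q=1` gives integer steps over a custom `[low, high]` range:

```python
import random

def sample_randint(upper):
    # Integer drawn uniformly from [0, upper)
    return random.randrange(upper)

def sample_quniform(low, high, q):
    # Uniform draw, then rounded to the nearest multiple of q
    return round(random.uniform(low, high) / q) * q

samples = [sample_randint(10) for _ in range(100)]
assert all(0 <= s < 10 for s in samples)

# quniform with q=1 acts like a randint with a non-zero lower bound
shifted = [sample_quniform(5, 10, 1) for _ in range(100)]
assert all(5 <= s <= 10 for s in shifted)
```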
* {"_type":"uniform","_value":[low, high]} * {"_type":"uniform","_value":[low, high]}
@@ -48,6 +57,7 @@ All types of sampling strategies and their parameter are listed here:
* Suitable for a discrete variable with respect to which the objective is "smooth" and gets smoother with the size of the value, but which should be bounded both above and below.
* {"_type":"normal","_value":[mu, sigma]}
* Which means the variable value is a real value that's normally-distributed with mean mu and standard deviation sigma. When optimizing, this is an unconstrained variable.
* {"_type":"qnormal","_value":[mu, sigma, q]}
@@ -55,6 +65,7 @@ All types of sampling strategies and their parameter are listed here:
* Suitable for a discrete variable that probably takes a value around mu, but is fundamentally unbounded.
* {"_type":"lognormal","_value":[mu, sigma]}
* Which means the variable value is a value drawn according to exp(normal(mu, sigma)) so that the logarithm of the return value is normally distributed. When optimizing, this variable is constrained to be positive.
* {"_type":"qlognormal","_value":[mu, sigma, q]}
...
@@ -63,4 +63,4 @@ After the code changes, use **step 3** to rebuild your codes, then the changes w
---
Finally, we wish you a wonderful day.
For more contribution guidelines on making PRs or issues to NNI source code, you can refer to our [Contributing](./Contributing.md) document.
@@ -41,14 +41,14 @@ RECEIVED_PARAMS = nni.get_next_parameter()
```python
nni.report_intermediate_result(metrics)
```
`metrics` can be any Python object. If users use an NNI built-in tuner/assessor, `metrics` can only have two formats: 1) a number, e.g., float or int; 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to the [assessor](BuiltinAssessors.md). Usually, `metrics` is the periodically evaluated loss or accuracy.
- Report performance of the configuration
```python
nni.report_final_result(metrics)
```
`metrics` can also be any Python object. If users use an NNI built-in tuner/assessor, `metrics` follows the same format rule as in `report_intermediate_result`; the number indicates the model's performance, for example the model's accuracy or loss. This `metrics` is reported to the [tuner](BuiltinTuner.md).
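A small, hypothetical helper mirrors the two accepted `metrics` formats described above (a plain number, or a dict with a numeric `default` key); the helper name is illustrative, not part of the NNI API:

```python
# Hypothetical helper: extract the numeric value the built-in
# tuners/assessors would read from a reported `metrics` object.
def extract_default_metric(metrics):
    if isinstance(metrics, (int, float)):
        return float(metrics)
    if isinstance(metrics, dict) and isinstance(metrics.get("default"), (int, float)):
        return float(metrics["default"])
    raise ValueError("metrics must be a number or a dict with a numeric 'default' key")

assert extract_default_metric(0.93) == 0.93
# Extra keys are allowed; only 'default' must be numeric.
assert extract_default_metric({"default": 0.93, "loss": 0.21}) == 0.93
```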
### Step 3 - Enable NNI API
@@ -156,8 +156,8 @@ For more information, please refer to [HowToDebug](HowToDebug.md)
<a name="more-examples"></a>
## More Trial Examples
* [MNIST examples](MnistExamples.md)
* [Finding out best optimizer for Cifar10 classification](Cifar10Examples.md)
* [How to tune Scikit-learn on NNI](SklearnExamples.md)
* [Automatic Model Architecture Search for Reading Comprehension](SquadEvolutionExamples.md)
* [Tuning GBDT on NNI](GbdtExample.md)
@@ -2,5 +2,5 @@ Advanced Features
=====================
.. toctree::
MultiPhase<MultiPhase>
AdvancedNas<AdvancedNas>
@@ -15,5 +15,5 @@ Like Tuners, users can either use built-in Assessors, or customize an Assessor o
.. toctree::
:maxdepth: 2
Builtin Assessors<BuiltinAssessor>
Customized Assessors<CustomizeAssessor>
@@ -4,6 +4,6 @@ Builtin-Assessors
.. toctree::
:maxdepth: 1
Overview<BuiltinAssessors>
Medianstop<MedianstopAssessor>
Curvefitting<CurvefittingAssessor>
Builtin-Tuners
==================
.. toctree::
:maxdepth: 1
Overview<BuiltinTuner>
TPE<HyperoptTuner>
Random Search<HyperoptTuner>
Anneal<HyperoptTuner>
Naive Evolution<EvolutionTuner>
SMAC<SmacTuner>
Batch Tuner<BatchTuner>
Grid Search<GridsearchTuner>
Hyperband<HyperbandAdvisor>
Network Morphism<NetworkmorphismTuner>
Metis Tuner<MetisTuner>
BOHB<BohbAdvisor>
@@ -3,5 +3,5 @@ Contribute to NNI
###############################
.. toctree::
Development Setup<SetupNniDeveloperEnvironment>
Contribution Guide<Contributing>
@@ -5,8 +5,8 @@ Examples
.. toctree::
:maxdepth: 2
MNIST<MnistExamples>
Cifar10<Cifar10Examples>
Scikit-learn<SklearnExamples>
EvolutionSQuAD<SquadEvolutionExamples>
GBDT<GbdtExample>
@@ -13,10 +13,10 @@ Contents
Overview
QuickStart<QuickStart>
Tutorials<tutorials>
Examples<examples>
Reference<reference>
FAQ
Contribution<contribution>
Changelog<Release>
Blog<Blog/index>
@@ -4,7 +4,7 @@ References
.. toctree::
:maxdepth: 3
Command Line <Nnictl>
Python API <sdk_reference>
Annotation <AnnotationSpec>
Configuration<ExperimentConfig>
...
@@ -4,6 +4,6 @@ Introduction to NNI Training Services
.. toctree::
Local<LocalMode>
Remote<RemoteMachineMode>
OpenPAI<PaiMode>
Kubeflow<KubeflowMode>
FrameworkController<FrameworkControllerMode>
@@ -13,6 +13,6 @@ For details, please refer to the following tutorials:
.. toctree::
:maxdepth: 2
Builtin Tuners<BuiltinTuner>
Customized Tuners<CustomizeTuner>
Customized Advisor<CustomizeAdvisor>
@@ -131,21 +131,29 @@ def main(params):
    nni.report_final_result(test_acc)
def get_params():
    ''' Get parameters from command line '''
    parser = argparse.ArgumentParser()
    parser.add_argument("--data_dir", type=str, default='/tmp/tensorflow/mnist/input_data', help="data directory")
    parser.add_argument("--batch_num", type=int, default=1000)
    parser.add_argument("--batch_size", type=int, default=200)
    args, _ = parser.parse_known_args()
    return args

def parse_init_json(data):
    params = {}
    for key in data:
        value = data[key]
        layer_name = value["_name"]
        if layer_name == 'Empty':
            # Empty Layer
            params[key] = ['Empty']
        elif layer_name == 'Conv':
            # Conv layer
            params[key] = [layer_name, value['kernel_size'], value['kernel_size']]
        else:
            # Pooling Layer
            params[key] = [layer_name, value['pooling_size'], value['pooling_size']]
    return params
@@ -157,7 +165,7 @@ if __name__ == '__main__':
    RCV_PARAMS = parse_init_json(data)
    logger.debug(RCV_PARAMS)
    params = vars(get_params())
    params.update(RCV_PARAMS)
    print(RCV_PARAMS)
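The merge pattern in the hunk above can be sketched in isolation: `vars()` turns the argparse namespace into a plain dict of defaults, and `update()` lets tuner-provided parameters override them (the `received` dict below stands in for what `nni.get_next_parameter()` would return):

```python
import argparse

# Command-line defaults become a plain dict via vars(), then
# tuner-provided parameters override them with dict.update().
parser = argparse.ArgumentParser()
parser.add_argument("--batch_num", type=int, default=1000)
parser.add_argument("--batch_size", type=int, default=200)
args, _ = parser.parse_known_args([])    # empty argv for illustration

params = vars(args)                      # {'batch_num': 1000, 'batch_size': 200}
received = {"batch_size": 64}            # stand-in for nni.get_next_parameter()
params.update(received)

assert params == {"batch_num": 1000, "batch_size": 64}
```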
...
{
  "layer0": {
    "_name": "Avg_pool",
    "pooling_size": 3
  },
  "layer1": {
    "_name": "Conv",
    "kernel_size": 2
  },
  "layer2": {
    "_name": "Empty"
  },
  "layer3": {
    "_name": "Conv",
    "kernel_size": 5
  }
}
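To show how the two changes fit together, the snippet below reproduces the updated `parse_init_json` from the diff and runs it on a sample shaped like the JSON above (a shortened three-layer version for brevity):

```python
# Copy of the updated parse_init_json from the diff: each sub-search-space
# dict is flattened into the [name, size, size] form the model code expects.
def parse_init_json(data):
    params = {}
    for key in data:
        value = data[key]
        layer_name = value["_name"]
        if layer_name == 'Empty':
            params[key] = ['Empty']
        elif layer_name == 'Conv':
            params[key] = [layer_name, value['kernel_size'], value['kernel_size']]
        else:
            params[key] = [layer_name, value['pooling_size'], value['pooling_size']]
    return params

# A shortened sample in the new nested format
sample = {
    "layer0": {"_name": "Avg_pool", "pooling_size": 3},
    "layer1": {"_name": "Conv", "kernel_size": 2},
    "layer2": {"_name": "Empty"},
}

parsed = parse_init_json(sample)
assert parsed == {
    "layer0": ["Avg_pool", 3, 3],
    "layer1": ["Conv", 2, 2],
    "layer2": ["Empty"],
}
```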