Unverified commit bfd99ad5 authored by xuehui, committed by GitHub

Fix Issue #903 (#916)

* update readme in ga_squad

* update readme

* fix typo

* Update README.md

* Update README.md

* Update README.md

* update readme

* update

* fix path

* update reference

* fix bug in config file

* update nni_arch_overview.png

* update

* update

* update

* update home page

* fix issue: #902, random parameter

* update doc about random

* change Random tuner test

* update metric_test for Random tuner

* update random config in test

* update Evolution_SQuAD doc
parent 7d7387cb
@@ -73,8 +73,6 @@ Random search is suggested when each trial does not take too long (e.g., each tr
 # config.yml
 tuner:
   builtinTunerName: Random
-  classArgs:
-    optimize_mode: maximize
 ```
 <br>
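Dropping `optimize_mode` from the Random tuner's config makes sense because pure random search samples every trial independently and never looks at the reported metric, so there is nothing to maximize or minimize at sampling time. A minimal sketch of that behavior, assuming an NNI-style search space with `_type`/`_value` entries (this is an illustration, not NNI's actual tuner code):

```python
import random

def sample(search_space, rng=None):
    """Draw one parameter set uniformly at random from an
    NNI-style search space ({"_type": ..., "_value": ...})."""
    rng = rng or random.Random()
    params = {}
    for name, spec in search_space.items():
        kind, values = spec["_type"], spec["_value"]
        if kind == "choice":
            # pick one of the listed options
            params[name] = rng.choice(values)
        elif kind == "uniform":
            # draw a float from [low, high]
            low, high = values
            params[name] = rng.uniform(low, high)
        else:
            raise ValueError("unsupported _type: " + kind)
    return params

space = {
    "optimizer": {"_type": "choice", "_value": ["SGD", "Adam"]},
    "lr": {"_type": "uniform", "_value": [0.0001, 0.1]},
}
print(sample(space))
```

Note that the metric reported by a finished trial is never an input to `sample`, which is exactly why a `classArgs`/`optimize_mode` section is superfluous for this tuner.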
@@ -115,10 +113,6 @@ tuner:
 Its requirement of computation resource is relatively high. Specifically, it requires large initial population to avoid falling into local optimum. If your trial is short or leverages assessor, this tuner is a good choice. And, it is more suggested when your trial code supports weight transfer, that is, the trial could inherit the converged weights from its parent(s). This can greatly speed up the training progress.
-**Requirement of classArg**
-* **optimize_mode** (*maximize or minimize, optional, default = maximize*) - If 'maximize', tuners will return the hyperparameter set with larger expectation. If 'minimize', tuner will return the hyperparameter set with smaller expectation.
 **Usage example**
 ```yaml
......
@@ -2,8 +2,7 @@
 This example shows us how to use Genetic Algorithm to find good model architectures for Reading Comprehension.
 ## 1. Search Space
-Since attention and recurrent neural network (RNN) have been proven effective in Reading Comprehension.
-We conclude the search space as follow:
+Since attention and RNN have been proven effective in Reading Comprehension, we conclude the search space as follow:
 1. IDENTITY (Effectively means keep training).
 2. INSERT-RNN-LAYER (Inserts a LSTM. Comparing the performance of GRU and LSTM in our experiment, we decided to use LSTM here.)
......
@@ -158,7 +158,7 @@ class HyperoptTuner(Tuner):
     HyperoptTuner is a tuner which using hyperopt algorithm.
     """
-    def __init__(self, algorithm_name, optimize_mode):
+    def __init__(self, algorithm_name, optimize_mode='minimize'):
        """
        Parameters
        ----------
......
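Giving `optimize_mode` a default value is what lets the Random tuner be constructed when the config file no longer supplies any `classArgs`. A toy stand-in (the class and enum below are illustrative, not NNI's real implementation) shows the before/after difference of the signature change:

```python
from enum import Enum

class OptimizeMode(Enum):
    # toy stand-in for an optimize-mode enum
    Minimize = 'minimize'
    Maximize = 'maximize'

class ToyHyperoptTuner:
    """Illustrates the signature change: optimize_mode now defaults
    to 'minimize', so instantiation without classArgs succeeds."""
    def __init__(self, algorithm_name, optimize_mode='minimize'):
        self.algorithm_name = algorithm_name
        self.optimize_mode = OptimizeMode(optimize_mode)

# Before the fix, constructing with only the algorithm name
# (i.e., no classArgs in config.yml) raised a TypeError.
tuner = ToyHyperoptTuner('random_search')
print(tuner.optimize_mode.value)  # 'minimize'
```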
@@ -7,8 +7,6 @@ searchSpacePath: ./cifar10_search_space.json
 tuner:
   builtinTunerName: Random
-  classArgs:
-    optimize_mode: maximize
 assessor:
   builtinAssessorName: Medianstop
   classArgs:
......
@@ -6,8 +6,6 @@ trialConcurrency: 2
 tuner:
   builtinTunerName: Random
-  classArgs:
-    optimize_mode: maximize
 assessor:
   builtinAssessorName: Medianstop
   classArgs:
......
@@ -7,8 +7,6 @@ searchSpacePath: ../../../examples/trials/mnist-keras/search_space.json
 tuner:
   builtinTunerName: Random
-  classArgs:
-    optimize_mode: maximize
 assessor:
   builtinAssessorName: Medianstop
   classArgs:
......
@@ -7,8 +7,6 @@ searchSpacePath: ./mnist_search_space.json
 tuner:
   builtinTunerName: Random
-  classArgs:
-    optimize_mode: maximize
 assessor:
   builtinAssessorName: Medianstop
   classArgs:
......
@@ -7,8 +7,6 @@ searchSpacePath: ../../../examples/trials/sklearn/classification/search_space.js
 tuner:
   builtinTunerName: Random
-  classArgs:
-    optimize_mode: maximize
 assessor:
   builtinAssessorName: Medianstop
   classArgs:
......
@@ -7,8 +7,6 @@ searchSpacePath: ../../../examples/trials/sklearn/regression/search_space.json
 tuner:
   builtinTunerName: Random
-  classArgs:
-    optimize_mode: maximize
 assessor:
   builtinAssessorName: Medianstop
   classArgs:
......
@@ -6,8 +6,6 @@ trialConcurrency: 1
 tuner:
   builtinTunerName: Random
-  classArgs:
-    optimize_mode: maximize
 assessor:
   builtinAssessorName: Medianstop
   classArgs:
......
@@ -7,8 +7,6 @@ searchSpacePath: ./search_space.json
 tuner:
   builtinTunerName: Random
-  classArgs:
-    optimize_mode: maximize
 trial:
   codeDir: .
......
@@ -38,7 +38,7 @@ def switch(dispatch_type, dispatch_name):
     '''Change dispatch in config.yml'''
     config_path = 'tuner_test/local.yml'
     experiment_config = get_yml_content(config_path)
-    if dispatch_name in ['GridSearch', 'BatchTuner']:
+    if dispatch_name in ['GridSearch', 'BatchTuner', 'Random']:
        experiment_config[dispatch_type.lower()] = {
            'builtin' + dispatch_type + 'Name': dispatch_name
        }
......
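The test helper above now routes Random through the no-`classArgs` branch. A self-contained sketch of that branching logic (the function name `build_dispatch_config` is made up for illustration, and the `classArgs` fallback for the other tuners is an assumption, since the diff elides the `else` branch):

```python
def build_dispatch_config(dispatch_type, dispatch_name):
    """Sketch of the updated switch() logic: tuners that take no
    classArgs (now including Random) get only a name entry."""
    if dispatch_name in ['GridSearch', 'BatchTuner', 'Random']:
        return {'builtin' + dispatch_type + 'Name': dispatch_name}
    # assumed fallback for tuners that do accept optimize_mode
    return {
        'builtin' + dispatch_type + 'Name': dispatch_name,
        'classArgs': {'optimize_mode': 'maximize'},
    }

print(build_dispatch_config('Tuner', 'Random'))  # no classArgs key
print(build_dispatch_config('Tuner', 'TPE'))     # includes classArgs
```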
@@ -54,14 +54,14 @@ Optional('advisor'): Or({
     Optional('gpuNum'): And(int, lambda x: 0 <= x <= 99999),
 }),
 Optional('tuner'): Or({
-    'builtinTunerName': Or('TPE', 'Random', 'Anneal', 'SMAC', 'Evolution'),
+    'builtinTunerName': Or('TPE', 'Anneal', 'SMAC', 'Evolution'),
     Optional('classArgs'): {
         'optimize_mode': Or('maximize', 'minimize')
     },
     Optional('includeIntermediateResults'): bool,
     Optional('gpuNum'): And(int, lambda x: 0 <= x <= 99999),
 },{
-    'builtinTunerName': Or('BatchTuner', 'GridSearch'),
+    'builtinTunerName': Or('BatchTuner', 'GridSearch', 'Random'),
     Optional('gpuNum'): And(int, lambda x: 0 <= x <= 99999),
 },{
     'builtinTunerName': 'NetworkMorphism',
......
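Moving `'Random'` between the two `Or` branches means a config that still passes `classArgs` to the Random tuner will now fail schema validation, because only the first branch declares an optional `classArgs` key. A plain-Python sketch of that effect, without the `schema` library (a toy check, not the real validator):

```python
def validate_tuner(tuner_config):
    """Toy check mirroring the schema change: Random now belongs to
    the branch of built-in tuners that accept no classArgs."""
    no_args = {'BatchTuner', 'GridSearch', 'Random'}
    with_args = {'TPE', 'Anneal', 'SMAC', 'Evolution'}
    name = tuner_config.get('builtinTunerName')
    if name in no_args:
        # this branch has no classArgs key at all
        return 'classArgs' not in tuner_config
    if name in with_args:
        args = tuner_config.get('classArgs', {})
        return args.get('optimize_mode', 'maximize') in ('maximize', 'minimize')
    return False

print(validate_tuner({'builtinTunerName': 'Random'}))               # True
print(validate_tuner({'builtinTunerName': 'Random',
                      'classArgs': {'optimize_mode': 'maximize'}}))  # False
```

This is consistent with the config-file edits earlier in the commit, which delete the now-invalid `classArgs` sections from every Random-tuner example.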