Unverified Commit 32efaa36 authored by SparkSnail, committed by GitHub

Merge pull request #219 from microsoft/master

merge master
parents cd3a912a 97b258b0
......@@ -18,7 +18,7 @@ NNI (Neural Network Intelligence) is a toolkit to help users run automated machi
The tool dispatches and runs trial jobs generated by tuning algorithms to search the best neural architecture and/or hyper-parameters in different environments like local machine, remote servers and cloud.
### **NNI v1.1 has been released! &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>**
### **NNI v1.2 has been released! &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>**
<p align="center">
<a href="#nni-has-been-released"><img src="docs/img/overview.svg" /></a>
......@@ -34,7 +34,7 @@ The tool dispatches and runs trial jobs generated by tuning algorithms to search
<img src="docs/img/bar.png"/>
</td>
<td>
<b>Tuning Algorithms</b>
<b>Algorithms</b>
<img src="docs/img/bar.png"/>
</td>
<td>
......@@ -82,14 +82,9 @@ The tool dispatches and runs trial jobs generated by tuning algorithms to search
</td>
<td align="left" >
<a href="docs/en_US/Tuner/BuiltinTuner.md">Tuner</a>
<ul>
<li><b>General Tuner</b></li>
<ul>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#Random">Random Search</a></li>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#Evolution">Naïve Evolution</a></li>
</ul>
<li><b>Tuner for <a href="docs/en_US/CommunitySharings/HpoComparision.md">HPO</a></b></li>
<ul>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#TPE">TPE</a></li>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#Anneal">Anneal</a></li>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#SMAC">SMAC</a></li>
......@@ -99,19 +94,33 @@ The tool dispatches and runs trial jobs generated by tuning algorithms to search
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#MetisTuner">Metis Tuner</a></li>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#BOHB">BOHB</a></li>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#GPTuner">GP Tuner</a></li>
</ul>
<li><b>Tuner for <a href="docs/en_US/AdvancedFeature/GeneralNasInterfaces.md">NAS</a></b></li>
<ul>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#PPOTuner">PPO Tuner</a></li>
<li><a href="docs/en_US/Tuner/BuiltinTuner.md#NetworkMorphism">Network Morphism</a></li>
<li><a href="examples/tuners/enas_nni/README.md">ENAS</a></li>
</ul>
</ul>
<a href="docs/en_US/Assessor/BuiltinAssessor.md">Assessor</a>
<ul>
<ul>
<li><a href="docs/en_US/Assessor/BuiltinAssessor.md#Medianstop">Median Stop</a></li>
<li><a href="docs/en_US/Assessor/BuiltinAssessor.md#Curvefitting">Curve Fitting</a></li>
</ul>
<a href="docs/en_US/NAS/Overview.md">NAS (Beta)</a>
<ul>
<li><a href="docs/en_US/NAS/Overview.md#enas">ENAS</a></li>
<li><a href="docs/en_US/NAS/Overview.md#darts">DARTS</a></li>
<li><a href="docs/en_US/NAS/Overview.md#p-darts">P-DARTS</a></li>
</ul>
<a href="docs/en_US/Compressor/Overview.md">Model Compression (Beta)</a>
<ul>
<li><a href="docs/en_US/Compressor/Pruner.md#agp-pruner">AGP Pruner</a></li>
<li><a href="docs/en_US/Compressor/Pruner.md#slim-pruner">Slim Pruner</a></li>
<li><a href="docs/en_US/Compressor/Pruner.md#fpgm-pruner">FPGM Pruner</a></li>
<li><a href="docs/en_US/Compressor/Quantizer.md#qat-quantizer">QAT Quantizer</a></li>
<li><a href="docs/en_US/Compressor/Quantizer.md#dorefa-quantizer">DoReFa Quantizer</a></li>
<li><a href="docs/en_US/Compressor/Overview.md">More...</a></li>
</ul>
<a href="docs/en_US/FeatureEngineering/Overview.md">Feature Engineering (Beta)</a>
<ul>
<li><a href="docs/en_US/FeatureEngineering/GradientFeatureSelector.md">GradientFeatureSelector</a></li>
<li><a href="docs/en_US/FeatureEngineering/GBDTSelector.md">GBDTSelector</a></li>
</ul>
</td>
<td>
......@@ -211,7 +220,7 @@ Linux and MacOS
* Run the following commands in an environment that has `python >= 3.5`, `git` and `wget`.
```bash
git clone -b v1.1 https://github.com/Microsoft/nni.git
git clone -b v1.2 https://github.com/Microsoft/nni.git
cd nni
source install.sh
```
......@@ -221,7 +230,7 @@ Windows
* Run the following commands in an environment that has `python >=3.5`, `git` and `PowerShell`
```bash
git clone -b v1.1 https://github.com/Microsoft/nni.git
git clone -b v1.2 https://github.com/Microsoft/nni.git
cd nni
powershell -ExecutionPolicy Bypass -file install.ps1
```
......@@ -237,7 +246,7 @@ The following example is an experiment built on TensorFlow. Make sure you have *
* Download the examples by cloning the source code.
```bash
git clone -b v1.1 https://github.com/Microsoft/nni.git
git clone -b v1.2 https://github.com/Microsoft/nni.git
```
Linux and MacOS
......
......@@ -38,8 +38,8 @@ jobs:
displayName: 'Run pylint'
- script: |
python3 -m pip install flake8 --user
IGNORE=./tools/nni_annotation/testcase/*:F821,./examples/trials/mnist-nas/*/mnist*.py:F821,./examples/trials/nas_cifar10/src/cifar10/general_child.py:F821
python3 -m flake8 . --count --per-file-ignores=$IGNORE --select=E9,F63,F72,F82 --show-source --statistics
EXCLUDES=./src/nni_manager/,./tools/nni_annotation/testcase/,./examples/trials/mnist-nas/*/mnist*.py,./examples/trials/nas_cifar10/src/cifar10/general_child.py
python3 -m flake8 . --count --exclude=$EXCLUDES --select=E9,F63,F72,F82 --show-source --statistics
displayName: 'Run flake8 tests to find Python syntax errors and undefined names'
- script: |
cd test
......
......@@ -62,7 +62,7 @@ setuptools.setup(
'scipy',
'coverage',
'colorama',
'sklearn'
'scikit-learn==0.20'
],
classifiers = [
'Programming Language :: Python :: 3',
......
......@@ -241,17 +241,17 @@ print("Pipeline Score: ", pipeline.score(X_train, y_train))
# Benchmark
`Baseline` means without any feature selection, we directly pass the data to LogisticRegression. For this benchmark, we only use 10% data from the train as test data.
| Dataset | Baseline | GradientFeatureSelector | TreeBasedClassifier | #Train | #Feature |
| ----------- | ------ | ------ | ------- | ------- | -------- |
| colon-cancer | 0.7547 | 0.7368 | 0.7223 | 62 | 2,000 |
| gisette | 0.9725 | 0.89416 | 0.9792 | 6,000 | 5,000 |
| avazu | 0.8834 | N/A | N/A | 40,428,967 | 1,000,000 |
| rcv1 | 0.9644 | 0.7333 | 0.9615 | 20,242 | 47,236 |
| news20.binary | 0.9208 | 0.6870 | 0.9070 | 19,996 | 1,355,191 |
| real-sim | 0.9681 | 0.7969 | 0.9591 | 72,309 | 20,958 |
The benchmark could be download in [here](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
`Baseline` means no feature selection: we pass the data directly to LogisticRegression. For this benchmark, we use only 10% of the training data as test data. For the GradientFeatureSelector, we take only the top 20 features. The metric is the mean accuracy on the given test data and labels.
| Dataset | Baseline | GradientFeatureSelector top20 | GradientFeatureSelector auto | TreeBasedClassifier | #Train | #Feature |
| ------------- | ------ | ------ | ------ | ------ | ------- | --------- |
| colon-cancer | 0.7547 | 0.7368 | 0.5389 | 0.7223 | 62 | 2,000 |
| gisette | 0.9725 | 0.9241 | 0.9658 | 0.9792 | 6,000 | 5,000 |
| rcv1 | 0.9644 | 0.7333 | 0.9548 | 0.9615 | 20,242 | 47,236 |
| news20.binary | 0.9208 | 0.8780 | 0.8875 | 0.9070 | 19,996 | 1,355,191 |
| real-sim | 0.9681 | 0.7969 | 0.9439 | 0.9591 | 72,309 | 20,958 |
The benchmark datasets can be downloaded [here](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/).
The benchmark code can be found at `/examples/feature_engineering/gradient_feature_selector/benchmark_test.py`.
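For context, here is a hedged sketch of what a run along the lines of this benchmark might look like. It assumes the `FeatureGradientSelector` API (`n_features`, `fit`, `get_selected_features`) described in the GradientFeatureSelector doc, and substitutes a synthetic dataset for the LIBSVM ones in the table above.
```python
from nni.feature_engineering.gradient_selector import FeatureGradientSelector
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the LIBSVM datasets listed above.
X, y = make_classification(n_samples=2000, n_features=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, random_state=0)  # hold out 10% of the data as test

selector = FeatureGradientSelector(n_features=20)  # the "top20" setting
selector.fit(X_train, y_train)
selected = selector.get_selected_features()

clf = LogisticRegression(solver="liblinear")
clf.fit(X_train[:, selected], y_train)
print("Mean accuracy:", clf.score(X_test[:, selected], y_test))
```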
# DARTS on NNI
## Introduction
The paper [DARTS: Differentiable Architecture Search](https://arxiv.org/abs/1806.09055) addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Their method is based on a continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent.
To implement this, the authors optimize the network weights and the architecture weights alternately in mini-batches. They further explore using second-order optimization (unrolling) instead of first-order to improve performance.
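Concretely, the bilevel problem from the paper can be written as follows (notation from the DARTS paper, not NNI code):
```latex
% Architecture parameters \alpha are optimized on the validation loss,
% while the network weights w are optimized on the training loss:
\min_{\alpha}\; \mathcal{L}_{val}\bigl(w^{*}(\alpha),\ \alpha\bigr)
\quad \text{s.t.} \quad
w^{*}(\alpha) = \operatorname*{arg\,min}_{w}\; \mathcal{L}_{train}(w,\ \alpha)
```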
The implementation on NNI is based on the [official implementation](https://github.com/quark0/darts) and a [popular 3rd-party repo](https://github.com/khanrc/pt.darts). So far, first- and second-order optimization and training from scratch on CIFAR10 have been implemented.
## Reproduce Results
To reproduce the results in the paper, we did experiments with first and second order optimization. Due to time limits, we retrained *only the best architecture* derived from the search phase, and we repeated the experiment *only once*. Our results are currently on par with the results reported in the paper. We will add more results later when they are ready.
| | In paper | Reproduction |
| ---------------------- | ------------- | ------------ |
| First order (CIFAR10) | 3.00 +/- 0.14 | 2.78 |
| Second order (CIFAR10) | 2.76 +/- 0.09 | 2.89 |
# ENAS on NNI
## Introduction
The paper [Efficient Neural Architecture Search via Parameter Sharing](https://arxiv.org/abs/1802.03268) uses parameter sharing between child models to accelerate the NAS process. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile, the model corresponding to the selected subgraph is trained to minimize a canonical cross-entropy loss.
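In standard policy-gradient (REINFORCE) form, on which the paper builds, the controller update can be written as follows (our notation, not NNI code):
```latex
% The controller \pi(m;\theta) samples a subgraph m, and R(m) is the
% validation reward of the corresponding child model:
\nabla_{\theta} J(\theta)
  = \mathbb{E}_{m \sim \pi(m;\theta)}\bigl[ R(m)\, \nabla_{\theta} \log \pi(m;\theta) \bigr]
```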
The implementation on NNI is based on the [official implementation in TensorFlow](https://github.com/melodyguan/enas), with the macro and micro search spaces on CIFAR10 included. Since the code to train from scratch on NNI is not ready yet, reproduction results are currently unavailable.
......@@ -55,7 +55,7 @@ def forward(self, x):
out = self.input_switch([in_tensor1, in_tensor2, in_tensor3])
...
```
`InputChoice` is a PyTorch module, in init, it needs meta information, for example, from how many input candidates to choose how many inputs, the name of this initialized `InputChoice`. The real candidate input tensors can only be obtained in `forward` function. In `forward`, `InputChoice` instance is called with real candidate input tensors.
`InputChoice` is a PyTorch module. At initialization, it needs meta information: for example, how many inputs to choose from how many input candidates, and the name of this initialized `InputChoice`. The real candidate input tensors can only be obtained in the `forward` function, where the `InputChoice` module you create in `__init__` (e.g., `self.input_switch`) is called with the real candidate input tensors.
Some [NAS trainers](#one-shot-training-mode) need to know the source layers of the input tensors; thus, we add an input argument `choose_from` to `InputChoice` to indicate the source layer of each candidate input. `choose_from` is a list of strings, where each element is the `key` of a `LayerChoice` or `InputChoice`, or the name of a module (refer to [the code](https://github.com/microsoft/nni/blob/master/src/sdk/pynni/nni/nas/pytorch/mutables.py) for more details).
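Putting the two pieces together, here is a minimal sketch of a module that declares an `InputChoice` in `__init__` and calls it in `forward`. The constructor arguments (`n_candidates`, `n_chosen`, `choose_from`, `key`) follow the description above, but treat the exact signature as an assumption to verify against the linked `mutables.py`; the layer names are illustrative.
```python
import torch.nn as nn
from nni.nas.pytorch import mutables

class Cell(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(16, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 16, 5, padding=2)
        # Meta information only: choose 1 input out of 2 candidates.
        # `choose_from` names the source layers of the candidate inputs.
        self.input_switch = mutables.InputChoice(
            n_candidates=2, n_chosen=1,
            choose_from=["conv1", "conv2"], key="skip_switch")

    def forward(self, x):
        in_tensor1 = self.conv1(x)
        in_tensor2 = self.conv2(x)
        # The real candidate tensors only exist here, in forward.
        return self.input_switch([in_tensor1, in_tensor2])
```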
......@@ -102,8 +102,6 @@ Different trainers could have different input arguments depending on their algor
The supported trainers can be found [here](./Overview.md#supported-one-shot-nas-algorithms). A very simple example using NNI NAS API can be found [here](https://github.com/microsoft/nni/tree/master/examples/nas/simple/train.py).
The complete example code can be found [here]().
### Classic distributed search
Neural architecture search was originally executed by running each child model independently as a trial job. We also support this search approach, and it naturally fits into the NNI hyper-parameter tuning framework, where the tuner generates a child model for the next trial and trials run in the training service.
......
......@@ -6,11 +6,11 @@ However, it takes great efforts to implement NAS algorithms, and it is hard to r
With this motivation, our ambition is to provide a unified architecture in NNI to accelerate innovation on NAS and to apply state-of-the-art algorithms to real-world problems faster.
With [the unified interface](.NasInterface.md), there are two different modes for the architecture search. [The one](#supported-one-shot-nas-algorithms) is the so-called one-shot NAS, where a super-net is built based on search space, and using one shot training to generate good-performing child model. [The other](.ClassicNas.md) is the traditional searching approach, where each child model in search space runs as an independent trial, the performance result is sent to tuner and the tuner generates new child model.
With [the unified interface](./NasInterface.md), there are two different modes for architecture search. [One](#supported-one-shot-nas-algorithms) is the so-called one-shot NAS, where a super-net is built based on the search space and one-shot training is used to generate a good-performing child model. [The other](./NasInterface.md#classic-distributed-search) is the traditional searching approach, where each child model in the search space runs as an independent trial; the performance result is sent to the tuner, and the tuner generates new child models.
* [Supported One-shot NAS Algorithms](#supported-one-shot-nas-algorithms)
* [Classic Distributed NAS with NNI experiment](.NasInterface.md#classic-distributed-search)
* [NNI NAS Programming Interface](.NasInterface.md)
* [Classic Distributed NAS with NNI experiment](./NasInterface.md#classic-distributed-search)
* [NNI NAS Programming Interface](./NasInterface.md)
## Supported One-shot NAS Algorithms
......@@ -37,7 +37,7 @@ Note, these algorithms run **standalone without nnictl**, and supports PyTorch o
#### Usage
ENAS in NNI is still under development and we only support search phase for macro/micro search space on CIFAR10. Training from scratch and search space on PTB has not been finished yet.
ENAS in NNI is still under development, and we only support the search phase for the macro/micro search space on CIFAR10. Training from scratch and the search space on PTB have not been finished yet. [Detailed Description](ENAS.md)
```bash
# In case NNI code is not cloned. If the code is cloned already, ignore this line and enter code folder.
......@@ -58,7 +58,7 @@ python3 search.py -h
### DARTS
The main contribution of [DARTS: Differentiable Architecture Search][3] on algorithm is to introduce a novel algorithm for differentiable network architecture search on bilevel optimization.
The main algorithmic contribution of [DARTS: Differentiable Architecture Search][3] is a novel method for differentiable network architecture search based on bilevel optimization. [Detailed Description](DARTS.md)
#### Usage
......
......@@ -46,6 +46,33 @@ For each experiment, user only needs to define a search space and update a few l
For more details about how to run an experiment, please refer to [Get Started](Tutorial/QuickStart.md).
## Core Features
NNI provides a key capability: running multiple instances in parallel to find the best combination of parameters. This feature can be used in various domains, such as finding the best hyperparameters for a deep learning model, or finding the best configuration for a database or another complex system with real data.
NNI also aims to provide algorithm toolkits for machine learning and deep learning, especially neural architecture search (NAS) algorithms, model compression algorithms, and feature engineering algorithms.
### Hyperparameter Tuning
This is a core and basic feature of NNI. We provide many popular [automatic tuning algorithms](Tuner/BuiltinTuner.md) (i.e., tuners) and [early stopping algorithms](Assessor/BuiltinAssessor.md) (i.e., assessors). You can follow [Quick Start](Tutorial/QuickStart.md) to tune your model (or system): basically, there are the three steps above, and then you start an NNI experiment.
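For illustration, here is a minimal sketch of the trial side, using the trial APIs documented elsewhere in these docs (`nni.get_next_parameter`, `nni.report_intermediate_result`, `nni.report_final_result`); the hyperparameter name and the "training" loop are placeholders, not a real model.
```python
import nni

def main():
    params = nni.get_next_parameter()        # hyperparameters chosen by the tuner
    lr = params.get("learning_rate", 0.001)  # a default keeps standalone runs working
    accuracy = 0.0
    for epoch in range(10):
        # Placeholder for one epoch of real training with learning rate `lr`.
        accuracy = min(1.0, 0.5 + 0.04 * epoch)
        nni.report_intermediate_result(accuracy)  # per-epoch metric, used by the assessor
    nni.report_final_result(accuracy)             # final metric, used by the tuner

if __name__ == '__main__':
    main()
```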
### General NAS Framework
This NAS framework lets users easily specify candidate neural architectures; for example, they can specify multiple candidate operations (e.g., separable conv, dilated conv) for a single layer and specify possible skip connections. NNI will find the best candidate automatically. On the other hand, the NAS framework provides a simple interface for another type of user (e.g., NAS algorithm researchers) to implement new NAS algorithms. Detailed description and usage can be found [here](NAS/Overview.md).
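To give a flavor of specifying candidate operations for a single layer, here is a hedged sketch using `LayerChoice` from the NNI NAS SDK; the `nni.nas.pytorch.mutables` module path matches the code linked from the NAS docs, but the exact constructor signature and the candidate modules here are assumptions to verify against [the NAS overview](NAS/Overview.md).
```python
import torch.nn as nn
from nni.nas.pytorch import mutables

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # One layer with several candidate operations; the NAS algorithm
        # decides which candidate to use.
        self.conv = mutables.LayerChoice([
            nn.Conv2d(3, 16, 3, padding=1),              # plain convolution
            nn.Conv2d(3, 16, 3, padding=2, dilation=2),  # dilated convolution
        ], key="conv_choice")

    def forward(self, x):
        return self.conv(x)
```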
NNI supports many one-shot NAS algorithms, such as ENAS and DARTS, through the NNI trial SDK. To use these algorithms you do not have to start an NNI experiment. Instead, import an algorithm in your trial code and simply run it. If you want to tune the hyperparameters in the algorithms or run multiple instances, you can choose a tuner and start an NNI experiment.
Other than one-shot NAS, NAS can also run in a classic mode where each candidate architecture runs as an independent trial job. In this mode, similar to hyperparameter tuning, users have to start an NNI experiment and choose a tuner for NAS.
### Model Compression
Model Compression on NNI includes pruning algorithms and quantization algorithms. These algorithms are provided through the NNI trial SDK. Users can use them directly in their trial code and run it without starting an NNI experiment. Detailed description and usage can be found [here](Compressor/Overview.md).
There are different types of hyperparameters in model compression. One type is the hyperparameters in the input configuration to a compression algorithm, e.g., sparsity or quantization bits. The other type is the hyperparameters within the compression algorithms themselves. Here, NNI's hyperparameter tuning can help a lot in finding the best compressed model automatically. A simple example can be found [here](Compressor/AutoCompression.md).
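As a rough illustration of the first kind of hyperparameter, here is a hedged sketch of invoking a pruner from the trial SDK. The `nni.compression.torch` import path and the `LevelPruner` name are assumptions based on the Compressor overview linked above (this excerpt names the AGP, Slim, and FPGM pruners; a basic level pruner is used here for brevity), so verify them against your NNI version.
```python
import torch.nn as nn
from nni.compression.torch import LevelPruner  # assumed import path; see Compressor docs

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
# "sparsity" is an input-configuration hyperparameter of the kind described
# above; it could itself be searched by an NNI tuner.
config_list = [{"sparsity": 0.5, "op_types": ["default"]}]
pruner = LevelPruner(model, config_list)
model = pruner.compress()  # returns the model wrapped with pruning logic
```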
### Automatic Feature Engineering
Automatic feature engineering is for users to find the best features for downstream tasks. Detailed description and usage can be found [here](FeatureEngineering/Overview.md). It is supported through the NNI trial SDK, which means you do not have to create an NNI experiment; simply import a built-in auto-feature-engineering algorithm in your trial code and run it directly.
The auto-feature-engineering algorithms usually have a bunch of hyperparameters themselves. If you want to automatically tune those hyperparameters, you can leverage NNI's hyperparameter tuning: choose a tuning algorithm (i.e., a tuner) and start an NNI experiment for it.
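Combining the two points above, a trial could let an NNI tuner choose the selector's own hyperparameters. A hedged sketch follows; the `FeatureGradientSelector` import and its `n_features` argument follow the GradientFeatureSelector doc linked above, while the `"n_features"` search-space key and the elided training steps are illustrative.
```python
import nni
from nni.feature_engineering.gradient_selector import FeatureGradientSelector

params = nni.get_next_parameter()
# Let the tuner pick how many features to keep; fall back to 20 standalone.
selector = FeatureGradientSelector(n_features=params.get("n_features", 20))
# Fit the selector, train a downstream model on the selected features,
# then report its score so the tuner can refine n_features:
#   selector.fit(X_train, y_train)
#   score = train_and_score(selector.get_selected_features())  # hypothetical helper
#   nni.report_final_result(score)
```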
## Learn More
* [Get started](Tutorial/QuickStart.md)
* [How to adapt your trial code on NNI?](TrialExample/Trials.md)
......@@ -57,3 +84,6 @@ More details about how to run an experiment, please refer to [Get Started](Tutor
* [How to run an experiment on multiple machines?](TrainingService/RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](TrainingService/PaiMode.md)
* [Examples](TrialExample/MnistExamples.md)
* [Neural Architecture Search on NNI](NAS/Overview.md)
* [Automatic model compression on NNI](Compressor/Overview.md)
* [Automatic feature engineering on NNI](FeatureEngineering/Overview.md)
\ No newline at end of file
# ChangeLog
## Release 1.2 - 12/02/2019
### Major Features
* [Feature Engineering](https://github.com/microsoft/nni/blob/v1.2/docs/en_US/FeatureEngineering/Overview.md)
- New feature engineering interface
- Feature selection algorithms: [Gradient feature selector](https://github.com/microsoft/nni/blob/v1.2/docs/en_US/FeatureEngineering/GradientFeatureSelector.md) & [GBDT selector](https://github.com/microsoft/nni/blob/v1.2/docs/en_US/FeatureEngineering/GBDTSelector.md)
- [Examples for feature engineering](https://github.com/microsoft/nni/tree/v1.2/examples/feature_engineering)
* Neural Architecture Search (NAS) on NNI
- [New NAS interface](https://github.com/microsoft/nni/blob/v1.2/docs/en_US/NAS/NasInterface.md)
- NAS algorithms: [ENAS](https://github.com/microsoft/nni/blob/v1.2/docs/en_US/NAS/Overview.md#enas), [DARTS](https://github.com/microsoft/nni/blob/v1.2/docs/en_US/NAS/Overview.md#darts), [P-DARTS](https://github.com/microsoft/nni/blob/v1.2/docs/en_US/NAS/Overview.md#p-darts) (in PyTorch)
- NAS in classic mode (each trial runs independently)
* Model compression
- [New model pruning algorithms](https://github.com/microsoft/nni/blob/v1.2/docs/en_US/Compressor/Overview.md): lottery ticket pruning approach, L1Filter pruner, Slim pruner, FPGM pruner
- [New model quantization algorithms](https://github.com/microsoft/nni/blob/v1.2/docs/en_US/Compressor/Overview.md): QAT quantizer, DoReFa quantizer
- Support the API for exporting compressed models.
* Training Service
- Support OpenPAI token authentication
* Examples:
- [An example to automatically tune rocksdb configuration with NNI](https://github.com/microsoft/nni/tree/v1.2/examples/trials/systems/rocksdb-fillrandom).
- [A new MNIST trial example supports tensorflow 2.0](https://github.com/microsoft/nni/tree/v1.2/examples/trials/mnist-tfv2).
* Engineering Improvements
- For the remote training service, trial jobs that require no GPU are now scheduled with a round-robin policy instead of randomly.
- Pylint rules added to check pull requests; new pull requests need to comply with these [pylint rules](https://github.com/microsoft/nni/blob/v1.2/pylintrc).
* Web Portal & User Experience
- Support users adding customized trials.
- Users can zoom in/out on detail graphs, except the hyper-parameter graph.
* Documentation
- Improved NNI API documentation with more API docstrings.
### Bug fix
- Fix the table sort issue when failed trials have no metrics. -Issue #1773
- Maintain selected status (Maximal/Minimal) when the page is switched. -PR #1710
- Make the default metric y-axis of the hyper-parameters graph more accurate. -PR #1736
- Fix GPU script permission issue. -Issue #1665
## Release 1.1 - 10/23/2019
### Major Features
* New tuner: [PPO Tuner](https://github.com/microsoft/nni/blob/v1.1/docs/en_US/Tuner/PPOTuner.md)
* [View stopped experiments](https://github.com/microsoft/nni/blob/v1.1/docs/en_US/Tutorial/Nnictl.md#view)
* Tuners can now use dedicated GPU resource (see `gpuIndices` in [tutorial](https://github.com/microsoft/nni/blob/v1.1/docs/en_US/Tutorial/ExperimentConfig.md) for details)
* Web UI improvements
- Trials detail page can now list hyperparameters of each trial, as well as their start and end time (via "add column")
- Viewing huge experiment is now less laggy
* More examples
- [EfficientNet PyTorch example](https://github.com/ultmaster/EfficientNet-PyTorch)
- [Cifar10 NAS example](https://github.com/microsoft/nni/blob/v1.1/examples/trials/nas_cifar10/README.md)
* [Model compression toolkit - Alpha release](https://github.com/microsoft/nni/blob/v1.1/docs/en_US/Compressor/Overview.md): We are glad to announce the alpha release of the model compression toolkit on top of NNI. It is still in the experimental phase and might evolve based on usage feedback. We'd like to invite you to use it, give feedback, and even contribute.
### Fixed Bugs
* Multiphase job hangs when search space exhausted (issue #1204)
* `nnictl` fails when log not available (issue #1548)
## Release 1.0 - 9/2/2019
### Major Features
* Tuners and Assessors
- Support Auto-Feature generator & selection -Issue #877 -PR #1387
+ Provide auto feature interface
+ Tuner based on beam search
+ [Add Pakdd example](./examples/trials/auto-feature-engineering/README.md)
- Add a parallel algorithm to improve the performance of TPE with large concurrency. -PR #1052
- Support multiphase for hyperband -PR #1257
* Training Service
- Support private docker registry -PR #755
* Engineering Improvements
- Python wrapper for the REST API; supports retrieving metric values programmatically -PR #1318
- New Python APIs: get_experiment_id(), get_trial_id() -PR #1353 -Issue #1331 & -Issue #1368
- Optimized NAS search space -PR #1393
+ Unify NAS search space with _type -- "mutable_type"
+ Update random search tuner
- Set gpuNum as optional -Issue #1365
- Remove outputDir and dataDir configuration in PAI mode -Issue #1342
- When creating a trial in Kubeflow mode, codeDir will no longer be copied to logDir -Issue #1224
* Web Portal & User Experience
- Show the best metric curve during search progress in WebUI -Issue #1218
- Show the current number of the parameters list in a multiphase experiment -Issue #1210 -PR #1348
- Add "Intermediate count" option in AddColumn. -Issue #1210
- Support search parameters value in WebUI -Issue #1208
- Enable automatic scaling of axes for metric value in default metric graph -Issue #1360
- Add a detailed documentation link to the nnictl command in the command prompt -Issue #1260
- UX improvement for showing Error log -Issue #1173
* Documentation
- Update the docs structure -Issue #1231
- [Multi phase document improvement](./docs/en_US/AdvancedFeature/MultiPhase.md) -Issue #1233 -PR #1242
+ Add configuration example
- [WebUI description improvement](./docs/en_US/Tutorial/WebUI.md) -PR #1419
### Bug fix
* (Bug fix) Fix the broken links in the 0.9 release -Issue #1236
* (Bug fix) Script for auto-complete
* (Bug fix) Fix the pipeline issue that it only checks the exit code of the last command in a script. -PR #1417
* (Bug fix) quniform for tuners -Issue #1377
* (Bug fix) 'quniform' has different meanings between GridSearch and other tuners. -Issue #1335
* (Bug fix) "nnictl experiment list" gives the status of a "RUNNING" experiment as "INITIALIZED" -PR #1388
* (Bug fix) SMAC cannot be installed if nni is installed in dev mode -Issue #1376
* (Bug fix) The filter button of the intermediate result cannot be clicked -Issue #1263
* (Bug fix) API "/api/v1/nni/trial-jobs/xxx" doesn't show all of a trial's parameters in a multiphase experiment -Issue #1258
* (Bug fix) Succeeded trial doesn't have a final result but WebUI shows ×××(FINAL) -Issue #1207
* (Bug fix) IT for nnictl stop -Issue #1298
* (Bug fix) Fix security warning
* (Bug fix) Hyper-parameter page broken -Issue #1332
* (Bug fix) Run flake8 tests to find Python syntax errors and undefined names -PR #1217
## Release 0.9 - 7/1/2019
......
......@@ -44,7 +44,7 @@ maxExecDuration: 10h
maxTrialNum: 100
#choice: local, remote, pai, kubeflow, frameworkcontroller
trainingServicePlatform: frameworkcontroller
searchSpacePath: ~/nni/examples/trials/mnist/search_space.json
searchSpacePath: ~/nni/examples/trials/mnist-tfv1/search_space.json
#choice: true, false
useAnnotation: false
tuner:
......@@ -59,7 +59,7 @@ assessor:
optimize_mode: maximize
gpuNum: 0
trial:
codeDir: ~/nni/examples/trials/mnist
codeDir: ~/nni/examples/trials/mnist-tfv1
taskRoles:
- name: worker
taskNum: 1
......
......@@ -84,7 +84,7 @@ kubeflowConfig:
## Run an experiment
Use `examples/trials/mnist` as an example. This is a tensorflow job, and use tf-operator of Kubeflow. The NNI config YAML file's content is like:
Use `examples/trials/mnist-tfv1` as an example. This is a TensorFlow job that uses the tf-operator of Kubeflow. The content of the NNI config YAML file is like:
```yaml
authorName: default
......
**Tutorial: Create and Run an Experiment on local with NNI API**
===
In this tutorial, we will use the example in [~/examples/trials/mnist] to explain how to create and run an experiment on local with NNI API.
In this tutorial, we will use the example in [~/examples/trials/mnist-tfv1] to explain how to create and run an experiment locally with the NNI API.
>Before starting
......
......@@ -127,6 +127,23 @@ In the YAML configure file, you need to set *useAnnotation* to true to enable NN
useAnnotation: true
```
## Standalone mode for debug
NNI supports a standalone mode in which trial code runs without starting an NNI experiment. This makes it more convenient to find bugs in trial code. NNI annotation natively supports standalone mode, since the added NNI-related lines are comments. The NNI trial APIs behave differently in standalone mode: some APIs return dummy values, and some APIs do not really report values. Please refer to the following list for these APIs.
```python
# NOTE: please assign default values to the hyperparameters in your trial code
nni.get_next_parameter # returns {}
nni.report_final_result # prints a log to stdout, but does not really report the result
nni.report_intermediate_result # prints a log to stdout, but does not really report the result
nni.get_experiment_id # returns "STANDALONE"
nni.get_trial_id # returns "STANDALONE"
nni.get_sequence_id # returns 0
```
You can try standalone mode with the [mnist example](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-tfv1). Simply run `python3 mnist.py` under the code directory. The trial code successfully runs with default hyperparameter values.
For more debugging tips, please refer to [How to Debug](../Tutorial/HowToDebug.md)
## Where are my trials?
### Local Mode
......
......@@ -51,4 +51,4 @@ Our documentation is built with [sphinx](http://sphinx-doc.org/), supporting [Ma
* For links, please consider using __relative paths__ first. However, if the documentation is written in Markdown format, and:
* It's an image link which needs to be formatted with embedded html grammar, please use global URL like `https://user-images.githubusercontent.com/44491713/51381727-e3d0f780-1b4f-11e9-96ab-d26b9198ba65.png`, which can be automatically generated by dragging picture onto [Github Issue](https://github.com/Microsoft/nni/issues/new) Box.
* It cannot be re-formatted by sphinx, such as source code, please use its global URL. For source code that links to our github repo, please use URLs rooted at `https://github.com/Microsoft/nni/tree/master/` ([mnist.py](https://github.com/Microsoft/nni/blob/master/examples/trials/mnist/mnist.py) for example).
* It cannot be re-formatted by sphinx, such as source code, please use its global URL. For source code that links to our github repo, please use URLs rooted at `https://github.com/Microsoft/nni/tree/master/` ([mnist.py](https://github.com/Microsoft/nni/blob/master/examples/trials/mnist-tfv1/mnist.py) for example).
......@@ -84,5 +84,6 @@ A common example of this would be run the mnist example without installing tenso
![](../../img/trial_error.jpg)
As it shows, every trial has a log path, where you can find trial'log and stderr.
As it shows, every trial has a log path, where you can find trial's log and stderr.
In addition to experiment-level debugging, NNI also provides the capability to debug a single trial without starting the entire experiment. Refer to [standalone mode](../TrialExample/Trials.md#standalone-mode-for-debug) for more information about debugging single trial code.
\ No newline at end of file
......@@ -9,7 +9,7 @@ Currently we support local, remote and pai mode on Windows. Windows 10.1809 is w
When these things are done, use the **config_windows.yml** configuration to start an experiment for validation.
```bash
nnictl create --config nni\examples\trials\mnist\config_windows.yml
nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml
```
For other examples you need to change trial command `python3` into `python` in each example YAML.
......
......@@ -55,19 +55,19 @@ nnictl support commands:
> create a new experiment with the default port: 8080
```bash
nnictl create --config nni/examples/trials/mnist/config.yml
nnictl create --config nni/examples/trials/mnist-tfv1/config.yml
```
> create a new experiment with specified port 8088
```bash
nnictl create --config nni/examples/trials/mnist/config.yml --port 8088
nnictl create --config nni/examples/trials/mnist-tfv1/config.yml --port 8088
```
> create a new experiment with specified port 8088 and debug mode
```bash
nnictl create --config nni/examples/trials/mnist/config.yml --port 8088 --debug
nnictl create --config nni/examples/trials/mnist-tfv1/config.yml --port 8088 --debug
```
Note:
......@@ -210,10 +210,10 @@ Debug mode will disable version check function in Trialkeeper.
* Example
`update experiment's new search space with file dir 'examples/trials/mnist/search_space.json'`
`update the experiment's search space with the file 'examples/trials/mnist-tfv1/search_space.json'`
```bash
nnictl update searchspace [experiment_id] --filename examples/trials/mnist/search_space.json
nnictl update searchspace [experiment_id] --filename examples/trials/mnist-tfv1/search_space.json
```
* __nnictl update concurrency__
......
......@@ -47,7 +47,7 @@ if __name__ == '__main__':
run_trial(params)
```
Note: If you want to see the full implementation, please refer to [examples/trials/mnist/mnist_before.py](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist/mnist_before.py)
Note: If you want to see the full implementation, please refer to [examples/trials/mnist-tfv1/mnist_before.py](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/mnist_before.py)
The above code can only try one set of parameters at a time; if we want to tune the learning rate, we need to manually modify the hyperparameter and start the trial again and again.
......@@ -84,7 +84,7 @@ If you want to use NNI to automatically train your model and find the optimal hy
+ }
```
*Implemented code directory: [search_space.json](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist/search_space.json)*
*Implemented code directory: [search_space.json](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/search_space.json)*
**Step 2**: Modify your `Trial` file to get the hyperparameter set from NNI and report the final result to NNI.
......@@ -109,7 +109,7 @@ If you want to use NNI to automatically train your model and find the optimal hy
run_trial(params)
```
*Implemented code directory: [mnist.py](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist/mnist.py)*
*Implemented code directory: [mnist.py](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/mnist.py)*
**Step 3**: Define a `config` file in YAML which declares the `path` to the search space and trial files, and also gives `other information` such as the tuning algorithm, max trial number, and max duration arguments.
......@@ -134,15 +134,15 @@ trial:
Note, **for Windows, you need to change trial command `python3` to `python`**
*Implemented code directory: [config.yml](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist/config.yml)*
*Implemented code directory: [config.yml](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/config.yml)*
All the codes above are already prepared and stored in [examples/trials/mnist/](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist).
All the code above is already prepared and stored in [examples/trials/mnist-tfv1/](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1).
#### Linux and MacOS
Run the **config.yml** file from your command line to start an MNIST experiment.
```bash
nnictl create --config nni/examples/trials/mnist/config.yml
nnictl create --config nni/examples/trials/mnist-tfv1/config.yml
```
#### Windows
Run the **config_windows.yml** file from your command line to start an MNIST experiment.
......@@ -150,7 +150,7 @@ Run the **config_windows.yml** file from your command line to start MNIST experi
**Note**: if you're using NNI on Windows, you need to change `python3` to `python` in the config.yml file, or use the config_windows.yml file to start the experiment.
```bash
nnictl create --config nni\examples\trials\mnist\config_windows.yml
nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml
```
Note, **nnictl** is a command line tool which can be used to control experiments, such as starting/stopping/resuming an experiment and starting/stopping NNIBoard. Click [here](Nnictl.md) for more usage of `nnictl`.
......
......@@ -52,7 +52,7 @@ Now, you can try to start an experiment to check if your environment is ready.
For example, run the command
```
nnictl create --config ~/nni/examples/trials/mnist/config.yml
nnictl create --config ~/nni/examples/trials/mnist-tfv1/config.yml
```
And open the WebUI to check if everything is OK.
......