Unverified commit eda48057, authored by Yuge Zhang, committed by GitHub

Change all master links to v1.9 (#3005)

parent a71cbe85
@@ -10,7 +10,7 @@ Implementation on NNI is based on [official repo](https://github.com/megvii-mode
Here is a use case: the search space from the paper, and the way to use a FLOPs limit to perform uniform sampling.
[Example code](https://github.com/microsoft/nni/tree/master/examples/nas/spos)
[Example code](https://github.com/microsoft/nni/tree/v1.9/examples/nas/spos)
### Requirements
@@ -2,7 +2,7 @@
## DartsCell
DartsCell is extracted from the [CNN model](./DARTS.md) designed [here](https://github.com/microsoft/nni/tree/master/examples/nas/darts). A DartsCell is a directed acyclic graph containing an ordered sequence of N nodes, where each node stands for a latent representation (e.g. a feature map in a convolutional network). Directed edges from Node 1 to Node 2 are associated with some operations that transform Node 1, and the result is stored on Node 2. The [Candidate operators](#predefined-operations-darts) between nodes are predefined and unchangeable. One edge represents an operation chosen from the predefined ones to be applied to the starting node of the edge. One cell contains two input nodes, a single output node, and `n_node` other nodes. The input nodes are defined as the cell outputs of the previous two layers. The output of the cell is obtained by applying a reduction operation (e.g. concatenation) to all the intermediate nodes. To make the search space continuous, the categorical choice of a particular operation is relaxed to a softmax over all possible operations. By adjusting the softmax weights on every node, the operation with the highest probability is chosen to be part of the final structure. A CNN model can be formed by stacking several cells together, which builds a search space. Note that in the DARTS paper, all cells in the model share the same structure.
DartsCell is extracted from the [CNN model](./DARTS.md) designed [here](https://github.com/microsoft/nni/tree/v1.9/examples/nas/darts). A DartsCell is a directed acyclic graph containing an ordered sequence of N nodes, where each node stands for a latent representation (e.g. a feature map in a convolutional network). Directed edges from Node 1 to Node 2 are associated with some operations that transform Node 1, and the result is stored on Node 2. The [Candidate operators](#predefined-operations-darts) between nodes are predefined and unchangeable. One edge represents an operation chosen from the predefined ones to be applied to the starting node of the edge. One cell contains two input nodes, a single output node, and `n_node` other nodes. The input nodes are defined as the cell outputs of the previous two layers. The output of the cell is obtained by applying a reduction operation (e.g. concatenation) to all the intermediate nodes. To make the search space continuous, the categorical choice of a particular operation is relaxed to a softmax over all possible operations. By adjusting the softmax weights on every node, the operation with the highest probability is chosen to be part of the final structure. A CNN model can be formed by stacking several cells together, which builds a search space. Note that in the DARTS paper, all cells in the model share the same structure.
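The softmax relaxation described above can be made concrete with a short sketch. This is an illustration only, not NNI's `DartsCell` implementation; the `MixedOp` name and the plain PyTorch structure are assumptions for the example.

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """One edge of the cell: a softmax-weighted sum over all candidate operations."""
    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)                      # predefined, unchangeable candidates
        self.alpha = nn.Parameter(torch.zeros(len(candidate_ops)))   # architecture weights to learn

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=-1)                  # relaxed categorical choice
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# After search, only the operation with the highest weight is kept on this edge:
# chosen_op = mixed_op.ops[int(torch.argmax(mixed_op.alpha))]
```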
One structure in the DARTS search space is shown below. Note that NNI merges the last of the four intermediate nodes with the output node.
@@ -17,7 +17,7 @@ The predefined operators are shown [here](#predefined-operations-darts).
### Example code
[example code](https://github.com/microsoft/nni/tree/master/examples/nas/search_space_zoo/darts_example.py)
[example code](https://github.com/microsoft/nni/tree/v1.9/examples/nas/search_space_zoo/darts_example.py)
```bash
git clone https://github.com/Microsoft/nni.git
@@ -61,7 +61,7 @@ All supported operators for Darts are listed below.
## ENASMicroLayer
This layer is extracted from the model designed [here](https://github.com/microsoft/nni/tree/master/examples/nas/enas). A model contains several blocks that share the same architecture. A block is made up of some normal layers and reduction layers; `ENASMicroLayer` is a unified implementation of the two types of layers. The only difference between the two is that reduction layers apply all operations with `stride=2`.
This layer is extracted from the model designed [here](https://github.com/microsoft/nni/tree/v1.9/examples/nas/enas). A model contains several blocks that share the same architecture. A block is made up of some normal layers and reduction layers; `ENASMicroLayer` is a unified implementation of the two types of layers. The only difference between the two is that reduction layers apply all operations with `stride=2`.
ENAS Micro employs a DAG with N nodes in one cell, where the nodes represent local computations and the edges represent the flow of information between the N nodes. One cell contains two input nodes and a single output node. The remaining nodes each choose two previous nodes as input, apply two operations from the [predefined ones](#predefined-operations-enas), and sum the results as the output of the node. For example, Node 4 chooses Node 1 and Node 3 as inputs, applies `MaxPool` and `AvgPool` to them respectively, and sums the results as the output of Node 4. Nodes that do not serve as input for any other node are viewed as outputs of the layer. If there are multiple output nodes, the model calculates the average of these nodes as the layer output.
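The per-node computation can be sketched roughly as follows. This is a simplified illustration with assumed names (`EnasMicroNode`, fixed input indices), not the actual `ENASMicroLayer` code.

```python
import torch.nn as nn

class EnasMicroNode(nn.Module):
    """One node in an ENAS micro cell: two chosen inputs, one operation each, summed."""
    def __init__(self, op_a, op_b, idx_a, idx_b):
        super().__init__()
        self.op_a, self.op_b = op_a, op_b      # e.g. MaxPool / AvgPool modules
        self.idx_a, self.idx_b = idx_a, idx_b  # indices of the chosen previous nodes

    def forward(self, prev_nodes):
        # prev_nodes: list holding the outputs of all earlier nodes in the cell
        return self.op_a(prev_nodes[self.idx_a]) + self.op_b(prev_nodes[self.idx_b])

# Node 4 from the example: inputs from Node 1 and Node 3, MaxPool and AvgPool respectively.
node4 = EnasMicroNode(nn.MaxPool2d(3, stride=1, padding=1),
                      nn.AvgPool2d(3, stride=1, padding=1), idx_a=1, idx_b=3)
```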
@@ -80,7 +80,7 @@ The Reduction Layer is made up of two Conv operations followed by BatchNorm, eac
### Example code
[example code](https://github.com/microsoft/nni/tree/master/examples/nas/search_space_zoo/enas_micro_example.py)
[example code](https://github.com/microsoft/nni/tree/v1.9/examples/nas/search_space_zoo/enas_micro_example.py)
```bash
git clone https://github.com/Microsoft/nni.git
@@ -136,7 +136,7 @@ To describe the whole search space, NNI provides a model, which is built by stac
### Example code
[example code](https://github.com/microsoft/nni/tree/master/examples/nas/search_space_zoo/enas_macro_example.py)
[example code](https://github.com/microsoft/nni/tree/v1.9/examples/nas/search_space_zoo/enas_macro_example.py)
```bash
git clone https://github.com/Microsoft/nni.git
@@ -187,7 +187,7 @@ The search space of NAS Bench 201 is shown below.
### Example code
[example code](https://github.com/microsoft/nni/tree/master/examples/nas/search_space_zoo/nas_bench_201.py)
[example code](https://github.com/microsoft/nni/tree/v1.9/examples/nas/search_space_zoo/nas_bench_201.py)
```bash
# for structure searching
@@ -45,7 +45,7 @@ The following link might be helpful for finding and downloading the correspondin
### Search Space
[Example code](https://github.com/microsoft/nni/tree/master/examples/nas/textnas)
[Example code](https://github.com/microsoft/nni/tree/v1.9/examples/nas/textnas)
```bash
# In case NNI code is not cloned. If the code is cloned already, ignore this line and enter code folder.
@@ -158,7 +158,7 @@ and [examples](https://github.com/microsoft/nni/blob/v1.7/docs/en_US/NAS/Benchma
* Improve PBT on failure handling and support experiment resume for PBT
#### NAS Updates
* NAS support for TensorFlow 2.0 (preview) [TF2.0 NAS examples](https://github.com/microsoft/nni/tree/master/examples/nas/naive-tf)
* NAS support for TensorFlow 2.0 (preview) [TF2.0 NAS examples](https://github.com/microsoft/nni/tree/v1.9/examples/nas/naive-tf)
* Use OrderedDict for LayerChoice
* Prettify the format of export
* Replace layer choice with selected module after applied fixed architecture
@@ -168,7 +168,7 @@ and [examples](https://github.com/microsoft/nni/blob/v1.7/docs/en_US/NAS/Benchma
#### Training Service Updates
* update pai yaml merge logic
* support windows as remote machine in remote mode [Remote Mode](https://github.com/microsoft/nni/blob/master/docs/en_US/TrainingService/RemoteMachineMode.md#windows)
* support windows as remote machine in remote mode [Remote Mode](https://github.com/microsoft/nni/blob/v1.9/docs/en_US/TrainingService/RemoteMachineMode.md#windows)
### Bug Fix
* fix dev install
@@ -183,26 +183,26 @@ and [examples](https://github.com/microsoft/nni/blob/v1.7/docs/en_US/NAS/Benchma
#### Hyper-Parameter Optimizing
* New tuner: [Population Based Training (PBT)](https://github.com/microsoft/nni/blob/master/docs/en_US/Tuner/PBTTuner.md)
* New tuner: [Population Based Training (PBT)](https://github.com/microsoft/nni/blob/v1.9/docs/en_US/Tuner/PBTTuner.md)
* Trials can now report infinity and NaN as results
#### Neural Architecture Search
* New NAS algorithm: [TextNAS](https://github.com/microsoft/nni/blob/master/docs/en_US/NAS/TextNAS.md)
* ENAS and DARTS now support [visualization](https://github.com/microsoft/nni/blob/master/docs/en_US/NAS/Visualization.md) through web UI.
* New NAS algorithm: [TextNAS](https://github.com/microsoft/nni/blob/v1.9/docs/en_US/NAS/TextNAS.md)
* ENAS and DARTS now support [visualization](https://github.com/microsoft/nni/blob/v1.9/docs/en_US/NAS/Visualization.md) through web UI.
#### Model Compression
* New Pruner: [GradientRankFilterPruner](https://github.com/microsoft/nni/blob/master/docs/en_US/Compressor/Pruner.md#gradientrankfilterpruner)
* New Pruner: [GradientRankFilterPruner](https://github.com/microsoft/nni/blob/v1.9/docs/en_US/Compressor/Pruner.md#gradientrankfilterpruner)
* Compressors will validate configuration by default
* Refactor: Adding optimizer as an input argument of pruner, for easy support of DataParallel and more efficient iterative pruning. This is a breaking change for the usage of iterative pruning algorithms.
* Model compression examples are refactored and improved
* Added documentation for [implementing compressing algorithm](https://github.com/microsoft/nni/blob/master/docs/en_US/Compressor/Framework.md)
* Added documentation for [implementing compressing algorithm](https://github.com/microsoft/nni/blob/v1.9/docs/en_US/Compressor/Framework.md)
#### Training Service
* Kubeflow now supports pytorchjob crd v1 (thanks external contributor @jiapinai)
* Experimental [DLTS](https://github.com/microsoft/nni/blob/master/docs/en_US/TrainingService/DLTSMode.md) support
* Experimental [DLTS](https://github.com/microsoft/nni/blob/v1.9/docs/en_US/TrainingService/DLTSMode.md) support
#### Overall Documentation Improvement
@@ -355,7 +355,7 @@ and [examples](https://github.com/microsoft/nni/blob/v1.7/docs/en_US/NAS/Benchma
- Support Auto-Feature generator & selection -Issue#877 -PR #1387
+ Provide auto feature interface
+ Tuner based on beam search
+ [Add Pakdd example](https://github.com/microsoft/nni/tree/master/examples/trials/auto-feature-engineering)
+ [Add Pakdd example](https://github.com/microsoft/nni/tree/v1.9/examples/trials/auto-feature-engineering)
- Add a parallel algorithm to improve the performance of TPE with large concurrency. -PR #1052
- Support multiphase for hyperband -PR #1257
@@ -602,8 +602,8 @@ and [examples](https://github.com/microsoft/nni/blob/v1.7/docs/en_US/NAS/Benchma
### New examples
* [FashionMnist](https://github.com/microsoft/nni/tree/master/examples/trials/network_morphism), work together with network morphism tuner
* [Distributed MNIST example](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-distributed-pytorch) written in PyTorch
* [FashionMnist](https://github.com/microsoft/nni/tree/v1.9/examples/trials/network_morphism), work together with network morphism tuner
* [Distributed MNIST example](https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-distributed-pytorch) written in PyTorch
## Release 0.4 - 12/6/2018
@@ -611,7 +611,7 @@ and [examples](https://github.com/microsoft/nni/blob/v1.7/docs/en_US/NAS/Benchma
* [Kubeflow Training service](TrainingService/KubeflowMode.md)
* Support tf-operator
* [Distributed trial example](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-distributed/dist_mnist.py) on Kubeflow
* [Distributed trial example](https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-distributed/dist_mnist.py) on Kubeflow
* [Grid search tuner](Tuner/GridsearchTuner.md)
* [Hyperband tuner](Tuner/HyperbandAdvisor.md)
* Support launching NNI experiments on macOS
@@ -680,8 +680,8 @@ and [examples](https://github.com/microsoft/nni/blob/v1.7/docs/en_US/NAS/Benchma
docker pull msranni/nni:latest
```
* New trial example: [NNI Sklearn Example](https://github.com/microsoft/nni/tree/master/examples/trials/sklearn)
* New competition example: [Kaggle Competition TGS Salt Example](https://github.com/microsoft/nni/tree/master/examples/trials/kaggle-tgs-salt)
* New trial example: [NNI Sklearn Example](https://github.com/microsoft/nni/tree/v1.9/examples/trials/sklearn)
* New competition example: [Kaggle Competition TGS Salt Example](https://github.com/microsoft/nni/tree/v1.9/examples/trials/kaggle-tgs-salt)
### Others
@@ -196,7 +196,7 @@ Trial configuration in kubeflow mode have the following configuration keys:
* gpuNum
* image
* Required key. In kubeflow mode, your trial program will be scheduled by Kubernetes to run in a [Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/). This key is used to specify the Docker image used to create the pod where your trial program will run.
* We have already built a Docker image, [msranni/nni](https://hub.docker.com/r/msranni/nni/), on [Docker Hub](https://hub.docker.com/). It contains the NNI Python packages, the Node modules and JavaScript artifact files required to start an experiment, and all of NNI's dependencies. The Dockerfile used to build this image can be found [here](https://github.com/Microsoft/nni/tree/master/deployment/docker/Dockerfile). You can either use this image directly in your config file or build your own image based on it.
* We have already built a Docker image, [msranni/nni](https://hub.docker.com/r/msranni/nni/), on [Docker Hub](https://hub.docker.com/). It contains the NNI Python packages, the Node modules and JavaScript artifact files required to start an experiment, and all of NNI's dependencies. The Dockerfile used to build this image can be found [here](https://github.com/Microsoft/nni/tree/v1.9/deployment/docker/Dockerfile). You can either use this image directly in your config file or build your own image based on it.
* privateRegistryAuthPath
* Optional field. Specifies the path of a `config.json` file that holds an authorization token for a Docker registry, used to pull images from a private registry ([reference](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/)).
* apiVersion
@@ -12,7 +12,7 @@ If the computing resource customers try to use is not listed above, NNI provides
A training service needs to be chosen and configured properly in the experiment configuration YAML file. Users can refer to the document of each training service for how to write the configuration. The [reference](../Tutorial/ExperimentConfig) also provides more details on the specification of the experiment configuration file.
Next, users should prepare the code directory, which is specified as `codeDir` in the config file. Please note that in non-local mode, the code directory will be uploaded to the remote machine or cluster before the experiment. Therefore, we limit the number of files to 2000 and the total size to 300 MB. If the code directory contains too many files, users can choose which files and subfolders should be excluded by adding a `.nniignore` file that works like a `.gitignore` file. For more details on how to write this file, see [this example](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/.nniignore) and the [git documentation](https://git-scm.com/docs/gitignore#_pattern_format).
Next, users should prepare the code directory, which is specified as `codeDir` in the config file. Please note that in non-local mode, the code directory will be uploaded to the remote machine or cluster before the experiment. Therefore, we limit the number of files to 2000 and the total size to 300 MB. If the code directory contains too many files, users can choose which files and subfolders should be excluded by adding a `.nniignore` file that works like a `.gitignore` file. For more details on how to write this file, see [this example](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1/.nniignore) and the [git documentation](https://git-scm.com/docs/gitignore#_pattern_format).
If users intend to use large files in their experiment (like large-scale datasets) and they are not using local mode, they can either: 1) download the data before each trial launches by putting the download step into the trial command; or 2) use shared storage that is accessible to the worker nodes. Usually, training platforms are equipped with shared storage, and NNI allows users to use it easily. Refer to the docs of each built-in training service for details.
@@ -104,7 +104,7 @@ Compared with [LocalMode](LocalMode.md) and [RemoteMachineMode](RemoteMachineMod
Optional key. In pai mode, your trial program will be scheduled by OpenPAI to run in [Docker container](https://www.docker.com/). This key is used to specify the Docker image used to create the container in which your trial will run.
We have already built a Docker image, [msranni/nni](https://hub.docker.com/r/msranni/nni/), on [Docker Hub](https://hub.docker.com/). It contains the NNI Python packages, the Node modules and JavaScript artifact files required to start an experiment, and all of NNI's dependencies. The Dockerfile used to build this image can be found [here](https://github.com/Microsoft/nni/tree/master/deployment/docker/Dockerfile). You can either use this image directly in your config file or build your own image based on it. If it is not set in the trial configuration, it should be set in the config file specified in the `paiConfigPath` field.
We have already built a Docker image, [msranni/nni](https://hub.docker.com/r/msranni/nni/), on [Docker Hub](https://hub.docker.com/). It contains the NNI Python packages, the Node modules and JavaScript artifact files required to start an experiment, and all of NNI's dependencies. The Dockerfile used to build this image can be found [here](https://github.com/Microsoft/nni/tree/v1.9/deployment/docker/Dockerfile). You can either use this image directly in your config file or build your own image based on it. If it is not set in the trial configuration, it should be set in the config file specified in the `paiConfigPath` field.
* virtualCluster
@@ -50,7 +50,7 @@ Compared with [LocalMode](LocalMode.md) and [RemoteMachineMode](RemoteMachineMod
* Required key. Should be a positive number based on your trial program's memory requirement.
* image
* Required key. In paiYarn mode, your trial program will be scheduled by OpenpaiYarn to run in [Docker container](https://www.docker.com/). This key is used to specify the Docker image used to create the container in which your trial will run.
* We have already built a Docker image, [msranni/nni](https://hub.docker.com/r/msranni/nni/), on [Docker Hub](https://hub.docker.com/). It contains the NNI Python packages, the Node modules and JavaScript artifact files required to start an experiment, and all of NNI's dependencies. The Dockerfile used to build this image can be found [here](https://github.com/Microsoft/nni/tree/master/deployment/docker/Dockerfile). You can either use this image directly in your config file or build your own image based on it.
* We have already built a Docker image, [msranni/nni](https://hub.docker.com/r/msranni/nni/), on [Docker Hub](https://hub.docker.com/). It contains the NNI Python packages, the Node modules and JavaScript artifact files required to start an experiment, and all of NNI's dependencies. The Dockerfile used to build this image can be found [here](https://github.com/Microsoft/nni/tree/v1.9/deployment/docker/Dockerfile). You can either use this image directly in your config file or build your own image based on it.
* virtualCluster
* Optional key. Set the virtualCluster of OpenpaiYarn. If omitted, the job will run on default virtual cluster.
* shmMB
@@ -73,12 +73,12 @@ We are ready for the experiment, let's now **run the config.yml file from your c
nnictl create --config nni/examples/trials/cifar10_pytorch/config.yml
```
[1]: https://github.com/Microsoft/nni/tree/master/examples/trials/cifar10_pytorch
[1]: https://github.com/Microsoft/nni/tree/v1.9/examples/trials/cifar10_pytorch
[2]: https://pytorch.org/
[3]: https://www.cs.toronto.edu/~kriz/cifar.html
[4]: https://github.com/Microsoft/nni/tree/master/examples/trials/cifar10_pytorch
[4]: https://github.com/Microsoft/nni/tree/v1.9/examples/trials/cifar10_pytorch
[5]: Trials.md
[6]: https://github.com/Microsoft/nni/blob/master/examples/trials/cifar10_pytorch/config.yml
[7]: https://github.com/Microsoft/nni/blob/master/examples/trials/cifar10_pytorch/config_pai.yml
[8]: https://github.com/Microsoft/nni/blob/master/examples/trials/cifar10_pytorch/search_space.json
[9]: https://github.com/Microsoft/nni/blob/master/examples/trials/cifar10_pytorch/main.py
[6]: https://github.com/Microsoft/nni/blob/v1.9/examples/trials/cifar10_pytorch/config.yml
[7]: https://github.com/Microsoft/nni/blob/v1.9/examples/trials/cifar10_pytorch/config_pai.yml
[8]: https://github.com/Microsoft/nni/blob/v1.9/examples/trials/cifar10_pytorch/search_space.json
[9]: https://github.com/Microsoft/nni/blob/v1.9/examples/trials/cifar10_pytorch/main.py
@@ -6,7 +6,7 @@ Use Grid search to find the best combination of alpha, beta and gamma for Effici
## Instructions
[Example code](https://github.com/microsoft/nni/tree/master/examples/trials/efficientnet)
[Example code](https://github.com/microsoft/nni/tree/v1.9/examples/trials/efficientnet)
1. Set your working directory here in the example code directory.
2. Run `git clone https://github.com/ultmaster/EfficientNet-PyTorch` to clone this modified version of [EfficientNet-PyTorch](https://github.com/lukemelas/EfficientNet-PyTorch). The modifications were done to adhere to the original [Tensorflow version](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet) as closely as possible (including EMA, label smoothing, etc.); also added is the part that gets parameters from the tuner and reports intermediate/final results. Clone it into `EfficientNet-PyTorch`; the files like `main.py`, `train_imagenet.sh` will appear inside, as specified in the configuration files.
@@ -39,7 +39,7 @@ Reference link:
[lightgbm](https://lightgbm.readthedocs.io/en/latest/Parameters-Tuning.html) and [autoxgboost](https://github.com/ja-thomas/autoxgboost/blob/master/poster_2018.pdf)
## 2. Task description
Now we come back to our example "auto-gbdt", which runs in LightGBM and NNI. The data includes [train data](https://github.com/Microsoft/nni/blob/master/examples/trials/auto-gbdt/data/regression.train) and [test data](https://github.com/Microsoft/nni/blob/master/examples/trials/auto-gbdt/data/regression.train).
Now we come back to our example "auto-gbdt", which runs in LightGBM and NNI. The data includes [train data](https://github.com/Microsoft/nni/blob/v1.9/examples/trials/auto-gbdt/data/regression.train) and [test data](https://github.com/Microsoft/nni/blob/v1.9/examples/trials/auto-gbdt/data/regression.train).
Given the features and labels in the train data, we train a GBDT regression model and use it for prediction.
## 3. How to run in nni
@@ -95,7 +95,7 @@ if __name__ == '__main__':
```
### 3.3 Prepare your search space.
If you would like to tune `num_leaves`, `learning_rate`, `bagging_fraction` and `bagging_freq`, you could write a [search_space.json](https://github.com/Microsoft/nni/blob/master/examples/trials/auto-gbdt/search_space.json) as follows:
If you would like to tune `num_leaves`, `learning_rate`, `bagging_fraction` and `bagging_freq`, you could write a [search_space.json](https://github.com/Microsoft/nni/blob/v1.9/examples/trials/auto-gbdt/search_space.json) as follows:
```json
{
@@ -8,9 +8,9 @@ The performance of RocksDB is highly contingent on its tuning. However, because
This example illustrates how to use NNI to search for the best configuration of RocksDB for the `fillrandom` benchmark supported by `db_bench`, the official benchmark tool provided by RocksDB itself. Therefore, before running this example, please make sure NNI is installed and [`db_bench`](https://github.com/facebook/rocksdb/wiki/Benchmarking-tools) is in your `PATH`. Please refer to [here](../Tutorial/QuickStart.md) for detailed information on installing and preparing the NNI environment, and [here](https://github.com/facebook/rocksdb/blob/master/INSTALL.md) for compiling RocksDB as well as `db_bench`.
We also provide a simple script, [`db_bench_installation.sh`](https://github.com/microsoft/nni/tree/master/examples/trials/systems/rocksdb-fillrandom/db_bench_installation.sh), that helps compile and install `db_bench` and its dependencies on Ubuntu. Installing RocksDB on other systems can follow the same procedure.
We also provide a simple script, [`db_bench_installation.sh`](https://github.com/microsoft/nni/tree/v1.9/examples/trials/systems/rocksdb-fillrandom/db_bench_installation.sh), that helps compile and install `db_bench` and its dependencies on Ubuntu. Installing RocksDB on other systems can follow the same procedure.
*code directory: [`example/trials/systems/rocksdb-fillrandom`](https://github.com/microsoft/nni/tree/master/examples/trials/systems/rocksdb-fillrandom)*
*code directory: [`example/trials/systems/rocksdb-fillrandom`](https://github.com/microsoft/nni/tree/v1.9/examples/trials/systems/rocksdb-fillrandom)*
## Experiment setup
@@ -39,7 +39,7 @@ In this example, the search space is specified by a `search_space.json` file as
}
```
*code directory: [`example/trials/systems/rocksdb-fillrandom/search_space.json`](https://github.com/microsoft/nni/tree/master/examples/trials/systems/rocksdb-fillrandom/search_space.json)*
*code directory: [`example/trials/systems/rocksdb-fillrandom/search_space.json`](https://github.com/microsoft/nni/tree/v1.9/examples/trials/systems/rocksdb-fillrandom/search_space.json)*
### Benchmark code
@@ -48,7 +48,7 @@ Benchmark code should receive a configuration from NNI manager, and report the c
* Use `nni.get_next_parameter()` to get next system configuration.
* Use `nni.report_final_result(metric)` to report the benchmark result.
*code directory: [`example/trials/systems/rocksdb-fillrandom/main.py`](https://github.com/microsoft/nni/tree/master/examples/trials/systems/rocksdb-fillrandom/main.py)*
*code directory: [`example/trials/systems/rocksdb-fillrandom/main.py`](https://github.com/microsoft/nni/tree/v1.9/examples/trials/systems/rocksdb-fillrandom/main.py)*
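As a rough sketch of how the two calls above fit together in a trial: the flag mapping and output parsing below are assumptions about `db_bench`, not taken from the example code.

```python
import subprocess
import nni

def parse_write_ops(output):
    """Assumes db_bench prints a line like 'fillrandom : ... 123456 ops/sec; ...'."""
    for line in output.splitlines():
        if line.strip().startswith("fillrandom"):
            return float(line.split("ops/sec")[0].split()[-1])
    raise RuntimeError("fillrandom result not found in db_bench output")

params = nni.get_next_parameter()                               # system configuration proposed by the tuner
flags = [f"--{key}={value}" for key, value in params.items()]   # assumed 1:1 mapping to db_bench flags
result = subprocess.run(["db_bench", "--benchmarks=fillrandom"] + flags,
                        capture_output=True, text=True)
nni.report_final_result(parse_write_ops(result.stdout))         # report write OPS back to NNI
```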
### Config file
@@ -56,11 +56,11 @@ One could start a NNI experiment with a config file. A config file for NNI is a
Here is an example of tuning RocksDB with the SMAC algorithm:
*code directory: [`example/trials/systems/rocksdb-fillrandom/config_smac.yml`](https://github.com/microsoft/nni/tree/master/examples/trials/systems/rocksdb-fillrandom/config_smac.yml)*
*code directory: [`example/trials/systems/rocksdb-fillrandom/config_smac.yml`](https://github.com/microsoft/nni/tree/v1.9/examples/trials/systems/rocksdb-fillrandom/config_smac.yml)*
Here is an example of tuning RocksDB with the TPE algorithm:
*code directory: [`example/trials/systems/rocksdb-fillrandom/config_tpe.yml`](https://github.com/microsoft/nni/tree/master/examples/trials/systems/rocksdb-fillrandom/config_tpe.yml)*
*code directory: [`example/trials/systems/rocksdb-fillrandom/config_tpe.yml`](https://github.com/microsoft/nni/tree/v1.9/examples/trials/systems/rocksdb-fillrandom/config_tpe.yml)*
Other tuners can be easily adopted in the same way. Please refer to [here](../Tuner/BuiltinTuner.md) for more information.
@@ -87,7 +87,7 @@ We ran these two examples on the same machine with following details:
The detailed experiment results are shown in the figure below. The horizontal axis is the sequential order of trials. The vertical axis is the metric, write OPS in this example. Blue dots represent trials for tuning RocksDB with the SMAC tuner, and orange dots stand for trials for tuning RocksDB with the TPE tuner.
![image](https://github.com/microsoft/nni/tree/master/examples/trials/systems/rocksdb-fillrandom/plot.png)
![image](https://github.com/microsoft/nni/tree/v1.9/examples/trials/systems/rocksdb-fillrandom/plot.png)
The following table lists the best trials and the corresponding parameters and metric obtained by the two tuners. Unsurprisingly, both of them found the same optimal configuration for the `fillrandom` benchmark.
@@ -99,7 +99,7 @@ useAnnotation: false
#Your nni_manager ip
nniManagerIp: 10.10.10.10
tuner:
codeDir: https://github.com/Microsoft/nni/tree/master/examples/tuners/ga_customer_tuner
codeDir: https://github.com/Microsoft/nni/tree/v1.9/examples/tuners/ga_customer_tuner
classFileName: customer_tuner.py
className: CustomerTuner
classArgs:
@@ -140,7 +140,7 @@ nni.get_trial_id # return "STANDALONE"
nni.get_sequence_id # return 0
```
You can try standalone mode with the [mnist example](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-tfv1). Simply run `python3 mnist.py` under the code directory. The trial code should successfully run with the default hyperparameter values.
You can try standalone mode with the [mnist example](https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-tfv1). Simply run `python3 mnist.py` under the code directory. The trial code should successfully run with the default hyperparameter values.
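To illustrate why this works, a trial script can simply fall back to default hyperparameters when no tuner is attached. The defaults and the empty-config behaviour assumed below are illustrative, not taken from the mnist example.

```python
import nni

# Defaults keep the script runnable with `python3 trial.py` in standalone mode.
params = {"learning_rate": 0.01, "batch_size": 64}
tuned = nni.get_next_parameter()   # assumed to return an empty/placeholder config in standalone mode
if tuned:
    params.update(tuned)

print("trial id:", nni.get_trial_id(), "params:", params)
nni.report_final_result(0.0)       # placeholder metric; harmless when there is no NNI manager
```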
For more information on debugging, please refer to [How to Debug](../Tutorial/HowToDebug.md)
@@ -92,7 +92,7 @@ The advisor has a lot of different files, functions, and classes. Here, we will
### MNIST with BOHB
code implementation: [examples/trials/mnist-advisor](https://github.com/Microsoft/nni/tree/master/examples/trials/)
code implementation: [examples/trials/mnist-advisor](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/)
We chose BOHB to build a CNN on the MNIST dataset. The following are our final experimental results:
@@ -270,11 +270,11 @@ advisor:
**Installation**
NetworkMorphism requires [PyTorch](https://pytorch.org/get-started/locally) and [Keras](https://keras.io/#installation), so users should install them first. The corresponding requirements file is [here](https://github.com/microsoft/nni/blob/master/examples/trials/network_morphism/requirements.txt).
NetworkMorphism requires [PyTorch](https://pytorch.org/get-started/locally) and [Keras](https://keras.io/#installation), so users should install them first. The corresponding requirements file is [here](https://github.com/microsoft/nni/blob/v1.9/examples/trials/network_morphism/requirements.txt).
**Suggested scenario**
This is suggested when you want to apply deep learning methods to your task but you have no idea how to choose or design a network. You may modify this [example](https://github.com/Microsoft/nni/tree/master/examples/trials/network_morphism/cifar10/cifar10_keras.py) to fit your own dataset and your own data augmentation method. Also you can change the batch size, learning rate, or optimizer. Currently, this tuner only supports the computer vision domain. [Detailed Description](./NetworkmorphismTuner.md)
This is suggested when you want to apply deep learning methods to your task but you have no idea how to choose or design a network. You may modify this [example](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/network_morphism/cifar10/cifar10_keras.py) to fit your own dataset and your own data augmentation method. Also you can change the batch size, learning rate, or optimizer. Currently, this tuner only supports the computer vision domain. [Detailed Description](./NetworkmorphismTuner.md)
**classArgs Requirements:**
@@ -310,7 +310,7 @@ Note that the only acceptable types of search space types are `quniform`, `unifo
**Suggested scenario**
Similar to TPE and SMAC, Metis is a black-box tuner. If your system takes a long time to finish each trial, Metis is more favorable than other approaches such as random search. Furthermore, Metis provides guidance on subsequent trials. Here is an [example](https://github.com/Microsoft/nni/tree/master/examples/trials/auto-gbdt/search_space_metis.json) on the use of Metis. Users only need to send the final result, such as `accuracy`, to the tuner by calling the NNI SDK. [Detailed Description](./MetisTuner.md)
Similar to TPE and SMAC, Metis is a black-box tuner. If your system takes a long time to finish each trial, Metis is more favorable than other approaches such as random search. Furthermore, Metis provides guidance on subsequent trials. Here is an [example](https://github.com/Microsoft/nni/tree/v1.9/examples/trials/auto-gbdt/search_space_metis.json) on the use of Metis. Users only need to send the final result, such as `accuracy`, to the tuner by calling the NNI SDK. [Detailed Description](./MetisTuner.md)
**classArgs Requirements:**
@@ -425,7 +425,7 @@ Note that the only acceptable types within the search space are `layer_choice` a
**Suggested scenario**
PPOTuner is a Reinforcement Learning tuner based on the PPO algorithm. PPOTuner can be used when using the NNI NAS interface to do neural architecture search. In general, the Reinforcement Learning algorithm needs more computing resources, though the PPO algorithm is relatively more efficient than others. It's recommended to use this tuner when you have a large amount of computational resources available. You could try it on a very simple task, such as the [mnist-nas](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-nas) example. [See details](./PPOTuner.md)
PPOTuner is a Reinforcement Learning tuner based on the PPO algorithm. PPOTuner can be used when using the NNI NAS interface to do neural architecture search. In general, the Reinforcement Learning algorithm needs more computing resources, though the PPO algorithm is relatively more efficient than others. It's recommended to use this tuner when you have a large amount of computational resources available. You could try it on a very simple task, such as the [mnist-nas](https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-nas) example. [See details](./PPOTuner.md)
**classArgs Requirements:**
@@ -485,6 +485,6 @@ Note that, to use this tuner, your trial code should be modified accordingly, pl
## **Reference and Feedback**
* To [report a bug](https://github.com/microsoft/nni/issues/new?template=bug-report.md) for this feature in GitHub;
* To [file a feature or improvement request](https://github.com/microsoft/nni/issues/new?template=enhancement.md) for this feature in GitHub;
* To know more about [Feature Engineering with NNI](https://github.com/microsoft/nni/blob/master/docs/en_US/FeatureEngineering/Overview.md);
* To know more about [NAS with NNI](https://github.com/microsoft/nni/blob/master/docs/en_US/NAS/Overview.md);
* To know more about [Model Compression with NNI](https://github.com/microsoft/nni/blob/master/docs/en_US/Compression/Overview.md);
* To know more about [Feature Engineering with NNI](https://github.com/microsoft/nni/blob/v1.9/docs/en_US/FeatureEngineering/Overview.md);
* To know more about [NAS with NNI](https://github.com/microsoft/nni/blob/v1.9/docs/en_US/NAS/Overview.md);
* To know more about [Model Compression with NNI](https://github.com/microsoft/nni/blob/v1.9/docs/en_US/Compression/Overview.md);
@@ -37,4 +37,4 @@ advisor:
## Example
Here we provide an [example](https://github.com/microsoft/nni/tree/master/examples/tuners/mnist_keras_customized_advisor).
Here we provide an [example](https://github.com/microsoft/nni/tree/v1.9/examples/tuners/mnist_keras_customized_advisor).
@@ -113,10 +113,10 @@ tuner:
```
For more detailed examples, see:
> * [evolution-tuner](https://github.com/Microsoft/nni/tree/master/src/sdk/pynni/nni/evolution_tuner)
> * [hyperopt-tuner](https://github.com/Microsoft/nni/tree/master/src/sdk/pynni/nni/hyperopt_tuner)
> * [evolution-based-customized-tuner](https://github.com/Microsoft/nni/tree/master/examples/tuners/ga_customer_tuner)
> * [evolution-tuner](https://github.com/Microsoft/nni/tree/v1.9/src/sdk/pynni/nni/evolution_tuner)
> * [hyperopt-tuner](https://github.com/Microsoft/nni/tree/v1.9/src/sdk/pynni/nni/hyperopt_tuner)
> * [evolution-based-customized-tuner](https://github.com/Microsoft/nni/tree/v1.9/examples/tuners/ga_customer_tuner)
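For orientation, here is a minimal, hypothetical tuner following the interface these examples implement. It assumes NNI v1.x's `nni.tuner.Tuner` base class and the standard `_type`/`_value` search-space format; it is a sketch, not one of the linked examples.

```python
import random
from nni.tuner import Tuner  # assumed import path for NNI v1.x

class RandomChoiceTuner(Tuner):
    """Illustrative tuner: samples uniformly from 'choice'-type search-space entries."""

    def update_search_space(self, search_space):
        self.search_space = search_space

    def generate_parameters(self, parameter_id, **kwargs):
        # Called by NNI to produce the configuration for one trial.
        return {name: random.choice(spec["_value"])
                for name, spec in self.search_space.items()}

    def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
        # A real tuner would use `value` (the reported metric) to guide later samples.
        pass
```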
### Write a more advanced automl algorithm
The methods above are usually enough to write a general tuner. However, users may also want access to more information, for example, intermediate results and trials' states (e.g., the methods in the assessor), in order to build a more powerful AutoML algorithm. Therefore, we have another concept called `advisor`, which directly inherits from `MsgDispatcherBase` in [`src/sdk/pynni/nni/msg_dispatcher_base.py`](https://github.com/Microsoft/nni/tree/master/src/sdk/pynni/nni/msg_dispatcher_base.py). Please refer to [here](CustomizeAdvisor.md) for how to write a customized advisor.
The methods above are usually enough to write a general tuner. However, users may also want access to more information, for example, intermediate results and trials' states (e.g., the methods in the assessor), in order to build a more powerful AutoML algorithm. Therefore, we have another concept called `advisor`, which directly inherits from `MsgDispatcherBase` in [`src/sdk/pynni/nni/msg_dispatcher_base.py`](https://github.com/Microsoft/nni/tree/v1.9/src/sdk/pynni/nni/msg_dispatcher_base.py). Please refer to [here](CustomizeAdvisor.md) for how to write a customized advisor.
@@ -4,7 +4,7 @@
[Autokeras](https://arxiv.org/abs/1806.10282) is a popular AutoML tool that uses network morphism. The basic idea of Autokeras is to use Bayesian regression to estimate the metric of a neural network architecture. Each time, it generates several child networks from parent networks. Then it uses naïve Bayesian regression to estimate each child's metric value from the history of trained (network, metric value) pairs. Next, it chooses the child with the best estimated performance and adds it to the training queue. Inspired by the work of Autokeras and referring to its [code](https://github.com/jhfjhfj1/autokeras), we implemented our network morphism method on the NNI platform.
If you want to know more about network morphism trial usage, please see the [Readme.md](https://github.com/Microsoft/nni/blob/master/examples/trials/network_morphism/README.md).
If you want to know more about network morphism trial usage, please see the [Readme.md](https://github.com/Microsoft/nni/blob/v1.9/examples/trials/network_morphism/README.md).
## 2. Usage
@@ -31,7 +31,7 @@ save_path = os.path.join(params['save_checkpoint_dir'], 'model.pth')
...
```
The complete example code can be found [here](https://github.com/microsoft/nni/tree/master/examples/trials/mnist-pbt-tuner-pytorch).
The complete example code can be found [here](https://github.com/microsoft/nni/tree/v1.9/examples/trials/mnist-pbt-tuner-pytorch).
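A rough sketch of the checkpoint handling that PBT relies on is shown below. The `load_checkpoint_dir` key and the tiny stand-in model are assumptions for illustration; see the linked example for the real code.

```python
import os
import torch
import nni

model = torch.nn.Linear(10, 1)                      # stand-in for the real network

params = nni.get_next_parameter()
# 'save_checkpoint_dir' appears in the snippet above; a matching 'load_checkpoint_dir' is assumed here.
load_path = os.path.join(params.get('load_checkpoint_dir', ''), 'model.pth')
if os.path.isfile(load_path):
    model.load_state_dict(torch.load(load_path))    # warm-start from the inherited population member

# ... train for one PBT interval ...

save_path = os.path.join(params['save_checkpoint_dir'], 'model.pth')
torch.save(model.state_dict(), save_path)           # checkpoint for the next PBT step
```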
### Experiment config