Unverified commit b91aba39 authored by Yuge Zhang, committed by GitHub

NAS Benchmark (#2578)

* Adding NAS Benchmark (201)

* Add missing endline

* Update script

* Draft for NAS-Bench-101

* Update NAS-Bench-101

* Update constants

* Add API

* Update API

* Fix typo

* Draft for NDS

* Fix issues in storing loss

* Fix cell_spec problem

* Finalize NDS

* Update time consumption

* Add nds query function

* Update documentation for NAS-Bench-101

* Reformat generators

* Add NAS-Bench-201 docs

* Unite constant names

* Update docstring

* Update docstring

* Update rst

* Update scripts

* Add git as dependency

* Apt update

* Update installation scripts

* Fix dependency for pipeline

* Fix NDS script

* Fix NAS-Bench-201 installation

* Add example notebook

* Correct latency dimension

* shortcuts -> query

* Change run -> trial, ComputedStats -> TrialStats

* ipynb needs re-generation

* Fix NAS rst

* Fix documentation and pylint

* Fix pylint

* Add pandoc as dependency

* Update pandoc dependency

* Fix documentation broken link
parent 3fb49d9b
@@ -26,11 +26,12 @@ jobs:
     displayName: 'Run eslint'
   - script: |
       set -e
+      sudo apt-get install -y pandoc
       python3 -m pip install torch==1.5.0+cpu torchvision==0.6.0+cpu -f https://download.pytorch.org/whl/torch_stable.html --user
       python3 -m pip install tensorflow==2.2.0 --user
       python3 -m pip install keras==2.4.2 --user
-      python3 -m pip install gym onnx --user
+      python3 -m pip install gym onnx peewee --user
-      python3 -m pip install sphinx==1.8.3 sphinx-argparse==0.2.5 sphinx-markdown-tables==0.0.9 sphinx-rtd-theme==0.4.2 sphinxcontrib-websupport==1.1.0 recommonmark==0.5.0 --user
+      python3 -m pip install sphinx==1.8.3 sphinx-argparse==0.2.5 sphinx-markdown-tables==0.0.9 sphinx-rtd-theme==0.4.2 sphinxcontrib-websupport==1.1.0 recommonmark==0.5.0 nbsphinx --user
       sudo apt-get install swig -y
       nnictl package install --name=SMAC
       nnictl package install --name=BOHB
@@ -68,7 +69,7 @@ jobs:
       python3 -m pip install torch==1.3.1+cpu torchvision==0.4.2+cpu -f https://download.pytorch.org/whl/torch_stable.html --user
       python3 -m pip install tensorflow==1.15.2 --user
       python3 -m pip install keras==2.1.6 --user
-      python3 -m pip install gym onnx --user
+      python3 -m pip install gym onnx peewee --user
       sudo apt-get install swig -y
       nnictl package install --name=SMAC
       nnictl package install --name=BOHB
...
# NAS Benchmarks (experimental)
```eval_rst
.. toctree::
    :hidden:

    Example Usages <BenchmarksExample>
```
## Prerequisites
* Please prepare a folder to house all the benchmark databases. By default, it is located at `${HOME}/.nni/nasbenchmark`. You can place it anywhere you like, and specify its path in the `NASBENCHMARK_DIR` environment variable before importing NNI (see the sketch below).
* Please install `peewee` via `pip install peewee`, which NNI uses to connect to the database.
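For example, the following minimal sketch (the custom path is hypothetical) points NNI at a different folder. `NASBENCHMARK_DIR` must be set before the first benchmark import, because the database directory is resolved at import time.
```python
import os

# Hypothetical custom location; defaults to ~/.nni/nasbenchmark if unset.
os.environ['NASBENCHMARK_DIR'] = '/data/nasbenchmark'

# Import only after the variable is set, so NNI looks up the databases there.
from nni.nas.benchmarks.nasbench101 import query_nb101_trial_stats
```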
## Data Preparation
To avoid storage and legal issues, we do not provide any prepared databases. We strongly recommend using Docker to run the generation scripts, as this eases the burden of installing multiple dependencies. Please follow the steps below.
1. Clone NNI repo. Replace `${NNI_VERSION}` with a released version name or branch name, e.g., `v1.6`.
```bash
git clone -b ${NNI_VERSION} https://github.com/microsoft/nni
```
2. Run docker.
For NAS-Bench-101,
```bash
docker run -v ${HOME}/.nni/nasbenchmark:/outputs -v /path/to/your/nni:/nni tensorflow/tensorflow:1.15.2-py3 /bin/bash /nni/examples/nas/benchmarks/nasbench101.sh
```
For NAS-Bench-201,
```bash
docker run -v ${HOME}/.nni/nasbenchmark:/outputs -v /path/to/your/nni:/nni ufoym/deepo:pytorch-cpu /bin/bash /nni/examples/nas/benchmarks/nasbench201.sh
```
For NDS,
```bash
docker run -v ${HOME}/.nni/nasbenchmark:/outputs -v /path/to/your/nni:/nni python:3.7 /bin/bash /nni/examples/nas/benchmarks/nds.sh
```
Please make sure there is at least 10 GB of free disk space, and note that the conversion process can take several hours to complete.
## Example Usages
Please refer to [example usages of the benchmarks API](./BenchmarksExample).
## NAS-Bench-101
[Paper link](https://arxiv.org/abs/1902.09635) &nbsp; &nbsp; [Open-source](https://github.com/google-research/nasbench)
NAS-Bench-101 contains 423,624 unique neural networks, each trained under 4 epoch budgets (4, 12, 36, 108) with 3 repetitions per budget. It is a cell-wise search space, which constructs and stacks cells by enumerating DAGs with at most 7 operators and no more than 9 connections. All operators are chosen from `CONV3X3_BN_RELU`, `CONV1X1_BN_RELU` and `MAXPOOL3X3`, except the first operator (always `INPUT`) and the last operator (always `OUTPUT`).
Notably, NAS-Bench-101 eliminates invalid cells (e.g., cells with no path from input to output, or cells with redundant computation). Furthermore, isomorphic cells are de-duplicated, i.e., all the remaining cells are computationally unique.
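As a quick start, here is a minimal query sketch mirroring the [example notebook](./BenchmarksExample): it looks up all trials of one architecture trained for 108 epochs.
```python
import pprint
from nni.nas.benchmarks.nasbench101 import query_nb101_trial_stats

# Architecture in NNI format: operators ``op1`` .. ``op5`` plus
# ``input{i}`` lists that name each vertex's predecessors.
arch = {
    'op1': 'conv3x3-bn-relu', 'op2': 'maxpool3x3', 'op3': 'conv3x3-bn-relu',
    'op4': 'conv3x3-bn-relu', 'op5': 'conv1x1-bn-relu',
    'input1': [0], 'input2': [1], 'input3': [2],
    'input4': [0], 'input5': [0, 3, 4], 'input6': [2, 5],
}
# Yields one dict per matching trial (3 repetitions at 108 epochs).
for t in query_nb101_trial_stats(arch, 108):
    pprint.pprint(t)
```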
### API Documentation
```eval_rst
.. autofunction:: nni.nas.benchmarks.nasbench101.query_nb101_trial_stats
.. autoattribute:: nni.nas.benchmarks.nasbench101.INPUT
.. autoattribute:: nni.nas.benchmarks.nasbench101.OUTPUT
.. autoattribute:: nni.nas.benchmarks.nasbench101.CONV3X3_BN_RELU
.. autoattribute:: nni.nas.benchmarks.nasbench101.CONV1X1_BN_RELU
.. autoattribute:: nni.nas.benchmarks.nasbench101.MAXPOOL3X3
.. autoclass:: nni.nas.benchmarks.nasbench101.Nb101TrialConfig
.. autoclass:: nni.nas.benchmarks.nasbench101.Nb101TrialStats
.. autoclass:: nni.nas.benchmarks.nasbench101.Nb101IntermediateStats
.. autofunction:: nni.nas.benchmarks.nasbench101.graph_util.nasbench_format_to_architecture_repr
.. autofunction:: nni.nas.benchmarks.nasbench101.graph_util.infer_num_vertices
.. autofunction:: nni.nas.benchmarks.nasbench101.graph_util.hash_module
```
## NAS-Bench-201
[Paper link](https://arxiv.org/abs/2001.00326) &nbsp; &nbsp; [Open-source API](https://github.com/D-X-Y/NAS-Bench-201) &nbsp; &nbsp;[Implementations](https://github.com/D-X-Y/AutoDL-Projects)
NAS-Bench-201 is a cell-wise search space that views nodes as tensors and edges as operators. The search space contains all possible densely-connected DAGs with 4 nodes, resulting in 15,625 candidates in total. Each operator (i.e., edge) is selected from a pre-defined operator set (`NONE`, `SKIP_CONNECT`, `CONV_1X1`, `CONV_3X3` and `AVG_POOL_3X3`). Training approaches vary in the dataset used (CIFAR-10, CIFAR-100, ImageNet) and the number of epochs scheduled (12 and 200). Each combination of architecture and training approach is repeated 1 to 3 times with different random seeds.
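A minimal query sketch, also mirroring the [example notebook](./BenchmarksExample); each key of the architecture dict is an edge `{from}_{to}` of the densely-connected 4-node cell.
```python
import pprint
from nni.nas.benchmarks.nasbench201 import query_nb201_trial_stats

arch = {
    '0_1': 'avg_pool_3x3', '0_2': 'conv_1x1', '0_3': 'conv_1x1',
    '1_2': 'skip_connect', '1_3': 'skip_connect', '2_3': 'skip_connect',
}
# Yields one dict per trial of this cell on CIFAR-100 with 200 epochs,
# one per random seed.
for t in query_nb201_trial_stats(arch, 200, 'cifar100'):
    pprint.pprint(t)
```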
### API Documentation
```eval_rst
.. autofunction:: nni.nas.benchmarks.nasbench201.query_nb201_trial_stats
.. autoattribute:: nni.nas.benchmarks.nasbench201.NONE
.. autoattribute:: nni.nas.benchmarks.nasbench201.SKIP_CONNECT
.. autoattribute:: nni.nas.benchmarks.nasbench201.CONV_1X1
.. autoattribute:: nni.nas.benchmarks.nasbench201.CONV_3X3
.. autoattribute:: nni.nas.benchmarks.nasbench201.AVG_POOL_3X3
.. autoclass:: nni.nas.benchmarks.nasbench201.Nb201TrialConfig
.. autoclass:: nni.nas.benchmarks.nasbench201.Nb201TrialStats
.. autoclass:: nni.nas.benchmarks.nasbench201.Nb201IntermediateStats
```
## NDS
[Paper link](https://arxiv.org/abs/1905.13214) &nbsp; &nbsp; [Open-source](https://github.com/facebookresearch/nds)
_On Network Design Spaces for Visual Recognition_ released trial statistics of over 100,000 configurations (models + hyper-parameters) sampled from multiple model families, including vanilla (a feedforward network loosely inspired by VGG), ResNet and ResNeXt (residual basic blocks and residual bottleneck blocks), and NAS cells (following popular designs from NASNet, Amoeba, PNAS, ENAS and DARTS). Most configurations are trained only once with a fixed seed, except a few that are trained twice or three times.
Instead of storing results obtained with different configurations in separate files, we dump them into one single database to enable comparison across multiple dimensions. Specifically, we use `model_family` to distinguish model types, `model_spec` for all hyper-parameters needed to build the model, `cell_spec` for detailed information on operators and connections if it is a NAS cell, and `generator` to denote the sampling policy through which the configuration was generated. Refer to the API documentation for details; a minimal query sketch follows.
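This sketch is taken from the [example notebook](./BenchmarksExample): filter by model family and `model_spec`, passing `None` as a wildcard in the remaining dimensions.
```python
import pprint
from nni.nas.benchmarks.nds import query_nds_trial_stats

model_spec = {
    'bot_muls': [0.0, 0.25, 0.25, 0.25], 'ds': [1, 16, 1, 4],
    'num_gs': [1, 2, 1, 2], 'ss': [1, 1, 2, 2], 'ws': [16, 64, 128, 16],
}
# Wildcards: proposer, generator and cell_spec are all None here.
for t in query_nds_trial_stats('residual_bottleneck', None, None, model_spec, None, 'cifar10'):
    pprint.pprint(t)
```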
### Available Operators
Here is a list of available operators used in NDS.
```eval_rst
.. autoattribute:: nni.nas.benchmarks.nds.constants.NONE
.. autoattribute:: nni.nas.benchmarks.nds.constants.SKIP_CONNECT
.. autoattribute:: nni.nas.benchmarks.nds.constants.AVG_POOL_3X3
.. autoattribute:: nni.nas.benchmarks.nds.constants.MAX_POOL_3X3
.. autoattribute:: nni.nas.benchmarks.nds.constants.MAX_POOL_5X5
.. autoattribute:: nni.nas.benchmarks.nds.constants.MAX_POOL_7X7
.. autoattribute:: nni.nas.benchmarks.nds.constants.CONV_1X1
.. autoattribute:: nni.nas.benchmarks.nds.constants.CONV_3X3
.. autoattribute:: nni.nas.benchmarks.nds.constants.CONV_3X1_1X3
.. autoattribute:: nni.nas.benchmarks.nds.constants.CONV_7X1_1X7
.. autoattribute:: nni.nas.benchmarks.nds.constants.DIL_CONV_3X3
.. autoattribute:: nni.nas.benchmarks.nds.constants.DIL_CONV_5X5
.. autoattribute:: nni.nas.benchmarks.nds.constants.SEP_CONV_3X3
.. autoattribute:: nni.nas.benchmarks.nds.constants.SEP_CONV_5X5
.. autoattribute:: nni.nas.benchmarks.nds.constants.SEP_CONV_7X7
.. autoattribute:: nni.nas.benchmarks.nds.constants.DIL_SEP_CONV_3X3
```
### API Documentation
```eval_rst
.. autofunction:: nni.nas.benchmarks.nds.query_nds_trial_stats
.. autoclass:: nni.nas.benchmarks.nds.NdsTrialConfig
.. autoclass:: nni.nas.benchmarks.nds.NdsTrialStats
.. autoclass:: nni.nas.benchmarks.nds.NdsIntermediateStats
```
{
"nbformat": 4,
"nbformat_minor": 2,
"metadata": {
"language_info": {
"name": "python",
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"version": "3.6.10-final"
},
"orig_nbformat": 2,
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"npconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": 3,
"kernelspec": {
"name": "python361064bitnnilatestcondabff8d66a619a4d26af34fe0fe687c7b0",
"display_name": "Python 3.6.10 64-bit ('nnilatest': conda)"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Example Usages of NAS Benchmarks"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import pprint\n",
"import time\n",
"\n",
"from nni.nas.benchmarks.nasbench101 import query_nb101_trial_stats\n",
"from nni.nas.benchmarks.nasbench201 import query_nb201_trial_stats\n",
"from nni.nas.benchmarks.nds import query_nds_trial_stats\n",
"\n",
"ti = time.time()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## NAS-Bench-101"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "{'config': {'arch': {'input1': [0],\n 'input2': [1],\n 'input3': [2],\n 'input4': [0],\n 'input5': [0, 3, 4],\n 'input6': [2, 5],\n 'op1': 'conv3x3-bn-relu',\n 'op2': 'maxpool3x3',\n 'op3': 'conv3x3-bn-relu',\n 'op4': 'conv3x3-bn-relu',\n 'op5': 'conv1x1-bn-relu'},\n 'hash': '00005c142e6f48ac74fdcf73e3439874',\n 'id': 4,\n 'num_epochs': 108,\n 'num_vertices': 7},\n 'id': 10,\n 'parameters': 8.55553,\n 'test_acc': 92.11738705635071,\n 'train_acc': 100.0,\n 'training_time': 106147.67578125,\n 'valid_acc': 92.41786599159241}\n{'config': {'arch': {'input1': [0],\n 'input2': [1],\n 'input3': [2],\n 'input4': [0],\n 'input5': [0, 3, 4],\n 'input6': [2, 5],\n 'op1': 'conv3x3-bn-relu',\n 'op2': 'maxpool3x3',\n 'op3': 'conv3x3-bn-relu',\n 'op4': 'conv3x3-bn-relu',\n 'op5': 'conv1x1-bn-relu'},\n 'hash': '00005c142e6f48ac74fdcf73e3439874',\n 'id': 4,\n 'num_epochs': 108,\n 'num_vertices': 7},\n 'id': 11,\n 'parameters': 8.55553,\n 'test_acc': 91.90705418586731,\n 'train_acc': 100.0,\n 'training_time': 106095.05859375,\n 'valid_acc': 92.45793223381042}\n{'config': {'arch': {'input1': [0],\n 'input2': [1],\n 'input3': [2],\n 'input4': [0],\n 'input5': [0, 3, 4],\n 'input6': [2, 5],\n 'op1': 'conv3x3-bn-relu',\n 'op2': 'maxpool3x3',\n 'op3': 'conv3x3-bn-relu',\n 'op4': 'conv3x3-bn-relu',\n 'op5': 'conv1x1-bn-relu'},\n 'hash': '00005c142e6f48ac74fdcf73e3439874',\n 'id': 4,\n 'num_epochs': 108,\n 'num_vertices': 7},\n 'id': 12,\n 'parameters': 8.55553,\n 'test_acc': 92.15745329856873,\n 'train_acc': 100.0,\n 'training_time': 106138.55712890625,\n 'valid_acc': 93.04887652397156}\n"
}
],
"source": [
"arch = {\n",
" 'op1': 'conv3x3-bn-relu',\n",
" 'op2': 'maxpool3x3',\n",
" 'op3': 'conv3x3-bn-relu',\n",
" 'op4': 'conv3x3-bn-relu',\n",
" 'op5': 'conv1x1-bn-relu',\n",
" 'input1': [0],\n",
" 'input2': [1],\n",
" 'input3': [2],\n",
" 'input4': [0],\n",
" 'input5': [0, 3, 4],\n",
" 'input6': [2, 5]\n",
"}\n",
"for t in query_nb101_trial_stats(arch, 108):\n",
" pprint.pprint(t)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## NAS-Bench-201"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "{'config': {'arch': {'0_1': 'avg_pool_3x3',\n '0_2': 'conv_1x1',\n '0_3': 'conv_1x1',\n '1_2': 'skip_connect',\n '1_3': 'skip_connect',\n '2_3': 'skip_connect'},\n 'dataset': 'cifar100',\n 'id': 7,\n 'num_cells': 5,\n 'num_channels': 16,\n 'num_epochs': 200},\n 'flops': 15.65322,\n 'id': 3,\n 'latency': 0.013182918230692545,\n 'ori_test_acc': 53.11,\n 'ori_test_evaluation_time': 1.0195916947864352,\n 'ori_test_loss': 1.7307863704681397,\n 'parameters': 0.135156,\n 'seed': 999,\n 'test_acc': 53.07999995727539,\n 'test_evaluation_time': 0.5097958473932176,\n 'test_loss': 1.731276072692871,\n 'train_acc': 57.82,\n 'train_loss': 1.5116578379058838,\n 'training_time': 2888.4371995925903,\n 'valid_acc': 53.14000000610351,\n 'valid_evaluation_time': 0.5097958473932176,\n 'valid_loss': 1.7302966793060304}\n{'config': {'arch': {'0_1': 'avg_pool_3x3',\n '0_2': 'conv_1x1',\n '0_3': 'conv_1x1',\n '1_2': 'skip_connect',\n '1_3': 'skip_connect',\n '2_3': 'skip_connect'},\n 'dataset': 'cifar100',\n 'id': 7,\n 'num_cells': 5,\n 'num_channels': 16,\n 'num_epochs': 200},\n 'flops': 15.65322,\n 'id': 7,\n 'latency': 0.013182918230692545,\n 'ori_test_acc': 51.93,\n 'ori_test_evaluation_time': 1.0195916947864352,\n 'ori_test_loss': 1.7572312774658203,\n 'parameters': 0.135156,\n 'seed': 777,\n 'test_acc': 51.979999938964845,\n 'test_evaluation_time': 0.5097958473932176,\n 'test_loss': 1.7429540189743042,\n 'train_acc': 57.578,\n 'train_loss': 1.5114233912658692,\n 'training_time': 2888.4371995925903,\n 'valid_acc': 51.88,\n 'valid_evaluation_time': 0.5097958473932176,\n 'valid_loss': 1.7715086591720581}\n{'config': {'arch': {'0_1': 'avg_pool_3x3',\n '0_2': 'conv_1x1',\n '0_3': 'conv_1x1',\n '1_2': 'skip_connect',\n '1_3': 'skip_connect',\n '2_3': 'skip_connect'},\n 'dataset': 'cifar100',\n 'id': 7,\n 'num_cells': 5,\n 'num_channels': 16,\n 'num_epochs': 200},\n 'flops': 15.65322,\n 'id': 11,\n 'latency': 0.013182918230692545,\n 'ori_test_acc': 53.38,\n 'ori_test_evaluation_time': 1.0195916947864352,\n 'ori_test_loss': 1.7281623031616211,\n 'parameters': 0.135156,\n 'seed': 888,\n 'test_acc': 53.67999998779297,\n 'test_evaluation_time': 0.5097958473932176,\n 'test_loss': 1.7327697801589965,\n 'train_acc': 57.792,\n 'train_loss': 1.5091403088760376,\n 'training_time': 2888.4371995925903,\n 'valid_acc': 53.08000000610352,\n 'valid_evaluation_time': 0.5097958473932176,\n 'valid_loss': 1.7235548280715942}\n"
}
],
"source": [
"arch = {\n",
" '0_1': 'avg_pool_3x3',\n",
" '0_2': 'conv_1x1',\n",
" '1_2': 'skip_connect',\n",
" '0_3': 'conv_1x1',\n",
" '1_3': 'skip_connect',\n",
" '2_3': 'skip_connect'\n",
"}\n",
"for t in query_nb201_trial_stats(arch, 200, 'cifar100'):\n",
" pprint.pprint(t)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## NDS"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "{'best_test_acc': 90.48,\n 'best_train_acc': 96.356,\n 'best_train_loss': 0.116,\n 'config': {'base_lr': 0.1,\n 'cell_spec': {},\n 'dataset': 'cifar10',\n 'generator': 'random',\n 'id': 45505,\n 'model_family': 'residual_bottleneck',\n 'model_spec': {'bot_muls': [0.0, 0.25, 0.25, 0.25],\n 'ds': [1, 16, 1, 4],\n 'num_gs': [1, 2, 1, 2],\n 'ss': [1, 1, 2, 2],\n 'ws': [16, 64, 128, 16]},\n 'num_epochs': 100,\n 'proposer': 'resnext-a',\n 'weight_decay': 0.0005},\n 'final_test_acc': 90.39,\n 'final_train_acc': 96.298,\n 'final_train_loss': 0.116,\n 'flops': 69.890986,\n 'id': 45505,\n 'iter_time': 0.065,\n 'parameters': 0.083002,\n 'seed': 1}\n"
}
],
"source": [
"model_spec = {\n",
" 'bot_muls': [0.0, 0.25, 0.25, 0.25],\n",
" 'ds': [1, 16, 1, 4],\n",
" 'num_gs': [1, 2, 1, 2],\n",
" 'ss': [1, 1, 2, 2],\n",
" 'ws': [16, 64, 128, 16]\n",
"}\n",
"# Use none as a wildcard\n",
"for t in query_nds_trial_stats('residual_bottleneck', None, None, model_spec, None, 'cifar10'):\n",
" pprint.pprint(t)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "{'best_test_acc': 93.58,\n 'best_train_acc': 99.772,\n 'best_train_loss': 0.011,\n 'config': {'base_lr': 0.1,\n 'cell_spec': {},\n 'dataset': 'cifar10',\n 'generator': 'random',\n 'id': 108998,\n 'model_family': 'residual_basic',\n 'model_spec': {'ds': [1, 12, 12, 12],\n 'ss': [1, 1, 2, 2],\n 'ws': [16, 24, 24, 40]},\n 'num_epochs': 100,\n 'proposer': 'resnet',\n 'weight_decay': 0.0005},\n 'final_test_acc': 93.49,\n 'final_train_acc': 99.772,\n 'final_train_loss': 0.011,\n 'flops': 184.519578,\n 'id': 108998,\n 'iter_time': 0.059,\n 'parameters': 0.594138,\n 'seed': 1}\n"
}
],
"source": [
"model_spec = {'ds': [1, 12, 12, 12], 'ss': [1, 1, 2, 2], 'ws': [16, 24, 24, 40]}\n",
"for t in query_nds_trial_stats('residual_basic', 'resnet', 'random', model_spec, {}, 'cifar10'):\n",
" pprint.pprint(t)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "{'best_test_acc': 84.5,\n 'best_train_acc': 89.66499999999999,\n 'best_train_loss': 0.302,\n 'config': {'base_lr': 0.1,\n 'cell_spec': {},\n 'dataset': 'cifar10',\n 'generator': 'random',\n 'id': 139492,\n 'model_family': 'vanilla',\n 'model_spec': {'ds': [1, 12, 12, 12],\n 'ss': [1, 1, 2, 2],\n 'ws': [16, 24, 32, 40]},\n 'num_epochs': 100,\n 'proposer': 'vanilla',\n 'weight_decay': 0.0005},\n 'final_test_acc': 84.35,\n 'final_train_acc': 89.633,\n 'final_train_loss': 0.303,\n 'flops': 208.36393,\n 'id': 154692,\n 'iter_time': 0.058,\n 'parameters': 0.68977,\n 'seed': 1}\n"
}
],
"source": [
"# get the first one\n",
"pprint.pprint(next(query_nds_trial_stats('vanilla', None, None, None, None, None)))"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "{'best_test_acc': 93.37,\n 'best_train_acc': 99.91,\n 'best_train_loss': 0.006,\n 'config': {'base_lr': 0.1,\n 'cell_spec': {'normal_0_input_x': 0,\n 'normal_0_input_y': 1,\n 'normal_0_op_x': 'avg_pool_3x3',\n 'normal_0_op_y': 'conv_7x1_1x7',\n 'normal_1_input_x': 2,\n 'normal_1_input_y': 0,\n 'normal_1_op_x': 'sep_conv_3x3',\n 'normal_1_op_y': 'sep_conv_5x5',\n 'normal_2_input_x': 2,\n 'normal_2_input_y': 2,\n 'normal_2_op_x': 'dil_sep_conv_3x3',\n 'normal_2_op_y': 'dil_sep_conv_3x3',\n 'normal_3_input_x': 4,\n 'normal_3_input_y': 4,\n 'normal_3_op_x': 'skip_connect',\n 'normal_3_op_y': 'dil_sep_conv_3x3',\n 'normal_4_input_x': 2,\n 'normal_4_input_y': 4,\n 'normal_4_op_x': 'conv_7x1_1x7',\n 'normal_4_op_y': 'sep_conv_3x3',\n 'normal_concat': [3, 5, 6],\n 'reduce_0_input_x': 0,\n 'reduce_0_input_y': 1,\n 'reduce_0_op_x': 'avg_pool_3x3',\n 'reduce_0_op_y': 'dil_sep_conv_3x3',\n 'reduce_1_input_x': 0,\n 'reduce_1_input_y': 0,\n 'reduce_1_op_x': 'sep_conv_3x3',\n 'reduce_1_op_y': 'sep_conv_3x3',\n 'reduce_2_input_x': 2,\n 'reduce_2_input_y': 0,\n 'reduce_2_op_x': 'skip_connect',\n 'reduce_2_op_y': 'sep_conv_7x7',\n 'reduce_3_input_x': 4,\n 'reduce_3_input_y': 4,\n 'reduce_3_op_x': 'conv_7x1_1x7',\n 'reduce_3_op_y': 'skip_connect',\n 'reduce_4_input_x': 0,\n 'reduce_4_input_y': 5,\n 'reduce_4_op_x': 'conv_7x1_1x7',\n 'reduce_4_op_y': 'conv_7x1_1x7',\n 'reduce_concat': [3, 6]},\n 'dataset': 'cifar10',\n 'generator': 'random',\n 'id': 1,\n 'model_family': 'nas_cell',\n 'model_spec': {'aux': False,\n 'depth': 12,\n 'drop_prob': 0.0,\n 'num_nodes_normal': 5,\n 'num_nodes_reduce': 5,\n 'width': 32},\n 'num_epochs': 100,\n 'proposer': 'amoeba',\n 'weight_decay': 0.0005},\n 'final_test_acc': 93.27,\n 'final_train_acc': 99.91,\n 'final_train_loss': 0.006,\n 'flops': 664.400586,\n 'id': 1,\n 'iter_time': 0.281,\n 'parameters': 4.190314,\n 'seed': 1}\n"
}
],
"source": [
"# count number\n",
"model_spec = {'num_nodes_normal': 5, 'num_nodes_reduce': 5, 'depth': 12, 'width': 32, 'aux': False, 'drop_prob': 0.0}\n",
"cell_spec = {\n",
" 'normal_0_op_x': 'avg_pool_3x3',\n",
" 'normal_0_input_x': 0,\n",
" 'normal_0_op_y': 'conv_7x1_1x7',\n",
" 'normal_0_input_y': 1,\n",
" 'normal_1_op_x': 'sep_conv_3x3',\n",
" 'normal_1_input_x': 2,\n",
" 'normal_1_op_y': 'sep_conv_5x5',\n",
" 'normal_1_input_y': 0,\n",
" 'normal_2_op_x': 'dil_sep_conv_3x3',\n",
" 'normal_2_input_x': 2,\n",
" 'normal_2_op_y': 'dil_sep_conv_3x3',\n",
" 'normal_2_input_y': 2,\n",
" 'normal_3_op_x': 'skip_connect',\n",
" 'normal_3_input_x': 4,\n",
" 'normal_3_op_y': 'dil_sep_conv_3x3',\n",
" 'normal_3_input_y': 4,\n",
" 'normal_4_op_x': 'conv_7x1_1x7',\n",
" 'normal_4_input_x': 2,\n",
" 'normal_4_op_y': 'sep_conv_3x3',\n",
" 'normal_4_input_y': 4,\n",
" 'normal_concat': [3, 5, 6],\n",
" 'reduce_0_op_x': 'avg_pool_3x3',\n",
" 'reduce_0_input_x': 0,\n",
" 'reduce_0_op_y': 'dil_sep_conv_3x3',\n",
" 'reduce_0_input_y': 1,\n",
" 'reduce_1_op_x': 'sep_conv_3x3',\n",
" 'reduce_1_input_x': 0,\n",
" 'reduce_1_op_y': 'sep_conv_3x3',\n",
" 'reduce_1_input_y': 0,\n",
" 'reduce_2_op_x': 'skip_connect',\n",
" 'reduce_2_input_x': 2,\n",
" 'reduce_2_op_y': 'sep_conv_7x7',\n",
" 'reduce_2_input_y': 0,\n",
" 'reduce_3_op_x': 'conv_7x1_1x7',\n",
" 'reduce_3_input_x': 4,\n",
" 'reduce_3_op_y': 'skip_connect',\n",
" 'reduce_3_input_y': 4,\n",
" 'reduce_4_op_x': 'conv_7x1_1x7',\n",
" 'reduce_4_input_x': 0,\n",
" 'reduce_4_op_y': 'conv_7x1_1x7',\n",
" 'reduce_4_input_y': 5,\n",
" 'reduce_concat': [3, 6]\n",
"}\n",
"\n",
"for t in query_nds_trial_stats('nas_cell', None, None, model_spec, cell_spec, 'cifar10'):\n",
" assert t['config']['model_spec'] == model_spec\n",
" assert t['config']['cell_spec'] == cell_spec\n",
" pprint.pprint(t)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "NDS (amoeba) count: 5107\n"
}
],
"source": [
"# count number\n",
"print('NDS (amoeba) count:', len(list(query_nds_trial_stats(None, 'amoeba', None, None, None, None, None))))"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "Elapsed time: 1.9107539653778076 seconds\n"
}
],
"source": [
"print('Elapsed time: ', time.time() - ti, 'seconds')"
]
}
]
}
@@ -105,7 +105,7 @@
 * WebUI refactor: adopt fabric framework
 #### Others
-* Support running [NNI experiment at foreground](https://github.com/microsoft/nni/blob/v1.4/docs/en_US/Tutorial/Nnictl.md#manage-an-experiment), i.e., `--foreground` argument in `nnictl create/resume/view`
+* Support running [NNI experiment at foreground](https://github.com/microsoft/nni/blob/v1.4/docs/en_US/Tutorial/Nnictl#manage-an-experiment), i.e., `--foreground` argument in `nnictl create/resume/view`
 * Support canceling the trials in UNKNOWN state
 * Support large search space whose size could be up to 50mb (thanks external contributor @Sundrops)
@@ -338,7 +338,7 @@
 * NNI running on windows for local mode
 * [New advisor: BOHB](Tuner/BohbAdvisor.md)
 * Support a new advisor BOHB, which is a robust and efficient hyperparameter tuning algorithm, combines the advantages of Bayesian optimization and Hyperband
-* [Support import and export experiment data through nnictl](Tutorial/Nnictl.md#experiment)
+* [Support import and export experiment data through nnictl](Tutorial/Nnictl.md)
 * Generate analysis results report after the experiment execution
 * Support import data to tuner and advisor for tuning
 * [Designated gpu devices for NNI trial jobs](Tutorial/ExperimentConfig.md#localConfig)
...
@@ -86,4 +86,4 @@ A common example of this would be run the mnist example without installing tenso
 As it shows, every trial has a log path, where you can find trial's log and stderr.
-In addition to experiment level debug, NNI also provides the capability for debugging a single trial without the need to start the entire experiment. Refer to [standalone mode](../TrialExample/Trials.md#standalone-mode-for-debug) for more information about debug single trial code.
+In addition to experiment level debug, NNI also provides the capability for debugging a single trial without the need to start the entire experiment. Refer to [standalone mode](../TrialExample/Trials#standalone-mode-for-debugging) for more information about debug single trial code.
\ No newline at end of file
@@ -3,7 +3,7 @@
 ## Overview
-NNI provides a lot of [builtin tuners](../Tuner/BuiltinTuner.md), [advisors](../Tuner/BuiltinTuner.md#Hyperband) and [assessors](../Assessor/BuiltinAssessor.md) can be used directly for Hyper Parameter Optimization, and some extra algorithms can be installed via `nnictl package install --name <name>` after NNI is installed. You can check these extra algorithms via `nnictl package list` command.
+NNI provides a lot of [builtin tuners](../Tuner/BuiltinTuner.md), [advisors](../Tuner/HyperbandAdvisor.md) and [assessors](../Assessor/BuiltinAssessor.md) can be used directly for Hyper Parameter Optimization, and some extra algorithms can be installed via `nnictl package install --name <name>` after NNI is installed. You can check these extra algorithms via `nnictl package list` command.
 NNI also provides the ability to build your own customized tuners, advisors and assessors. To use the customized algorithm, users can simply follow the spec in experiment config file to properly reference the algorithm, which has been illustrated in the tutorials of [customized tuners](../Tuner/CustomizeTuner.md)/[advisors](../Tuner/CustomizeAdvisor.md)/[assessors](../Assessor/CustomizeAssessor.md).
...
@@ -46,6 +46,7 @@ extensions = [
     'sphinxarg.ext',
     'sphinx.ext.napoleon',
     'sphinx.ext.viewcode',
+    'nbsphinx',
 ]

 # Add mock modules
...
@@ -23,4 +23,5 @@ For details, please refer to the following tutorials:
     One-shot NAS <NAS/one_shot_nas>
     Customize a NAS Algorithm <NAS/Advanced>
     NAS Visualization <NAS/Visualization>
+    NAS Benchmarks <NAS/Benchmarks>
     API Reference <NAS/NasReference>
@@ -9,6 +9,8 @@ json_tricks
 numpy
 scipy
 coverage
+peewee
+nbsphinx
 schema
 tensorboard
 scikit-learn==0.20
...
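
# examples/nas/benchmarks/nasbench101.sh: generation script, run inside the
# tensorflow/tensorflow:1.15.2-py3 container (see "Data Preparation" above).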
set -e
mkdir -p /outputs /tmp
echo "Installing dependencies..."
apt update && apt install -y wget git
pip install --no-cache-dir tqdm peewee
echo "Installing NNI..."
cd /nni && echo "y" | source install.sh
cd /tmp
echo "Installing NASBench..."
git clone https://github.com/google-research/nasbench
cd nasbench && pip install -e . && cd ..
echo "Downloading NAS-Bench-101..."
wget https://storage.googleapis.com/nasbench/nasbench_full.tfrecord
echo "Generating database..."
rm -f /outputs/nasbench101.db /outputs/nasbench101.db-journal
NASBENCHMARK_DIR=/outputs python -m nni.nas.benchmarks.nasbench101.db_gen nasbench_full.tfrecord
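
# examples/nas/benchmarks/nasbench201.sh: generation script, run inside the
# ufoym/deepo:pytorch-cpu container (see "Data Preparation" above).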
set -e
mkdir -p /outputs /tmp
echo "Installing dependencies..."
apt update && apt install -y wget
pip uninstall -y enum34 # https://github.com/iterative/dvc/issues/1995
pip install --no-cache-dir gdown tqdm peewee
echo "Installing NNI..."
cd /nni && echo "y" | source install.sh
cd /tmp
echo "Downloading NAS-Bench-201..."
gdown https://drive.google.com/uc\?id\=1OOfVPpt-lA4u2HJrXbgrRd42IbfvJMyE -O a.pth
echo "Generating database..."
rm -f /outputs/nasbench201.db /outputs/nasbench201.db-journal
NASBENCHMARK_DIR=/outputs python -m nni.nas.benchmarks.nasbench201.db_gen a.pth
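
# examples/nas/benchmarks/nds.sh: generation script, run inside the
# python:3.7 container (see "Data Preparation" above).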
set -e
mkdir -p /outputs /tmp
echo "Installing dependencies..."
apt update && apt install -y wget zip
pip install --no-cache-dir tqdm peewee
echo "Installing NNI..."
cd /nni && echo "y" | source install.sh
cd /tmp
echo "Downloading NDS..."
wget https://dl.fbaipublicfiles.com/nds/data.zip -O data.zip
unzip data.zip
echo "Generating database..."
rm -f /outputs/nds.db /outputs/nds.db-journal
NASBENCHMARK_DIR=/outputs python -m nni.nas.benchmarks.nds.db_gen nds_data
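
# nni/nas/benchmarks/constants.py: resolves the database directory at import time.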
import os
DATABASE_DIR = os.environ.get("NASBENCHMARK_DIR", os.path.expanduser("~/.nni/nasbenchmark"))
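
# nni/nas/benchmarks/nasbench101/__init__.py: public surface of the package.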
from .constants import INPUT, OUTPUT, CONV3X3_BN_RELU, CONV1X1_BN_RELU, MAXPOOL3X3
from .model import Nb101TrialStats, Nb101IntermediateStats, Nb101TrialConfig
from .query import query_nb101_trial_stats
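
# nni/nas/benchmarks/nasbench101/constants.py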
INPUT = 'input'
OUTPUT = 'output'
CONV3X3_BN_RELU = 'conv3x3-bn-relu'
CONV1X1_BN_RELU = 'conv1x1-bn-relu'
MAXPOOL3X3 = 'maxpool3x3'
LABEL2ID = {
INPUT: -1,
OUTPUT: -2,
CONV3X3_BN_RELU: 0,
CONV1X1_BN_RELU: 1,
MAXPOOL3X3: 2
}
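
# nni/nas/benchmarks/nasbench101/db_gen.py
# Usage (as in examples/nas/benchmarks/nasbench101.sh):
#   NASBENCHMARK_DIR=/outputs python -m nni.nas.benchmarks.nasbench101.db_gen nasbench_full.tfrecord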
import argparse
from tqdm import tqdm
from nasbench import api # pylint: disable=import-error
from .model import db, Nb101TrialConfig, Nb101TrialStats, Nb101IntermediateStats
from .graph_util import nasbench_format_to_architecture_repr, hash_module
def main():
parser = argparse.ArgumentParser()
parser.add_argument('input_file',
help='Path to the file to be converted, e.g., nasbench_full.tfrecord')
args = parser.parse_args()
nasbench = api.NASBench(args.input_file)
with db:
db.create_tables([Nb101TrialConfig, Nb101TrialStats, Nb101IntermediateStats])
for hashval in tqdm(nasbench.hash_iterator(), desc='Dumping data into database'):
metadata, metrics = nasbench.get_metrics_from_hash(hashval)
num_vertices, architecture = nasbench_format_to_architecture_repr(
metadata['module_adjacency'], metadata['module_operations'])
assert hashval == hash_module(architecture, num_vertices)
for epochs in [4, 12, 36, 108]:
trial_config = Nb101TrialConfig.create(
arch=architecture,
num_vertices=num_vertices,
hash=hashval,
num_epochs=epochs
)
for seed in range(3):
cur = metrics[epochs][seed]
trial = Nb101TrialStats.create(
config=trial_config,
train_acc=cur['final_train_accuracy'] * 100,
valid_acc=cur['final_validation_accuracy'] * 100,
test_acc=cur['final_test_accuracy'] * 100,
parameters=metadata['trainable_parameters'] / 1e6,
training_time=cur['final_training_time'] * 60
)
for t in ['halfway', 'final']:
Nb101IntermediateStats.create(
trial=trial,
current_epoch=epochs // 2 if t == 'halfway' else epochs,
training_time=cur[t + '_training_time'],
train_acc=cur[t + '_train_accuracy'] * 100,
valid_acc=cur[t + '_validation_accuracy'] * 100,
test_acc=cur[t + '_test_accuracy'] * 100
)
if __name__ == '__main__':
main()
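
# nni/nas/benchmarks/nasbench101/graph_util.py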
import hashlib
import numpy as np
from .constants import INPUT, LABEL2ID, OUTPUT
def _labeling_from_architecture(architecture, vertices):
return [INPUT] + [architecture['op{}'.format(i)] for i in range(1, vertices - 1)] + [OUTPUT]
def _adjacency_matrix_from_architecture(architecture, vertices):
    matrix = np.zeros((vertices, vertices), dtype=bool)
for i in range(1, vertices):
for k in architecture['input{}'.format(i)]:
matrix[k, i] = 1
return matrix
def nasbench_format_to_architecture_repr(adjacency_matrix, labeling):
    """
    Convert an adjacency matrix and a labeling in NAS-Bench-101 format
    into the architecture repr used by NNI. Adapted from the NAS-Bench-101 repo.
    Parameters
    ----------
    adjacency_matrix : np.ndarray
        A 2D array of shape NxN, where N is the number of vertices.
        ``matrix[u][v]`` is 1 if there is a direct edge from `u` to `v`,
        otherwise it will be 0.
    labeling : list of str
        A list of str that starts with input and ends with output. The intermediate
        nodes are chosen from candidate operators.
    Returns
    -------
    tuple of int and dict
        Number of vertices and the converted architecture.
    """
num_vertices = adjacency_matrix.shape[0]
assert len(labeling) == num_vertices
architecture = {}
for i in range(1, num_vertices - 1):
architecture['op{}'.format(i)] = labeling[i]
assert labeling[i] not in [INPUT, OUTPUT]
for i in range(1, num_vertices):
architecture['input{}'.format(i)] = [k for k in range(i) if adjacency_matrix[k, i]]
return num_vertices, architecture
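# Example (a sketch; inputs follow the format documented above):
#   matrix = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
#   labeling = [INPUT, CONV3X3_BN_RELU, OUTPUT]
#   nasbench_format_to_architecture_repr(matrix, labeling)
#   # -> (3, {'op1': 'conv3x3-bn-relu', 'input1': [0], 'input2': [1]})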
def infer_num_vertices(architecture):
"""
Infer number of vertices from an architecture dict.
Parameters
----------
architecture : dict
Architecture in NNI format.
Returns
-------
int
Number of vertices.
"""
op_keys = set([k for k in architecture.keys() if k.startswith('op')])
intermediate_vertices = len(op_keys)
assert op_keys == {'op{}'.format(i) for i in range(1, intermediate_vertices + 1)}
return intermediate_vertices + 2
def hash_module(architecture, vertices):
    """
    Computes a graph-invariant MD5 hash of the matrix and label pair.
    This snippet is modified from code in the NAS-Bench-101 repo.
    Parameters
    ----------
    architecture : dict
        Architecture in NNI format, as described in ``Nb101TrialConfig``.
    vertices : int
        Number of vertices in the architecture.
    Returns
    -------
    str
        MD5 hash of the adjacency matrix and labeling derived from the architecture.
    """
labeling = _labeling_from_architecture(architecture, vertices)
labeling = [LABEL2ID[t] for t in labeling]
    matrix = _adjacency_matrix_from_architecture(architecture, vertices)
in_edges = np.sum(matrix, axis=0).tolist()
out_edges = np.sum(matrix, axis=1).tolist()
assert len(in_edges) == len(out_edges) == len(labeling)
hashes = list(zip(out_edges, in_edges, labeling))
hashes = [hashlib.md5(str(h).encode('utf-8')).hexdigest() for h in hashes]
# Computing this up to the diameter is probably sufficient but since the
# operation is fast, it is okay to repeat more times.
for _ in range(vertices):
new_hashes = []
for v in range(vertices):
in_neighbors = [hashes[w] for w in range(vertices) if matrix[w, v]]
out_neighbors = [hashes[w] for w in range(vertices) if matrix[v, w]]
new_hashes.append(hashlib.md5(
(''.join(sorted(in_neighbors)) + '|' +
''.join(sorted(out_neighbors)) + '|' +
hashes[v]).encode('utf-8')).hexdigest())
hashes = new_hashes
fingerprint = hashlib.md5(str(sorted(hashes)).encode('utf-8')).hexdigest()
return fingerprint
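# Example (a sketch): the fingerprint is graph-invariant, so architectures that
# differ only in vertex numbering hash identically; query_nb101_trial_stats
# relies on this when isomorphism=True.
#   hashval = hash_module(architecture, num_vertices)  # 32-char hex digest

# nni/nas/benchmarks/nasbench101/model.py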
import os
from peewee import CharField, FloatField, ForeignKeyField, IntegerField, Model
from playhouse.sqlite_ext import JSONField, SqliteExtDatabase
from nni.nas.benchmarks.constants import DATABASE_DIR
db = SqliteExtDatabase(os.path.join(DATABASE_DIR, 'nasbench101.db'), autoconnect=True)
class Nb101TrialConfig(Model):
"""
Trial config for NAS-Bench-101.
Attributes
----------
arch : dict
        A dict with keys ``op1``, ``op2``, ... and ``input1``, ``input2``, ... Vertices are
        enumerated from 0. Since vertex 0 is the input vertex, it is skipped in this dict. Each ``op``
        is one of :const:`nni.nas.benchmarks.nasbench101.CONV3X3_BN_RELU`,
        :const:`nni.nas.benchmarks.nasbench101.CONV1X1_BN_RELU`, and :const:`nni.nas.benchmarks.nasbench101.MAXPOOL3X3`.
        Each ``input`` is a list of previous vertices (e.g., ``input5`` can be ``[0, 1, 3]``).
num_vertices : int
Number of vertices (nodes) in one cell. Should be less than or equal to 7 in default setup.
hash : str
Graph-invariant MD5 string for this architecture.
num_epochs : int
Number of epochs planned for this trial. Should be one of 4, 12, 36, 108 in default setup.
"""
arch = JSONField(index=True)
num_vertices = IntegerField(index=True)
hash = CharField(max_length=64, index=True)
num_epochs = IntegerField(index=True)
class Meta:
database = db
class Nb101TrialStats(Model):
"""
Computation statistics for NAS-Bench-101. Each corresponds to one trial.
    Each config has multiple trials with different random seeds, but unfortunately the seed for each trial is unavailable.
NAS-Bench-101 trains and evaluates on CIFAR-10 by default. The original training set is divided into
40k training images and 10k validation images, and the original validation set is used for test only.
Attributes
----------
config : Nb101TrialConfig
Setup for this trial data.
train_acc : float
Final accuracy on training data, ranging from 0 to 100.
valid_acc : float
Final accuracy on validation data, ranging from 0 to 100.
test_acc : float
Final accuracy on test data, ranging from 0 to 100.
parameters : float
Number of trainable parameters in million.
training_time : float
Duration of training in seconds.
"""
config = ForeignKeyField(Nb101TrialConfig, backref='trial_stats', index=True)
train_acc = FloatField()
valid_acc = FloatField()
test_acc = FloatField()
parameters = FloatField()
training_time = FloatField()
class Meta:
database = db
class Nb101IntermediateStats(Model):
"""
Intermediate statistics for NAS-Bench-101.
Attributes
----------
trial : Nb101TrialStats
The exact trial where the intermediate result is produced.
current_epoch : int
Elapsed epochs when evaluation is done.
train_acc : float
Intermediate accuracy on training data, ranging from 0 to 100.
valid_acc : float
Intermediate accuracy on validation data, ranging from 0 to 100.
test_acc : float
Intermediate accuracy on test data, ranging from 0 to 100.
training_time : float
Time elapsed in seconds.
"""
trial = ForeignKeyField(Nb101TrialStats, backref='intermediates', index=True)
current_epoch = IntegerField(index=True)
train_acc = FloatField()
valid_acc = FloatField()
test_acc = FloatField()
training_time = FloatField()
class Meta:
database = db
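
# nni/nas/benchmarks/nasbench101/query.py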
import functools
from peewee import fn
from playhouse.shortcuts import model_to_dict
from .model import Nb101TrialStats, Nb101TrialConfig
from .graph_util import hash_module, infer_num_vertices
def query_nb101_trial_stats(arch, num_epochs, isomorphism=True, reduction=None):
"""
Query trial stats of NAS-Bench-101 given conditions.
Parameters
----------
    arch : dict or None
        If a dict, it is in the format that is described in
        :class:`nni.nas.benchmarks.nasbench101.Nb101TrialConfig`. Only matching trial
        stats will be returned. If None, architecture is a wildcard.
    num_epochs : int or None
        If int, matching results will be returned. Otherwise a wildcard.
    isomorphism : bool
        Whether to match essentially-same architectures, i.e., architectures with the
        same graph-invariant hash value.
    reduction : str or None
        If 'none' or None, all trial stats will be returned directly.
        If 'mean', fields in trial stats will be averaged given the same trial config.
    Returns
    -------
    generator of dict
        A generator of :class:`nni.nas.benchmarks.nasbench101.Nb101TrialStats` objects,
        where each of them has been converted into a dict.
"""
fields = []
if reduction == 'none':
reduction = None
if reduction == 'mean':
for field_name in Nb101TrialStats._meta.sorted_field_names:
if field_name not in ['id', 'config']:
fields.append(fn.AVG(getattr(Nb101TrialStats, field_name)).alias(field_name))
elif reduction is None:
fields.append(Nb101TrialStats)
else:
raise ValueError('Unsupported reduction: \'%s\'' % reduction)
query = Nb101TrialStats.select(*fields, Nb101TrialConfig).join(Nb101TrialConfig)
conditions = []
if arch is not None:
if isomorphism:
num_vertices = infer_num_vertices(arch)
conditions.append(Nb101TrialConfig.hash == hash_module(arch, num_vertices))
else:
conditions.append(Nb101TrialConfig.arch == arch)
if num_epochs is not None:
conditions.append(Nb101TrialConfig.num_epochs == num_epochs)
if conditions:
query = query.where(functools.reduce(lambda a, b: a & b, conditions))
if reduction is not None:
query = query.group_by(Nb101TrialStats.config)
for k in query:
yield model_to_dict(k)
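# Example (a sketch): average over the repeated trials of a single config
# instead of yielding each one.
#   for stats in query_nb101_trial_stats(arch, 108, reduction='mean'):
#       print(stats['test_acc'])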