Unverified commit bf8be1e7 authored by Yuge Zhang, committed by GitHub

Merge pull request #2837 from microsoft/v1.8

Merge v1.8 back to master
parents 320407b1 e06a9dda
@@ -25,7 +25,7 @@ The tool manages automated machine learning (AutoML) experiments, **dispatches a
* Researchers and data scientists who want to easily **implement and experiment with new AutoML algorithms**, be it a hyperparameter tuning algorithm, a neural architecture search algorithm or a model compression algorithm.
* ML Platform owners who want to **support AutoML in their platform**.
### **[NNI v1.8 has been released!](https://github.com/microsoft/nni/releases) &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>**
## **NNI capabilities in a glance**
@@ -246,7 +246,7 @@ The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is
* Download the examples by cloning the source code.
```bash
git clone -b v1.8 https://github.com/Microsoft/nni.git
```
* Run the MNIST example. A typical `nnictl` command is sketched below.
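A minimal sketch of launching the example, assuming the default configuration of the TensorFlow 1.x MNIST trial in the cloned v1.8 source tree:

```bash
# Start an experiment from the example's default configuration file.
nnictl create --config nni/examples/trials/mnist-tfv1/config.yml
```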
...
@@ -44,6 +44,7 @@ $env:PATH = $NNI_NODE_FOLDER+';'+$env:PATH
cd $CWD\..\..\src\nni_manager
yarn
yarn build
Copy-Item config -Destination .\dist\ -Recurse -Force
cd $CWD\..\..\src\webui
yarn
yarn build
...
@@ -60,7 +60,8 @@ From the experiment result, we get the following conclusions:
* The experiment results are all collected with the default configuration of the pruners in NNI, which means that when we call a pruner class we do not change any of its default arguments.
* Both FLOPs and the number of parameters are counted with [Model FLOPs/Parameters Counter](https://github.com/microsoft/nni/tree/master/docs/en_US/Compressor/CompressionUtils.md#model-flopsparameters-counter) after [model speed up](https://github.com/microsoft/nni/tree/master/docs/en_US/Compressor/ModelSpeedup.md).
This avoids potential issues of counting them on masked models.
* The experiment code can be found [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/auto_pruners_torch.py).
@@ -75,8 +76,8 @@ From the experiment result, we get the following conclusions:
}
```
* The experiment results are saved [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/comparison_of_pruners).
You can refer to [analyze](https://github.com/microsoft/nni/tree/master/examples/model_compress/comparison_of_pruners/analyze.py) to plot new performance comparison figures.
## Contribution
...
@@ -42,7 +42,7 @@ Pruning algorithms compress the original network by removing redundant weights o
| [SimulatedAnnealing Pruner](https://nni.readthedocs.io/en/latest/Compressor/Pruner.html#simulatedannealing-pruner) | Automatic pruning with a guided heuristic search method, the Simulated Annealing algorithm [Reference Paper](https://arxiv.org/abs/1907.03141) |
| [AutoCompress Pruner](https://nni.readthedocs.io/en/latest/Compressor/Pruner.html#autocompress-pruner) | Automatic pruning by iteratively calling SimulatedAnnealing Pruner and ADMM Pruner [Reference Paper](https://arxiv.org/abs/1907.03141) |
You can refer to this [benchmark](https://github.com/microsoft/nni/tree/master/docs/en_US/CommunitySharings/ModelCompressionComparison.md) for the performance of these pruners on some benchmark problems.
### Quantization Algorithms
...
# NAS Benchmarks
```eval_rst
.. toctree::
@@ -8,12 +8,13 @@
```
## Introduction
To improve the reproducibility of NAS algorithms and reduce computing resource requirements, researchers have proposed a series of NAS benchmarks such as [NAS-Bench-101](https://arxiv.org/abs/1902.09635), [NAS-Bench-201](https://arxiv.org/abs/2001.00326), [NDS](https://arxiv.org/abs/1905.13214), etc. NNI provides a query interface for users to acquire these benchmarks. With just a few lines of code, researchers are able to evaluate their NAS algorithms easily and fairly by utilizing these benchmarks.
## Prerequisites
* Please prepare a folder to hold all the benchmark databases. By default, it can be found at `${HOME}/.nni/nasbenchmark`. You can place it anywhere you like, and specify it in `NASBENCHMARK_DIR` via `export NASBENCHMARK_DIR=/path/to/your/nasbenchmark` before importing NNI.
* Please install `peewee` via `pip3 install peewee`, which NNI uses to connect to the database. A combined setup sketch is given below.
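A minimal setup sketch combining the two prerequisites above; the folder path is only an illustrative example:

```bash
# Optional: point NNI at a custom benchmark folder (defaults to ${HOME}/.nni/nasbenchmark).
export NASBENCHMARK_DIR=/data/nasbenchmark
# peewee is required by NNI to read the benchmark databases.
pip3 install peewee
```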
## Data Preparation
@@ -24,7 +25,7 @@ To avoid storage and legality issues, we do not provide any prepared databases.
git clone -b ${NNI_VERSION} https://github.com/microsoft/nni
cd nni/examples/nas/benchmarks
```
Replace `${NNI_VERSION}` with a released version name or branch name, e.g., `v1.8`.
2. Install dependencies via `pip3 install -r xxx.requirements.txt`. `xxx` can be `nasbench101`, `nasbench201` or `nds`.
3. Generate the database via `./xxx.sh`. The directory that stores the benchmark file can be configured with the `NASBENCHMARK_DIR` environment variable, which defaults to `~/.nni/nasbenchmark`. Note that the NAS-Bench-201 dataset will be downloaded from Google Drive. A combined example of these steps is sketched below.
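For concreteness, here is a sketch of the whole flow for NAS-Bench-201 on the v1.8 branch; replace `nasbench201` with `nasbench101` or `nds` for the other benchmarks:

```bash
# Clone the release branch and enter the benchmark scripts folder.
git clone -b v1.8 https://github.com/microsoft/nni
cd nni/examples/nas/benchmarks
# Install the dependencies for this particular benchmark.
pip3 install -r nasbench201.requirements.txt
# Download the raw data and generate the database under ${NASBENCHMARK_DIR}.
./nasbench201.sh
```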
...
# ChangeLog
# Release 1.8 - 8/27/2020
## Major updates
### Training service
* Access trial log directly on WebUI (local mode only) (#2718)
* Add OpenPAI trial job detail link (#2703)
* Support GPU scheduler in reusable environment (#2627) (#2769)
* Add timeout for `web_channel` in `trial_runner` (#2710)
* Show environment error message in AzureML mode (#2724)
* Add more log information when copying data in OpenPAI mode (#2702)
### WebUI, nnictl and nnicli
* Improve hyper-parameter parallel coordinates plot (#2691) (#2759)
* Add pagination for trial job list (#2738) (#2773)
* Enable panel close when clicking overlay region (#2734)
* Remove support for Multiphase on WebUI (#2760)
* Support save and restore experiments (#2750)
* Add intermediate results in export result (#2706)
* Add [command](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/Tutorial/Nnictl.md#nnictl-trial) to list trial results with highest/lowest metrics (#2747)
* Improve the user experience of [nnicli](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/nnicli_ref.md) with [examples](https://github.com/microsoft/nni/blob/v1.8/examples/notebooks/retrieve_nni_info_with_python.ipynb) (#2713)
### Neural architecture search
* [Search space zoo: ENAS and DARTS](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/NAS/SearchSpaceZoo.md) (#2589)
* API to query intermediate results in NAS benchmark (#2728)
### Model compression
* Support the List/Tuple Construct/Unpack operation for TorchModuleGraph (#2609)
* Model speedup improvement: Add support of DenseNet and InceptionV3 (#2719)
* Support the multiple successive tuple unpack operations (#2768)
* [Doc of comparing the performance of supported pruners](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/CommunitySharings/ModelCompressionComparison.md) (#2742)
* New pruners: [Sensitivity pruner](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/Compressor/Pruner.md#sensitivity-pruner) (#2684) and [AMC pruner](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/Compressor/Pruner.md) (#2573) (#2786)
* TensorFlow v2 support in model compression (#2755)
### Backward incompatible changes
* Update the default experiment folder from `$HOME/nni/experiments` to `$HOME/nni-experiments`. If you want to view the experiments created by previous NNI releases, you can move the experiments folders from `$HOME/nni/experiments` to `$HOME/nni-experiments` manually; a one-line sketch follows this list. (#2686) (#2753)
* Dropped support for Python 3.5 and scikit-learn 0.20 (#2778) (#2777) (#2783) (#2787) (#2788) (#2790)
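If you need to keep old experiments visible after upgrading, a minimal sketch of the manual move mentioned above (assuming both paths are on the same filesystem):

```bash
# Move previously created experiments into the new default folder.
mkdir -p "$HOME/nni-experiments"
mv "$HOME/nni/experiments/"* "$HOME/nni-experiments/"
```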
### Others
* Upgrade TensorFlow version in Docker image (#2732) (#2735) (#2720)
## Examples
* Remove gpuNum in assessor examples (#2641)
## Documentation
* Improve customized tuner documentation (#2628)
* Fix several typos and grammar mistakes in documentation (#2637 #2638, thanks @tomzx)
* Improve AzureML training service documentation (#2631)
* Improve CI of Chinese translation (#2654)
* Improve OpenPAI training service documentation (#2685)
* Improve documentation of community sharing (#2640)
* Add tutorial of Colab support (#2700)
* Improve documentation structure for model compression (#2676)
## Bug fixes
* Fix mkdir error in training service (#2673)
* Fix bug when using chmod in remote training service (#2689)
* Fix dependency issue by making `_graph_utils` imported inline (#2675)
* Fix mask issue in `SimulatedAnnealingPruner` (#2736)
* Fix intermediate graph zooming issue (#2738)
* Fix issue where dict was unordered when querying NAS benchmark (#2728)
* Fix import issue for gradient selector dataloader iterator (#2690)
* Fix support of adding tens of machines in remote training service (#2725)
* Fix several styling issues in WebUI (#2762 #2737)
* Fix support of unusual types in metrics including NaN and Infinity (#2782)
* Fix nnictl experiment delete (#2791)
# Release 1.7 - 7/8/2020
## Major Features
...
@@ -89,4 +89,4 @@ cd nni/examples/trials/mnist-tfv1
nnictl create --config config_aml.yml
```
Replace `${NNI_VERSION}` with a released version name or branch name, e.g., `v1.8`.
@@ -19,7 +19,7 @@ Installation on Linux and macOS follows the same instructions, given below.
Prerequisites: `python 64-bit >=3.6`, `git`, `wget`
```bash
git clone -b v1.8 https://github.com/Microsoft/nni.git
cd nni
./install.sh
```
@@ -35,7 +35,7 @@ The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is
* Download the examples by cloning the source code.
```bash
git clone -b v1.8 https://github.com/Microsoft/nni.git
```
* Run the MNIST example.
...
@@ -29,7 +29,7 @@ If you want to contribute to NNI, refer to [setup development environment](Setup
* From source code
```bat
git clone -b v1.8 https://github.com/Microsoft/nni.git
cd nni
powershell -ExecutionPolicy Bypass -file install.ps1
```
@@ -41,7 +41,7 @@ The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is
* Clone the examples from the source code.
```bat
git clone -b v1.8 https://github.com/Microsoft/nni.git
```
* Run the MNIST example.
...
@@ -29,7 +29,7 @@ author = 'Microsoft'
# The short X.Y version
version = ''
# The full version, including alpha/beta/rc tags
release = 'v1.8'
# -- General configuration ---------------------------------------------------
...
@@ -28,21 +28,31 @@ def get_dataset(dataset_name='mnist'):

def create_model(model_name='naive'):
    assert model_name == 'naive'
    return NaiveModel()


class NaiveModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.seq_layers = [
            tf.keras.layers.Conv2D(filters=20, kernel_size=5),
            tf.keras.layers.BatchNormalization(),
            tf.keras.layers.ReLU(),
            tf.keras.layers.MaxPool2D(pool_size=2),
            tf.keras.layers.Conv2D(filters=20, kernel_size=5),
            tf.keras.layers.BatchNormalization(),
            tf.keras.layers.ReLU(),
            tf.keras.layers.MaxPool2D(pool_size=2),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(units=500),
            tf.keras.layers.ReLU(),
            tf.keras.layers.Dense(units=10),
            tf.keras.layers.Softmax()
        ]

    def call(self, x):
        for layer in self.seq_layers:
            x = layer(x)
        return x


def create_pruner(model, pruner_name):
@@ -55,20 +65,40 @@ def main(args):
    model_name = prune_config[args.pruner_name]['model_name']
    dataset_name = prune_config[args.pruner_name]['dataset_name']
    train_set, test_set = get_dataset(dataset_name)
    model = create_model(model_name)

    print('start training')
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9, decay=1e-4)
    model.compile(
        optimizer=optimizer,
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy']
    )
    model.fit(
        train_set[0],
        train_set[1],
        batch_size=args.batch_size,
        epochs=args.pretrain_epochs,
        validation_data=test_set
    )

    print('start model pruning')
    optimizer_finetune = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, decay=1e-4)
    pruner = create_pruner(model, args.pruner_name)
    model = pruner.compress()
    model.compile(
        optimizer=optimizer_finetune,
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'],
        run_eagerly=True  # NOTE: Important, model compression does not work in graph mode!
    )
    model.fit(
        train_set[0],
        train_set[1],
        batch_size=args.batch_size,
        epochs=args.prune_epochs,
        validation_data=test_set
    )


if __name__ == '__main__':
...
@@ -53,7 +53,7 @@ class MobileNet(nn.Module):
    def forward(self, x):
        x = self.conv1(x)
        x = self.features(x)
        x = x.mean([2, 3])  # global average pooling
        x = self.classifier(x)
        return x
...
@@ -108,7 +108,10 @@ class MobileNetV2(nn.Module):
    def forward(self, x):
        x = self.features(x)
        # This is the same as .mean(3).mean(2), but model speedup only supports
        # the mean variant whose output has two dimensions.
        x = x.mean([2, 3])
        x = self.classifier(x)
        return x
...
@@ -15,5 +15,5 @@ fi
echo "Generating database..."
rm -f ${NASBENCHMARK_DIR}/nasbench101.db ${NASBENCHMARK_DIR}/nasbench101.db-journal
mkdir -p ${NASBENCHMARK_DIR}
python3 -m nni.nas.benchmarks.nasbench101.db_gen nasbench_full.tfrecord
rm -f nasbench_full.tfrecord
@@ -15,5 +15,5 @@ fi
echo "Generating database..."
rm -f ${NASBENCHMARK_DIR}/nasbench201.db ${NASBENCHMARK_DIR}/nasbench201.db-journal
mkdir -p ${NASBENCHMARK_DIR}
python3 -m nni.nas.benchmarks.nasbench201.db_gen a.pth
rm -f a.pth
@@ -16,5 +16,5 @@ unzip data.zip
echo "Generating database..."
rm -f ${NASBENCHMARK_DIR}/nds.db ${NASBENCHMARK_DIR}/nds.db-journal
mkdir -p ${NASBENCHMARK_DIR}
python3 -m nni.nas.benchmarks.nds.db_gen nds_data
rm -rf data.zip nds_data
@@ -14,7 +14,7 @@ from nni.nas.pytorch.darts import DartsTrainer
from utils import accuracy
from nni.nas.pytorch.search_space_zoo import DartsCell
from darts_stack_cells import DartsStackedCells

logger = logging.getLogger('nni')
...
@@ -2,7 +2,7 @@
# Licensed under the MIT license.
import torch.nn as nn
from nni.nas.pytorch.search_space_zoo.darts_ops import DropPath

class DartsStackedCells(nn.Module):
@@ -79,5 +79,5 @@ class DartsStackedCells(nn.Module):
    def drop_path_prob(self, p):
        for module in self.modules():
            if isinstance(module, DropPath):
                module.p = p
@@ -58,7 +58,6 @@ if __name__ == "__main__":
    parser = ArgumentParser("enas")
    parser.add_argument("--batch-size", default=128, type=int)
    parser.add_argument("--log-frequency", default=10, type=int)
    parser.add_argument("--epochs", default=None, type=int, help="Number of epochs (default: macro 310, micro 150)")
    parser.add_argument("--visualization", default=False, action="store_true")
    args = parser.parse_args()
@@ -71,7 +70,6 @@ if __name__ == "__main__":
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), 0.05, momentum=0.9, weight_decay=1.0E-4)
    lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs, eta_min=0.001)
    trainer = enas.EnasTrainer(model,
                               loss=criterion,
                               metrics=accuracy,
...
@@ -62,7 +62,7 @@ class MicroNetwork(nn.Module):
            reduction = False
            if layer_id in pool_layers:
                c_cur, reduction = c_p * 2, True
            self.layers.append(ENASMicroLayer(num_nodes, c_pp, c_p, c_cur, reduction))
            if reduction:
                c_pp = c_p = c_cur
            c_pp, c_p = c_p, c_cur
@@ -98,7 +98,6 @@ if __name__ == "__main__":
    parser = ArgumentParser("enas")
    parser.add_argument("--batch-size", default=128, type=int)
    parser.add_argument("--log-frequency", default=10, type=int)
    parser.add_argument("--epochs", default=None, type=int, help="Number of epochs (default: macro 310, micro 150)")
    parser.add_argument("--visualization", default=False, action="store_true")
    args = parser.parse_args()
...