Unverified Commit cef9babd authored by Yuge Zhang, committed by GitHub

[Doc] NAS (#4584)

parent ad5aff39
.. c9ab8a1f91c587ad72d66b6c43e06528
One-shot NAS
============
One-shot NAS algorithms leverage weight sharing among models in the search space to train a supernet, and use the supernet to guide the selection of better models. Compared with algorithms that train every model from scratch (what we call "Multi-trial NAS"), this type of algorithm greatly reduces the computational resources required. NNI supports the popular One-shot NAS algorithms listed below.
.. toctree::
   :maxdepth: 1

   Run One-shot NAS <OneshotTrainer>
   ENAS <ENAS>
   DARTS <DARTS>
   SPOS <SPOS>
   ProxylessNAS <Proxylessnas>
   FBNet <FBNet>
   Customize One-shot NAS <WriteOneshot>
\ No newline at end of file
@@ -50,6 +50,7 @@ extensions = [
    'sphinx.ext.napoleon',
    'sphinx.ext.viewcode',
    'sphinx.ext.intersphinx',
    'sphinxcontrib.bibtex',
    # 'nbsphinx', # nbsphinx has conflicts with sphinx-gallery.
    'sphinx.ext.extlinks',
    'IPython.sphinxext.ipython_console_highlighting',
@@ -63,6 +64,15 @@ extensions = [
# Add mock modules
autodoc_mock_imports = ['apex', 'nni_node', 'tensorrt', 'pycuda', 'nn_meter']
# Bibliography files
bibtex_bibfiles = ['refs.bib']
# Add a heading to bibliography
bibtex_footbibliography_header = '.. rubric:: Bibliography'
# Set bibliography style
bibtex_default_style = 'plain'
# Sphinx gallery examples
sphinx_gallery_conf = {
    'examples_dirs': '../../examples/tutorials',  # path to your example scripts
...
@@ -22,7 +22,7 @@ Neural Network Intelligence
   Overview
   Auto (Hyper-parameter) Tuning <hyperparameter_tune>
   Neural Architecture Search <nas/index>
   Model Compression <model_compression>
   Feature Engineering <feature_engineering>
   Experiment <experiment/overview>
...
.. a533c818ceeb6667df82045885d5ccb5
###########################
Neural Network Intelligence
@@ -15,7 +15,7 @@ Neural Network Intelligence
   Get Started <Tutorial/QuickStart>
   Tutorials <tutorials>
   Auto (Hyper-parameter) Tuning <hyperparameter_tune>
   Neural Architecture Search <nas/index>
   Model Compression <model_compression>
   Feature Engineering <feature_engineering>
   NNI Experiment <experiment/overview>
...
#############################################
Retiarii for Neural Architecture Search (NAS)
#############################################
Automatic neural architecture search is playing an increasingly important role in finding better models. Recent research has proved the feasibility of automatic NAS and has found models that beat manually tuned ones. Representative works include NASNet, ENAS, DARTS, Network Morphism, and Evolution, and new innovations keep emerging.

However, it takes great effort to implement NAS algorithms, and it is hard to reuse the code base of existing algorithms in a new one. To facilitate NAS innovations (e.g., designing and implementing new NAS models, comparing different NAS models side by side), an easy-to-use and flexible programming interface is crucial.

Thus, we designed `Retiarii <https://www.usenix.org/system/files/osdi20-zhang_quanlu.pdf>`__, a deep learning framework that supports exploratory training on a neural network model space, rather than on a single neural network model. Exploratory training with Retiarii allows users to express various search spaces for *Neural Architecture Search* and *Hyper-Parameter Tuning* with high flexibility.
Some frequently used terminologies in this document:
* *Model search space*: a set of models from which the best model is explored/searched. Sometimes we use *search space* or *model space* for short.
* *Exploration strategy*: the algorithm that is used to explore a model search space.
* *Model evaluator*: it is used to train a model and evaluate the model's performance.
Follow the instructions below to start your journey with Retiarii.
.. toctree::
   :maxdepth: 2

   Overview <NAS/Overview>
   Quick Start <NAS/QuickStart>
   Construct Model Space <NAS/construct_space>
   Multi-trial NAS <NAS/multi_trial_nas>
   One-shot NAS <NAS/one_shot_nas>
   Hardware-aware NAS <NAS/HardwareAwareNAS>
   NAS Benchmarks <NAS/Benchmarks>
   NAS API References <NAS/ApiReference>
Advanced Usage
==============
.. toctree::
   :maxdepth: 2

   execution_engine
   hardware_aware_nas
   mutator
   customize_strategy
   serialization
   benchmarks
NAS Benchmarks
==============

.. toctree::
   :hidden:

   Example usage of NAS benchmarks </tutorials/nasbench_as_dataset>

.. note:: :doc:`Example usage of NAS benchmarks </tutorials/nasbench_as_dataset>`.
Introduction
------------
To improve the reproducibility of NAS algorithms and reduce computing resource requirements, researchers have proposed a series of NAS benchmarks such as `NAS-Bench-101 <https://arxiv.org/abs/1902.09635>`__, `NAS-Bench-201 <https://arxiv.org/abs/2001.00326>`__, `NDS <https://arxiv.org/abs/1905.13214>`__, etc. NNI provides a query interface for users to acquire these benchmarks. Within just a few lines of code, researchers are able to evaluate their NAS algorithms easily and fairly by utilizing these benchmarks.
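For a flavor of the interface, below is a minimal query sketch (the NAS-Bench-201 architecture encoding is illustrative; see the example usages referenced below for the full tutorial):

.. code-block:: python

    from nni.nas.benchmarks.nasbench201 import query_nb201_trial_stats

    # a NAS-Bench-201 cell: one operator on each edge `i_j` from node i to node j
    arch = {'0_1': 'avg_pool_3x3', '0_2': 'conv_1x1', '1_2': 'skip_connect',
            '0_3': 'conv_1x1', '1_3': 'skip_connect', '2_3': 'skip_connect'}
    for trial in query_nb201_trial_stats(arch, 200, 'cifar100'):
        print(trial)  # statistics of one training run of this architecture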
Prerequisites
-------------

* Please prepare a folder to house all the benchmark databases. By default, it can be found at ``${HOME}/.cache/nni/nasbenchmark``. Or you can place it anywhere you like and specify it in ``NASBENCHMARK_DIR`` via ``export NASBENCHMARK_DIR=/path/to/your/nasbenchmark`` before importing NNI.
* Please install ``peewee`` via ``pip3 install peewee``, which NNI uses to connect to the database.

Data Preparation
----------------
@@ -33,8 +33,7 @@ Option 2
.. note:: If you have files that were processed before v2.5, it is recommended that you delete them and try option 1.
#. Clone NNI to your machine and enter the ``examples/nas/benchmarks`` directory.
.. code-block:: bash
@@ -43,8 +42,7 @@ Option 2
Replace ``${NNI_VERSION}`` with a released version name or branch name, e.g., ``v2.4``.
#. Install dependencies via ``pip3 install -r xxx.requirements.txt``. ``xxx`` can be ``nasbench101``, ``nasbench201`` or ``nds``.
#. Generate the database via ``./xxx.sh``. The directory that stores the benchmark file can be configured with the ``NASBENCHMARK_DIR`` environment variable, which defaults to ``~/.nni/nasbenchmark``. Note that the NAS-Bench-201 dataset will be downloaded from Google Drive.
@@ -53,7 +51,7 @@ Please make sure there is at least 10GB free disk space and note that the conver
Example Usages
--------------

Please refer to `example usages of the Benchmarks API <../BenchmarksExample.rst>`__.
NAS-Bench-101
-------------
@@ -61,36 +59,16 @@ NAS-Bench-101

* `Paper link <https://arxiv.org/abs/1902.09635>`__
* `Open-source <https://github.com/google-research/nasbench>`__
NAS-Bench-101 contains 423,624 unique neural networks, combined with 4 variations in the number of epochs (4, 12, 36, 108), each of which is trained 3 times. It is a cell-wise search space, which constructs and stacks a cell by enumerating DAGs with at most 7 operators and no more than 9 connections. All operators can be chosen from ``CONV3X3_BN_RELU``, ``CONV1X1_BN_RELU`` and ``MAXPOOL3X3``, except the first operator (always ``INPUT``) and the last operator (always ``OUTPUT``).

Notably, NAS-Bench-101 eliminates invalid cells (e.g., cells where there is no path from input to output, or where there is redundant computation). Furthermore, isomorphic cells are de-duplicated, i.e., all the remaining cells are computationally unique.
API Documentation
^^^^^^^^^^^^^^^^^

.. automodule:: nni.nas.benchmarks.nasbench101
   :members:
   :imported-members:
NAS-Bench-201
-------------
@@ -99,28 +77,14 @@ NAS-Bench-201

* `Open-source API <https://github.com/D-X-Y/NAS-Bench-201>`__
* `Implementations <https://github.com/D-X-Y/AutoDL-Projects>`__
NAS-Bench-201 is a cell-wise search space that views nodes as tensors and edges as operators. The search space contains all possible densely-connected DAGs with 4 nodes, resulting in 15,625 candidates in total. Each operator (i.e., edge) is selected from a pre-defined operator set (``NONE``, ``SKIP_CONNECT``, ``CONV_1X1``, ``CONV_3X3`` and ``AVG_POOL_3X3``). Training approaches vary in the dataset used (CIFAR-10, CIFAR-100, ImageNet) and the number of epochs scheduled (12 and 200). Each combination of architecture and training approach is repeated 1 to 3 times with different random seeds.
API Documentation
^^^^^^^^^^^^^^^^^

.. automodule:: nni.nas.benchmarks.nasbench201
   :members:
   :imported-members:
NDS
---
@@ -137,45 +101,12 @@ Available Operators

Here is a list of available operators used in NDS.
.. automodule:: nni.nas.benchmarks.nds.constants
   :noindex:
API Documentation
^^^^^^^^^^^^^^^^^

.. automodule:: nni.nas.benchmarks.nds
   :members:
   :imported-members:
Construct Model Space
=====================
NNI provides powerful APIs for users to easily express a model space (or search space). First, users can construct their search space with high-level APIs (e.g., ``ValueChoice``, ``LayerChoice``), which are building blocks, or skeletons of building blocks. For advanced cases, NNI also provides an interface for customizing new mutators to express more complicated model spaces.

.. tip:: In most cases, the high-level APIs should be simple yet expressive enough. We strongly recommend users try them first, and report issues if those APIs are not satisfactory.
.. _mutation-primitives:
Mutation Primitives
-------------------
To let users easily express a model space within their PyTorch/TensorFlow model, NNI provides the inline mutation APIs shown below; a minimal usage sketch follows the note.

.. note:: We are actively adding more mutation primitives. If you have any suggestions, feel free to `ask here <https://github.com/microsoft/nni/issues>`__.
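As a taste of how these primitives compose, here is a minimal sketch (the shapes and candidate values are illustrative; ``model_wrapper`` decorates the top-level model space):

.. code-block:: python

    import nni.retiarii.nn.pytorch as nn
    from nni.retiarii import model_wrapper

    @model_wrapper
    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            # choose one of two convolution variants for this layer
            self.conv = nn.LayerChoice([
                nn.Conv2d(3, 16, kernel_size=3, padding=1),
                nn.Conv2d(3, 16, kernel_size=5, padding=2),
            ])
            # choose the dropout rate from candidate values
            self.dropout = nn.Dropout(nn.ValueChoice([0.25, 0.5, 0.75]))
            self.fc = nn.Linear(16 * 32 * 32, 10)

        def forward(self, x):
            x = self.dropout(self.conv(x))
            return self.fc(x.flatten(1))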
.. _nas-layer-choice:
LayerChoice
^^^^^^^^^^^
.. autoclass:: nni.retiarii.nn.pytorch.LayerChoice
:members:
.. _nas-input-choice:
InputChoice
^^^^^^^^^^^
.. autoclass:: nni.retiarii.nn.pytorch.InputChoice
:members:
.. autoclass:: nni.retiarii.nn.pytorch.ChosenInputs
:members:
.. _nas-value-choice:
ValueChoice
^^^^^^^^^^^
.. autoclass:: nni.retiarii.nn.pytorch.ValueChoice
:members:
.. _nas-repeat:
Repeat
^^^^^^
.. autoclass:: nni.retiarii.nn.pytorch.Repeat
:members:
.. _nas-cell:
Cell
^^^^
.. autoclass:: nni.retiarii.nn.pytorch.Cell
:members:
.. footbibliography::
.. _nas-cell-101:
NasBench101Cell
^^^^^^^^^^^^^^^
.. autoclass:: nni.retiarii.nn.pytorch.NasBench101Cell
:members:
.. footbibliography::
.. _nas-cell-201:
NasBench201Cell
^^^^^^^^^^^^^^^
.. autoclass:: nni.retiarii.nn.pytorch.NasBench201Cell
:members:
.. footbibliography::
.. _hyper-modules:
Hyper-module Library (experimental)
-----------------------------------
A hyper-module is a (PyTorch) module that contains many architecture/hyperparameter candidates for that module. By using a hyper-module in a user-defined model, NNI helps users automatically find the best architecture/hyperparameter of the hyper-modules for this model. This follows the design philosophy of Retiarii that users write a DNN model as a space.

We are planning to support some of the hyper-modules commonly used in the community, such as AutoDropout and AutoActivation. These are considered complementary to :ref:`mutation-primitives`, as they are often more concrete, specific, and tailored for particular needs.
.. _nas-autoactivation:
AutoActivation
^^^^^^^^^^^^^^
.. autoclass:: nni.retiarii.nn.pytorch.AutoActivation
:members:
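For instance, one could drop it in where a fixed activation would normally go (a minimal sketch; the surrounding layers are illustrative):

.. code-block:: python

    import nni.retiarii.nn.pytorch as nn

    class Block(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
            # a searchable activation in place of a fixed nn.ReLU()
            self.act = nn.AutoActivation()

        def forward(self, x):
            return self.act(self.conv(x))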
Mutators (advanced)
-------------------
Besides the inline mutation APIs demonstrated :ref:`above <mutation-primitives>`, NNI provides a more general approach to expressing a model space, i.e., *Mutator*, to cover more complex model spaces. The inline mutation APIs are themselves implemented with mutators in the underlying system and can be seen as special cases of model mutation. Please read :doc:`./mutator` for details; a flavor of what a mutator looks like is sketched below.
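A rough sketch of a mutator follows; ``get_nodes_by_label`` and ``update_operation`` belong to the graph API described in that document, and the label and operation names here are purely illustrative:

.. code-block:: python

    from nni.retiarii import Mutator

    class BlockMutator(Mutator):
        def __init__(self, target: str):
            super().__init__()
            self.target = target

        def mutate(self, model):
            # locate the placeholder node(s) to mutate by their label
            for node in model.get_nodes_by_label(self.target):
                # `choice` is executed with the sampler bound by the strategy
                chosen_type = self.choice(['op_type_a', 'op_type_b'])  # illustrative
                node.update_operation(chosen_type)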
Customize Exploration Strategy
==============================

Customize Multi-trial Strategy
------------------------------
If users want to innovate a new exploration strategy, they can easily customize one following the interface provided by NNI. Specifically, users should inherit the base strategy class ``BaseStrategy`` and implement the member function ``run``. This member function takes ``base_model`` and ``applied_mutators`` as its input arguments. It can simply apply the user-specified mutators in ``applied_mutators`` onto ``base_model`` to generate a new model. When a mutator is applied, it should be bound with a sampler (e.g., ``RandomSampler``). Every sampler implements the ``choice`` function, which chooses value(s) from candidate values. The ``choice`` functions invoked in mutators are executed with the sampler.

Below is a very simple random strategy, which makes the choices completely random.
.. code-block:: python

    import logging
    import random
    import time

    # exact import locations may differ slightly across NNI versions
    from nni.retiarii import Sampler
    from nni.retiarii.execution import query_available_resources, submit_models
    from nni.retiarii.strategy import BaseStrategy

    _logger = logging.getLogger(__name__)

    class RandomSampler(Sampler):
        def choice(self, candidates, mutator, model, index):
            return random.choice(candidates)

    class RandomStrategy(BaseStrategy):
        def __init__(self):
            self.random_sampler = RandomSampler()

        def run(self, base_model, applied_mutators):
            _logger.info('strategy start...')
            while True:
                avail_resource = query_available_resources()
                if avail_resource > 0:
                    model = base_model
                    _logger.info('apply mutators...')
                    _logger.info('mutators: %s', str(applied_mutators))
                    for mutator in applied_mutators:
                        mutator.bind_sampler(self.random_sampler)
                        model = mutator.apply(model)
                    # submit the mutated model for training
                    submit_models(model)
                else:
                    time.sleep(2)
Note that this strategy does not know the search space beforehand; it passively makes decisions every time ``choice`` is invoked from a mutator. If a strategy wants to know the whole search space before making any decision (e.g., TPE, SMAC), it can use the ``dry_run`` function provided by ``Mutator`` to obtain the space. An example strategy can be found :githublink:`here <nni/retiarii/strategy/tpe_strategy.py>`.
After generating a new model, the strategy can use our provided APIs (e.g., ``submit_models``, ``is_stopped_exec``) to submit the model and get its reported results.
References
^^^^^^^^^^
.. autoclass:: nni.retiarii.Sampler
:members:
:noindex:
.. autoclass:: nni.retiarii.strategy.BaseStrategy
:members:
:noindex:
Customize a New One-shot Trainer (legacy)
-----------------------------------------

One-shot trainers should inherit :class:`nni.retiarii.oneshot.BaseOneShotTrainer` and need to implement the ``fit()`` method (which conducts the fitting and searching process) and the ``export()`` method (which returns the searched best architecture).
Writing a one-shot trainer is very different from writing a single-arch evaluator. First of all, there are no restrictions on init method arguments; any Python arguments are acceptable. Secondly, the model fed into one-shot trainers might be a model with Retiarii-specific modules, such as LayerChoice and InputChoice. Such a model cannot forward-propagate directly, and trainers need to decide how to handle those modules.
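Concretely, the required interface is small; a minimal skeleton (details omitted) looks like the following:

.. code-block:: python

    from nni.retiarii.oneshot import BaseOneShotTrainer

    class MyOneShotTrainer(BaseOneShotTrainer):
        def __init__(self, model, num_epochs):
            self.model = model          # may contain LayerChoice / InputChoice
            self.num_epochs = num_epochs

        def fit(self):
            # the joint training-and-searching loop goes here
            for epoch in range(self.num_epochs):
                ...

        def export(self):
            # return the best architecture found, e.g., a dict of choices
            return {}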
@@ -54,3 +107,10 @@ A typical example is DartsTrainer, where learnable-parameters are used to combin
return result
The full code of DartsTrainer is available in the Retiarii source code. Please check :githublink:`DartsTrainer <nni/retiarii/oneshot/pytorch/darts.py>`.
References
^^^^^^^^^^
.. autoclass:: nni.retiarii.oneshot.BaseOneShotTrainer
:members:
:noindex:
@@ -3,10 +3,12 @@ Model Evaluators
A model evaluator is for training and validating each generated model; evaluators are necessary to evaluate the performance of newly explored models.
.. _functional-evaluator:
Customize Evaluator with Any Function
-------------------------------------
The simplest way to customize a new evaluator is with functional APIs, which is very easy when training code is already available. Users only need to write a fit function that wraps everything, which usually includes training, validating and testing a single model. This function takes one positional argument (``model_cls``) and possible keyword arguments. The keyword arguments (other than ``model_cls``) are fed to FunctionEvaluator as its initialization parameters (note that they will be :doc:`serialized <./serialization>`). In this way, users get everything under their control, but expose less information to the framework; as a result, further optimizations like :ref:`CGO <cgo-execution-engine>` might not be feasible. An example is as follows:
.. code-block:: python
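
    # The original example body is elided in this diff; below is a hedged
    # sketch, not the verbatim example. `fit` and `learning_rate` are
    # illustrative names.
    import nni
    from nni.retiarii.evaluator import FunctionalEvaluator

    def fit(model_cls, learning_rate):
        model = model_cls()
        # ... train and validate `model` here ...
        accuracy = 0.0  # placeholder for the real validation metric
        nni.report_final_result(accuracy)

    evaluator = FunctionalEvaluator(fit, learning_rate=1e-3)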
@@ -69,12 +71,6 @@ For example,
    val_dataloaders=pl.DataLoader(test_dataset, batch_size=100),
    max_epochs=10)
Customize Evaluator with PyTorch-Lightning
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -145,3 +141,39 @@ Then, users need to wrap everything (including LightningModule, trainer and data
    train_dataloader=pl.DataLoader(train_dataset, batch_size=100),
    val_dataloaders=pl.DataLoader(test_dataset, batch_size=100))
experiment = RetiariiExperiment(base_model, lightning, mutators, strategy)
References
----------
FunctionalEvaluator
^^^^^^^^^^^^^^^^^^^
.. autoclass:: nni.retiarii.evaluator.FunctionalEvaluator
:members:
:noindex:
.. _classification-evaluator:
Classification
^^^^^^^^^^^^^^
.. autoclass:: nni.retiarii.evaluator.pytorch.lightning.Classification
:members:
:noindex:
.. _regression-evaluator:
Regression
^^^^^^^^^^
.. autoclass:: nni.retiarii.evaluator.pytorch.lightning.Regression
:members:
:noindex:
Lightning
^^^^^^^^^
.. autoclass:: nni.retiarii.evaluator.pytorch.lightning.Lightning
:members:
:noindex:
Execution Engines
=================

The execution engine is for running a Retiarii experiment. NNI supports three execution engines; users can choose a specific engine according to the type of their model mutation definition and their requirements for cross-model optimizations.

* **Pure-python execution engine** is the default engine; it supports model spaces expressed by the `inline mutation API <./MutationPrimitives.rst>`__.
@@ -53,10 +53,12 @@ Three steps are need to use graph-based execution engine.
For exporting top models, the graph-based execution engine supports exporting their source code by running ``exp.export_top_models(formatter='code')``.
.. _cgo-execution-engine:
CGO Execution Engine (experimental)
-----------------------------------

The CGO (Cross-Graph Optimization) execution engine performs cross-model optimizations on top of the graph-based execution engine. With the CGO execution engine, multiple models can be merged and trained together in one trial.

Currently, it only supports ``DedupInputOptimizer``, which can merge graphs sharing the same dataset so that each batch of data is loaded and pre-processed only once, avoiding the bottleneck on data loading.

.. note:: To use the CGO engine, PyTorch Lightning above version 1.4.2 is required.
@@ -67,7 +69,7 @@ To enable CGO execution engine, you need to follow these steps:
2. Add configurations for remote training service
3. Add configurations for CGO engine

.. code-block:: python

    exp = RetiariiExperiment(base_model, trainer, mutators, strategy)
    config = RetiariiExeConfig('remote')
@@ -103,4 +105,16 @@ We have already implemented two trainers: :class:`nni.retiarii.evaluator.pytorch
Advanced users can also implement their own trainers by inheriting ``MultiModelSupervisedLearningModule``.

Sometimes, a mutated model cannot be executed (e.g., due to shape mismatch). When a trial running multiple models contains a bad model, the CGO execution engine will re-run each model independently in separate trials without cross-model optimizations.
References
^^^^^^^^^^
.. autoclass:: nni.retiarii.evaluator.pytorch.cgo.evaluator.MultiModelSupervisedLearningModule
:members:
.. autoclass:: nni.retiarii.evaluator.pytorch.cgo.evaluator.Classification
:members:
.. autoclass:: nni.retiarii.evaluator.pytorch.cgo.evaluator.Regression
:members:
Exploration Strategy
====================
There are two types of model space exploration approaches: **Multi-trial NAS** and **One-shot NAS**. Multi-trial NAS trains each sampled model in the model space independently, while One-shot NAS samples models from a supernet. After constructing the model space, users can use either exploration approach to explore the model space.
.. _multi-trial-nas:
Multi-trial strategy
--------------------
Multi-trial NAS means each sampled model from the model space is trained independently. A typical multi-trial NAS algorithm is `NASNet <https://arxiv.org/abs/1707.07012>`__. In multi-trial NAS, users need a model evaluator to evaluate the performance of each sampled model and an exploration strategy to sample models from the defined model space. Users can use NNI-provided model evaluators or write their own, and simply pick an exploration strategy. Advanced users can also customize new exploration strategies.

To use an exploration strategy, users simply instantiate one and pass the instantiated object to :class:`nni.retiarii.experiment.pytorch.RetiariiExperiment`. Below is a simple example.
.. code-block:: python

    import nni.retiarii.strategy as strategy

    exploration_strategy = strategy.Random(dedup=True)
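The strategy then goes into the experiment. A minimal sketch, assuming ``base_model`` and ``evaluator`` have been defined elsewhere (e.g., as in the quick start):

.. code-block:: python

    from nni.retiarii.experiment.pytorch import RetiariiExperiment, RetiariiExeConfig

    exp = RetiariiExperiment(base_model, evaluator, [], exploration_strategy)
    exp_config = RetiariiExeConfig('local')
    exp_config.max_trial_number = 20
    exp.run(exp_config, 8081)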
Rather than using ``strategy.Random``, users can choose one of the strategies from below.
.. _random-strategy:
Random
^^^^^^
.. autoclass:: nni.retiarii.strategy.Random
:members:
:noindex:
.. _grid-search-strategy:
GridSearch
^^^^^^^^^^
.. autoclass:: nni.retiarii.strategy.GridSearch
:members:
:noindex:
.. _regularized-evolution-strategy:
RegularizedEvolution
^^^^^^^^^^^^^^^^^^^^
.. autoclass:: nni.retiarii.strategy.RegularizedEvolution
:members:
:noindex:
.. _tpe-strategy:
TPE
^^^
.. autoclass:: nni.retiarii.strategy.TPE
:members:
:noindex:
.. footbibliography::
.. _policy-based-rl-strategy:
PolicyBasedRL
^^^^^^^^^^^^^
.. autoclass:: nni.retiarii.strategy.PolicyBasedRL
:members:
:noindex:
.. footbibliography::
.. _one-shot-nas:
One-shot strategy
-----------------
One-shot NAS algorithms leverage weight sharing among models in the neural architecture search space to train a supernet, and use this supernet to guide the selection of better models. This type of algorithm greatly reduces computational resources compared to independently training each model from scratch (which we call "Multi-trial NAS"). NNI supports the popular One-shot NAS algorithms listed below.
.. _darts-strategy:
DARTS
^^^^^
The paper `DARTS: Differentiable Architecture Search <https://arxiv.org/abs/1806.09055>`__ addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Their method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent.
The authors' code optimizes the network weights and architecture weights alternately in mini-batches. They further explore the possibility of using second-order optimization (unrolling) instead of first-order to improve performance.

The implementation on NNI is based on the `official implementation <https://github.com/quark0/darts>`__ and a `popular 3rd-party repo <https://github.com/khanrc/pt.darts>`__. DARTS on NNI is designed to be general for arbitrary search spaces. A CNN search space tailored for CIFAR10, the same as in the original paper, is implemented as a use case of DARTS.
.. autoclass:: nni.retiarii.oneshot.pytorch.DartsTrainer
:noindex:
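A minimal usage sketch; ``model``, the ``accuracy`` helper and ``dataset_train`` are assumed to be defined (see the example below for a complete script):

.. code-block:: python

    import torch
    import torch.nn as nn
    from nni.retiarii.oneshot.pytorch import DartsTrainer

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.025, momentum=0.9)
    trainer = DartsTrainer(
        model=model,
        loss=criterion,
        metrics=lambda output, target: accuracy(output, target),  # assumed helper
        optimizer=optimizer,
        num_epochs=50,
        dataset=dataset_train,
        batch_size=64,
        unrolled=False,  # True enables the second-order optimization
    )
    trainer.fit()
    final_architecture = trainer.export()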
Reproduction Results
""""""""""""""""""""
The above-mentioned example is meant to reproduce the results in the paper; we run experiments with both first- and second-order optimization. Due to the time limit, we retrain *only the best architecture* derived from the search phase and we repeat the experiment *only once*. Our results are currently on par with the results reported in the paper. We will add more results later when ready.
.. list-table::
:header-rows: 1
:widths: auto
* - Test error (%)
- In paper
- Reproduction
* - First order (CIFAR10)
- 3.00 +/- 0.14
- 2.78
* - Second order (CIFAR10)
- 2.76 +/- 0.09
- 2.80
Examples
""""""""
:githublink:`Example code <examples/nas/oneshot/darts>`
.. code-block:: bash
# Clone NNI if the code is not cloned yet; otherwise, skip this line and enter the code folder.
git clone https://github.com/Microsoft/nni.git
# search the best architecture
cd examples/nas/oneshot/darts
python3 search.py
# train the best architecture
python3 retrain.py --arc-checkpoint ./checkpoints/epoch_49.json
Limitations
"""""""""""
* DARTS doesn't support DataParallel and needs to be customized in order to support DistributedDataParallel.
.. _enas-strategy:
ENAS
^^^^
The paper `Efficient Neural Architecture Search via Parameter Sharing <https://arxiv.org/abs/1802.03268>`__ uses parameter sharing between child models to accelerate the NAS process. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile, the model corresponding to the selected subgraph is trained to minimize a canonical cross-entropy loss.

The implementation on NNI is based on the `official implementation in TensorFlow <https://github.com/melodyguan/enas>`__, including a general-purpose reinforcement-learning controller and a trainer that trains the target network and this controller alternately. Following the paper, we have also implemented macro and micro search spaces on CIFAR10 to demonstrate how to use these trainers. Since the code to train from scratch on NNI is not ready yet, reproduction results are currently unavailable.
.. autoclass:: nni.retiarii.oneshot.pytorch.EnasTrainer
:noindex:
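A minimal usage sketch; ``model``, the metric and reward helpers, and the dataset are assumed to be defined:

.. code-block:: python

    import torch
    import torch.nn as nn
    from nni.retiarii.oneshot.pytorch import EnasTrainer

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
    trainer = EnasTrainer(
        model=model,
        loss=criterion,
        metrics=lambda output, target: accuracy(output, target),                 # assumed helper
        reward_function=lambda output, target: reward_accuracy(output, target),  # assumed helper
        optimizer=optimizer,
        num_epochs=10,
        dataset=dataset_train,  # assumed torch Dataset
    )
    trainer.fit()
    print(trainer.export())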
Examples
""""""""
:githublink:`Example code <examples/nas/oneshot/enas>`
.. code-block:: bash
# Clone NNI if the code is not cloned yet; otherwise, skip this line and enter the code folder.
git clone https://github.com/Microsoft/nni.git
# search the best architecture
cd examples/nas/oneshot/enas
# search in macro search space
python3 search.py --search-for macro
# search in micro search space
python3 search.py --search-for micro
# view more options for search
python3 search.py -h
.. _fbnet-strategy:
FBNet
^^^^^
.. note:: This one-shot NAS is still implemented under NNI NAS 1.0, and will `be migrated to the Retiarii framework in v2.4 <https://github.com/microsoft/nni/issues/3814>`__.

For the mobile application of facial landmark detection, based on the basic architecture of the PFLD model, we have applied FBNet (block-wise DNAS) to design a concise model with a trade-off between latency and accuracy. References are listed below:
* `FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search <https://arxiv.org/abs/1812.03443>`__
* `PFLD: A Practical Facial Landmark Detector <https://arxiv.org/abs/1902.10859>`__
FBNet is a block-wise differentiable NAS method (block-wise DNAS), where the best candidate building blocks can be chosen by Gumbel-Softmax random sampling and differentiable training. At each layer (or stage) to be searched, the diverse candidate blocks are placed side by side (similar in effect to structural re-parameterization), leading to sufficient pre-training of the supernet. The pre-trained supernet is then sampled for fine-tuning of the subnet to achieve better performance.
.. image:: ../../img/fbnet.png
PFLD is a lightweight facial landmark model for realtime applications. The architecture of PFLD is first simplified for acceleration by using the stem block of PeleeNet, average pooling with depthwise convolution, and the eSE module.

To achieve a better trade-off between latency and accuracy, FBNet is further applied to the simplified PFLD to search for the best block at each specific layer. The search space is based on the FBNet space, and optimized for mobile deployment by using average pooling with depthwise convolution, the eSE module, etc.
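The core mechanism of block-wise DNAS described above can be sketched in a few lines of PyTorch. This is a conceptual illustration only, not NNI's actual implementation:

.. code-block:: python

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GumbelMixedOp(nn.Module):
        """Weighted sum of side-by-side candidate blocks, with block weights
        sampled differentiably via Gumbel-Softmax."""

        def __init__(self, candidates):
            super().__init__()
            self.ops = nn.ModuleList(candidates)
            # architecture logits, one per candidate block
            self.alpha = nn.Parameter(torch.zeros(len(candidates)))

        def forward(self, x, tau=1.0):
            weights = F.gumbel_softmax(self.alpha, tau=tau)
            return sum(w * op(x) for w, op in zip(weights, self.ops))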
Experiments
"""""""""""
To verify the effectiveness of FBNet applied to PFLD, we choose the open-source dataset with 106 landmark points as the benchmark:
* `Grand Challenge of 106-Point Facial Landmark Localization <https://arxiv.org/abs/1905.03469>`__
The baseline model is denoted as MobileNet-V3 PFLD (`Reference baseline <https://github.com/Hsintao/pfld_106_face_landmarks>`__), and the searched model is denoted as Subnet. The experimental results are listed below, where the latency is tested on a Qualcomm 625 CPU (ARMv8):
.. list-table::
:header-rows: 1
:widths: auto
* - Model
- Size
- Latency
- Validation NME
* - MobileNet-V3 PFLD
- 1.01MB
- 10ms
- 6.22%
* - Subnet
- 693KB
- 1.60ms
- 5.58%
Example
"""""""
`Example code <https://github.com/microsoft/nni/tree/master/examples/nas/oneshot/pfld>`__
Please run the following scripts in the example directory.

The Python dependencies used here are listed below:
.. code-block:: bash
numpy==1.18.5
opencv-python==4.5.1.48
torch==1.6.0
torchvision==0.7.0
onnx==1.8.1
onnx-simplifier==0.3.5
onnxruntime==1.7.0
To run the tutorial, follow the steps below:

1. **Data Preparation**: First, download the `106points dataset <https://drive.google.com/file/d/1I7QdnLxAlyG2Tq3L66QYzGhiBEoVfzKo/view?usp=sharing>`__ to the path ``./data/106points``. The dataset includes the train set and test set:
.. code-block:: bash
./data/106points/train_data/imgs
./data/106points/train_data/list.txt
./data/106points/test_data/imgs
./data/106points/test_data/list.txt
2. **Search**: Based on the architecture of the simplified PFLD, the multi-stage search space and the hyper-parameters for searching should first be configured to construct the supernet. For example,
.. code-block:: python

    from lib.builder import search_space
    from lib.ops import PRIMITIVES
    from lib.supernet import PFLDInference, AuxiliaryNet
    from nni.algorithms.nas.pytorch.fbnet import LookUpTable, NASConfig

    # configuration of hyper-parameters
    # search_space defines the multi-stage search space
    nas_config = NASConfig(
        model_dir="./ckpt_save",
        nas_lr=0.01,
        mode="mul",
        alpha=0.25,
        beta=0.6,
        search_space=search_space,
    )
    # lookup table to manage the information
    lookup_table = LookUpTable(config=nas_config, primitives=PRIMITIVES)
    # create the supernet
    pfld_backbone = PFLDInference(lookup_table)
After creating the supernet with the specified search space and hyper-parameters, we can run the command below to start searching and training the supernet:
.. code-block:: bash
python train.py --dev_id "0,1" --snapshot "./ckpt_save" --data_root "./data/106points"
The validation accuracy will be shown during training, and the model with the best accuracy will be saved as ``./ckpt_save/supernet/checkpoint_best.pth``.
3. **Finetune**: After pre-training of the supernet, we can run the command below to sample the subnet and conduct the finetuning:
.. code-block:: bash
python retrain.py --dev_id "0,1" --snapshot "./ckpt_save" --data_root "./data/106points" \
--supernet "./ckpt_save/supernet/checkpoint_best.pth"
The validation accuracy will be shown during training, and the model with the best accuracy will be saved as ``./ckpt_save/subnet/checkpoint_best.pth``.

4. **Export**: After finetuning the subnet, we can run the command below to export the ONNX model:
.. code-block:: bash
python export.py --supernet "./ckpt_save/supernet/checkpoint_best.pth" \
--resume "./ckpt_save/subnet/checkpoint_best.pth"
The ONNX model is saved as ``./output/subnet.onnx``, which can be further converted to a mobile inference engine by using `MNN <https://github.com/alibaba/MNN>`__.

The checkpoints of the pre-trained supernet and subnet are offered below:
* `Supernet <https://drive.google.com/file/d/1TCuWKq8u4_BQ84BWbHSCZ45N3JGB9kFJ/view?usp=sharing>`__
* `Subnet <https://drive.google.com/file/d/160rkuwB7y7qlBZNM3W_T53cb6MQIYHIE/view?usp=sharing>`__
* `ONNX model <https://drive.google.com/file/d/1s-v-aOiMv0cqBspPVF3vSGujTbn_T_Uo/view?usp=sharing>`__
.. _spos-strategy:
SPOS
^^^^
`Single Path One-Shot Neural Architecture Search with Uniform Sampling <https://arxiv.org/abs/1904.00420>`__ proposes a one-shot NAS method that addresses the difficulties of training one-shot NAS models by constructing a simplified supernet trained with a uniform path sampling method, so that all underlying architectures (and their weights) are trained fully and equally. An evolutionary algorithm is then applied to efficiently search for the best-performing architectures without any fine-tuning.

The implementation on NNI is based on the `official repo <https://github.com/megvii-model/SinglePathOneShot>`__. We implement a trainer that trains the supernet, and an evolution tuner that leverages the power of the NNI framework to speed up the evolutionary search phase.
.. autoclass:: nni.retiarii.oneshot.pytorch.SinglePathTrainer
:noindex:
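A minimal usage sketch; ``model``, the ``accuracy`` helper and the datasets are assumed to be defined:

.. code-block:: python

    import torch
    import torch.nn as nn
    from nni.retiarii.oneshot.pytorch import SinglePathTrainer

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    trainer = SinglePathTrainer(
        model=model,
        loss=criterion,
        metrics=lambda output, target: accuracy(output, target),  # assumed helper
        optimizer=optimizer,
        num_epochs=100,
        dataset_train=dataset_train,  # assumed torch Datasets
        dataset_valid=dataset_valid,
    )
    trainer.fit()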
Examples
""""""""
Here is a use case, which uses the search space in the paper. However, we applied a latency limit instead of a FLOPs limit in the architecture search phase.
:githublink:`Example code <examples/nas/oneshot/spos>`
**Requirements:** Prepare ImageNet in the standard format (follow the script `here <https://gist.github.com/BIGBALLON/8a71d225eff18d88e469e6ea9b39cef4>`__). Linking it to ``data/imagenet`` will be more convenient. Download the checkpoint file from `here <https://1drv.ms/u/s!Am_mmG2-KsrnajesvSdfsq_cN48?e=aHVppN>`__ (maintained by `Megvii <https://github.com/megvii-model>`__) if you don't want to retrain the supernet. Put ``checkpoint-150000.pth.tar`` under ``data`` directory. After preparation, it's expected to have the following code structure:
.. code-block:: bash
spos
├── architecture_final.json
├── blocks.py
├── data
│ ├── imagenet
│ │ ├── train
│ │ └── val
│ └── checkpoint-150000.pth.tar
├── network.py
├── readme.md
├── supernet.py
├── evaluation.py
├── search.py
└── utils.py
Then follow the 3 steps:
1. **Train Supernet**:
.. code-block:: bash
python supernet.py
This will export the checkpoint to the ``checkpoints`` directory for the next step.
.. note:: The data loading used in the official repo is `slightly different from usual <https://github.com/megvii-model/SinglePathOneShot/issues/5>`__, as they use BGR tensors and intentionally keep the values between 0 and 255 to align with their own DL framework. The option ``--spos-preprocessing`` will simulate the original behavior and enable you to use the pretrained checkpoints.

2. **Evolution Search**: Single Path One-Shot leverages an evolutionary algorithm to search for the best architecture. In the paper, the search module, which is responsible for testing the sampled architecture, recalculates all the batch norm statistics on a subset of training images and evaluates the architecture on the full validation set. In this example, we have an incomplete implementation of the evolution search: the example only supports training from scratch, and inheriting weights from the pretrained supernet is not supported yet. To search with the regularized evolution strategy, run
.. code-block:: bash
python search.py
The final architecture exported from every epoch of evolution can be found in ``trials`` under the working directory of your tuner, which, by default, is ``$HOME/nni-experiments/your_experiment_id/trials``.
3. **Train for Evaluation**:
.. code-block:: bash
python evaluation.py
By default, it will use ``architecture_final.json``. This architecture is provided by the official repo (converted into NNI format). You can use any architecture (e.g., the architecture found in step 2) with ``--fixed-arc`` option.
Known Limitations
"""""""""""""""""
* Block search only. Channel search is not supported yet.
* In the search phase, training from scratch is required. Inheriting weights from the supernet is not supported yet.
Current Reproduction Results
""""""""""""""""""""""""""""
Reproduction is still in progress. Due to the gap between the official release and the original paper, we compare our current results with the official repo (our own run) and the paper.

* The evolution phase is almost aligned with the official repo. Our evolution algorithm shows a converging trend and reaches ~65% accuracy at the end of the search. Nevertheless, this result is not on par with the paper. For details, please refer to `this issue <https://github.com/megvii-model/SinglePathOneShot/issues/6>`__.
* The retrain phase is not aligned. Our retraining code, which uses the architecture released by the authors, reaches 72.14% accuracy, still having a gap towards the 73.61% by the official release and the 74.3% reported in the original paper.
.. _proxylessnas-strategy:
ProxylessNAS
^^^^^^^^^^^^
The paper `ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware <https://arxiv.org/pdf/1812.00332.pdf>`__ removes the proxy; it directly learns the architectures for large-scale target tasks and target hardware platforms. The authors address the high memory consumption issue of differentiable NAS and reduce the computational cost to the same level as regular training, while still allowing a large candidate set. Please refer to the paper for details.
.. autoclass:: nni.retiarii.oneshot.pytorch.ProxylessTrainer
:noindex:
Usage
"""""
To use the ProxylessNAS training/searching approach, users need to specify the search space in their model using the :doc:`NNI NAS interface <./construct_space>`, e.g., ``LayerChoice``, ``InputChoice``. After defining and instantiating the model, the remaining work can be left to ProxylessNasTrainer by instantiating the trainer and passing the model to it.
.. code-block:: python
    trainer = ProxylessTrainer(model,
                               loss=LabelSmoothingLoss(),
                               dataset=None,
                               optimizer=optimizer,
                               metrics=lambda output, target: accuracy(output, target, topk=(1, 5,)),
                               num_epochs=120,
                               log_frequency=10,
                               grad_reg_loss_type=args.grad_reg_loss_type,
                               grad_reg_loss_params=grad_reg_loss_params,
                               applied_hardware=args.applied_hardware, dummy_input=(1, 3, 224, 224),
                               ref_latency=args.reference_latency)
    trainer.train()
    trainer.export(args.arch_path)
The complete example code can be found :githublink:`here <examples/nas/oneshot/proxylessnas>`.
Implementation
""""""""""""""
The implementation on NNI is based on the `official implementation <https://github.com/mit-han-lab/ProxylessNAS>`__. The official implementation supports two training approaches: gradient descent and RL-based. Our current implementation on NNI supports the gradient descent training approach; complete support of ProxylessNAS is ongoing.

The official implementation supports different target hardware, including 'mobile', 'cpu', 'gpu8', and 'flops'. In the NNI repo, hardware latency prediction is supported by `Microsoft nn-Meter <https://github.com/microsoft/nn-Meter>`__. nn-Meter is an accurate inference latency predictor for DNN models on diverse edge devices. nn-Meter supports four hardware platforms so far, including ``cortexA76cpu_tflite21``, ``adreno640gpu_tflite21``, ``adreno630gpu_tflite21``, and ``myriadvpu_openvino2019r2``. Users can find more information about nn-Meter on its website. More hardware will be supported in the future. Users can find more details about applying ``nn-Meter`` `here <./HardwareAwareNAS.rst>`__.
Below we describe the implementation details. Like other one-shot NAS algorithms on NNI, ProxylessNAS is composed of two parts: *search space* and *training approach*. Users who want to flexibly define their own search space while using the built-in ProxylessNAS training approach can refer to the :githublink:`example code <examples/nas/oneshot/proxylessnas>` for reference.
.. image:: ../../img/proxylessnas.png
The ProxylessNAS training approach is composed of ProxylessLayerChoice and ProxylessNasTrainer. ProxylessLayerChoice instantiates a MixedOp for each mutable (i.e., LayerChoice) and manages the architecture weights in the MixedOp. **For DataParallel**, architecture weights should be included in the user model. Specifically, in the ProxylessNAS implementation, we add the MixedOp to the corresponding mutable (i.e., LayerChoice) as a member variable. The ProxylessLayerChoice class also exposes two member functions, ``resample`` and ``finalize_grad``, for the trainer to control the training of the architecture weights.
Reproduction Results
""""""""""""""""""""
To reproduce the result, we first ran the search. We found that although it runs for many epochs, the chosen architecture converges in the first several epochs. This is probably caused by the hyper-parameters or the implementation; we are working on it.
Hardware-aware NAS
==================

.. This file should be rewritten as a tutorial

End-to-end Multi-trial SPOS Demo
--------------------------------
@@ -61,14 +61,14 @@ To run the one-shot ProxylessNAS demo, first install nn-Meter by running:
Then run the one-shot ProxylessNAS demo:

.. code-block:: bash

    python ${NNI_ROOT}/examples/nas/oneshot/proxylessnas/main.py --applied_hardware <hardware> --reference_latency <reference latency (ms)>
How the demo works
^^^^^^^^^^^^^^^^^^
In the implementation of the ProxylessNAS ``trainer``, we provide a ``HardwareLatencyEstimator`` which currently builds a lookup table that stores the measured latency of each candidate building block in the search space. The latency sum of all building blocks in a candidate model is treated as the model's inference latency. The latency prediction is obtained by ``nn-Meter``. ``HardwareLatencyEstimator`` predicts the expected latency of a mixed operation based on the path weights of ``ProxylessLayerChoice``. By leveraging ``nn-Meter`` in NNI, users can apply ProxylessNAS to search for efficient DNN models on more types of edge devices.
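Conceptually, the estimate is just a weighted sum over the lookup table; a sketch with hypothetical names:

.. code-block:: python

    # Conceptual sketch only; `layers`, `path_weights` and `latency_table`
    # are hypothetical names, not NNI's actual API.
    def expected_model_latency(layers, latency_table):
        total = 0.0
        for layer in layers:
            # path_weights: probability of each candidate block in this layer
            total += sum(weight * latency_table[block]
                         for block, weight in layer.path_weights.items())
        return total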
Besides ``applied_hardware`` and ``reference_latency``, there are some other parameters related to hardware-aware ProxylessNAS training in this :githublink:`example <examples/nas/oneshot/proxylessnas/main.py>`:
...
Retiarii for Neural Architecture Search
=======================================
.. toctree::
   :hidden:
   :titlesonly:

   Quick Start <../tutorials/hello_nas>
   construct_space
   exploration_strategy
   evaluator
   advanced_usage
   reference
.. attention:: NNI's latest NAS support is all based on the Retiarii framework. Users who are still on the `early version using NNI NAS v1.0 <https://nni.readthedocs.io/en/v2.2/nas.html>`__ should migrate their work to Retiarii as soon as possible.
.. Using rubric to prevent the section heading from being included into the toc
.. rubric:: Motivation
Automatic neural architecture search is playing an increasingly important role in finding better models. Recent research has proven the feasibility of automatic NAS and has led to models that beat many manually designed and tuned models. Representative works include `NASNet <https://arxiv.org/abs/1707.07012>`__, `ENAS <https://arxiv.org/abs/1802.03268>`__, `DARTS <https://arxiv.org/abs/1806.09055>`__, `Network Morphism <https://arxiv.org/abs/1806.10282>`__, and `Evolution <https://arxiv.org/abs/1703.01041>`__. In addition, new innovations continue to emerge.
However, it is pretty hard to use existing NAS work to help develop common DNN models. Therefore, we designed `Retiarii <https://www.usenix.org/system/files/osdi20-zhang_quanlu.pdf>`__, a novel NAS/HPO framework, and implemented it in NNI. It helps users easily construct a model space (or search space, tuning space), and utilize existing NAS algorithms. The framework also facilitates NAS innovation and is used to design new NAS algorithms.
In summary, we highlight the following features for Retiarii:
* Simple APIs are provided for defining model search space within PyTorch/TensorFlow model.
* SOTA NAS algorithms are built-in to be used for exploring model search space.
* System-level optimizations are implemented for speeding up the exploration.
.. rubric:: Overview
Broadly speaking, solving any particular task with neural architecture search typically requires: search space design, search strategy selection, and performance evaluation. The three components work together in the following loop (the figure is from the well-known `NAS survey <https://arxiv.org/abs/1808.05377>`__):
.. image:: ../../img/nas_abstract_illustration.png
To be consistent, we will use the following terminologies throughout our documentation:
* *Model search space*: the set of models from which the best model is explored/searched. Sometimes we use *search space* or *model space* for short.
* *Exploration strategy*: the algorithm used to explore a model search space. Sometimes we also call it *search strategy*.
* *Model evaluator*: the component used to train a model and evaluate its performance.
Concretely, an exploration strategy selects an architecture from a predefined search space. The architecture is passed to a model evaluator to get a score, which represents how well the architecture performs on a particular task. This process repeats until the search finds the best architecture.
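The loop can be sketched in a few lines of Python (a toy schematic, not any specific NNI API; ``sample`` and ``evaluate`` stand in for the exploration strategy and the model evaluator):

.. code-block:: python

    import random

    # Toy schematic of the search loop: the strategy samples an architecture,
    # the evaluator scores it, and the best architecture seen so far is kept.
    def nas_loop(search_space, sample, evaluate, budget=10):
        best_arch, best_score = None, float('-inf')
        for _ in range(budget):
            arch = sample(search_space)    # exploration strategy picks a model
            score = evaluate(arch)         # model evaluator trains and scores it
            if score > best_score:
                best_arch, best_score = arch, score
        return best_arch, best_score

    # Example: a random strategy over a tiny grid, with a dummy scoring function.
    space = [{'kernel': k, 'width': w} for k in (3, 5) for w in (32, 64)]
    best, _ = nas_loop(space, random.choice, lambda a: a['width'] / a['kernel'])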
For this process, we list the core engineering challenges (also pointed out by the `NAS survey <https://arxiv.org/abs/1808.05377>`__) and the solutions NNI provides to address them:
* **Search space design:** The search space defines which architectures can be represented in principle. Incorporating prior knowledge about typical properties of architectures well-suited for a task can reduce the size of the search space and simplify the search. However, this also introduces a human bias, which may prevent finding novel architectural building blocks that go beyond current human knowledge. In NNI, we provide a wide range of APIs to build the search space: :doc:`high-level APIs <construct_space>`, which enable incorporating human knowledge about what makes a good architecture or search space, and :doc:`low-level APIs <mutator>`, which provide a list of primitives for constructing a network operator by operator.
* **Exploration strategy:** The exploration strategy defines how to explore the search space (which is often exponentially large). It encompasses the classical exploration-exploitation trade-off: on the one hand, it is desirable to find well-performing architectures quickly; on the other hand, premature convergence to a region of suboptimal architectures should be avoided. In NNI, we provide :doc:`a list of strategies <exploration_strategy>`. Some are powerful but time-consuming, while others may be suboptimal but very efficient. Users can always find one that matches their needs.
* **Performance estimation / evaluator:** The objective of NAS is typically to find architectures that achieve high predictive performance on unseen data. Performance estimation refers to the process of estimating this performance. In NNI, this process is implemented with an :doc:`evaluator <evaluator>`, which is responsible for estimating a model's performance. Evaluators range from the simplest option, e.g., standard training and validation of the architecture on data, to complex configurations and implementations.
.. rubric:: Writing Model Space
The following APIs are provided to ease the engineering effort of writing a new search space.
.. list-table::
:header-rows: 1
:widths: auto
* - Name
- Category
- Brief Description
* - :ref:`nas-layer-choice`
- :ref:`Multi-trial <multi-trial-nas>`
- Select from some PyTorch modules
* - :ref:`nas-input-choice`
- :ref:`Multi-trial <multi-trial-nas>`
- Select from some inputs (tensors)
* - :ref:`nas-value-choice`
- :ref:`Multi-trial <multi-trial-nas>`
- Select from some candidate values
* - :ref:`nas-repeat`
- :ref:`Multi-trial <multi-trial-nas>`
- Repeat a block by a variable number of times
* - :ref:`nas-cell`
- :ref:`Multi-trial <multi-trial-nas>`
- Cell structure popularly used in literature
* - :ref:`nas-cell-101`
- :ref:`Multi-trial <multi-trial-nas>`
- Cell structure (variant) proposed by NAS-Bench-101
* - :ref:`nas-cell-201`
- :ref:`Multi-trial <multi-trial-nas>`
- Cell structure (variant) proposed by NAS-Bench-201
* - :ref:`nas-autoactivation`
- :ref:`Hyper-modules <hyper-modules>`
- Searching for activation functions
* - :doc:`Mutator <mutator>`
- :doc:`mutator`
- Flexible mutations on graphs
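For example, a small model space mixing a layer choice and a value choice can be written as follows (a sketch in the spirit of the quick start; the exact layer shapes are illustrative):

.. code-block:: python

    import torch
    import nni.retiarii.nn.pytorch as nn
    from nni.retiarii import model_wrapper

    @model_wrapper
    class ModelSpace(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(1, 32, 3, 1)
            # LayerChoice: pick one of the candidate convolutions.
            self.conv2 = nn.LayerChoice([
                nn.Conv2d(32, 64, 3, 1),
                nn.Conv2d(32, 64, 5, 1, padding=1),
            ])
            # ValueChoice: pick the dropout rate from candidate values.
            self.dropout = nn.Dropout(nn.ValueChoice([0.25, 0.5, 0.75]))
            self.fc = nn.Linear(64, 10)

        def forward(self, x):
            x = torch.relu(self.conv1(x))
            x = torch.relu(self.conv2(x))
            x = self.dropout(x)
            return self.fc(x.mean([2, 3]))  # global average pooling, then classify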
.. rubric:: Exploring the Search Space
We provide the following (built-in) algorithms to explore the user-defined search space.
.. list-table::
:header-rows: 1
:widths: auto
* - Name
- Category
- Brief Description
* - :ref:`random-strategy`
- :ref:`Multi-trial <multi-trial-nas>`
- Randomly sample an architecture each time
* - :ref:`grid-search-strategy`
- :ref:`Multi-trial <multi-trial-nas>`
- Traverse the search space and try all possibilities
* - :ref:`regularized-evolution-strategy`
- :ref:`Multi-trial <multi-trial-nas>`
- Evolution algorithm for NAS. `Reference <https://arxiv.org/abs/1802.01548>`__
* - :ref:`tpe-strategy`
- :ref:`Multi-trial <multi-trial-nas>`
- Tree-structured Parzen Estimator (TPE). `Reference <https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf>`__
* - :ref:`policy-based-rl-strategy`
- :ref:`Multi-trial <multi-trial-nas>`
- Policy-based reinforcement learning, based on the implementation of Tianshou. `Reference <https://arxiv.org/abs/1611.01578>`__
* - :ref:`darts-strategy`
- :ref:`One-shot <one-shot-nas>`
- Continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. `Reference <https://arxiv.org/abs/1806.09055>`__
* - :ref:`enas-strategy`
- :ref:`One-shot <one-shot-nas>`
- RL controller learns to generate the best network on a super-net. `Reference <https://arxiv.org/abs/1802.03268>`__
* - :ref:`fbnet-strategy`
- :ref:`One-shot <one-shot-nas>`
- Choose the best block by using Gumbel Softmax random sampling and differentiable training. `Reference <https://arxiv.org/abs/1812.03443>`__
* - :ref:`spos-strategy`
- :ref:`One-shot <one-shot-nas>`
- Train a super-net with uniform path sampling. `Reference <https://arxiv.org/abs/1904.00420>`__
* - :ref:`proxylessnas-strategy`
- :ref:`One-shot <one-shot-nas>`
- A low-memory-consuming optimized version of differentiable architecture search. `Reference <https://arxiv.org/abs/1812.00332>`__
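Multi-trial strategies are instantiated from ``nni.retiarii.strategy`` and passed to the experiment; for instance (hyper-parameter values below are illustrative):

.. code-block:: python

    import nni.retiarii.strategy as strategy

    # Random search, skipping duplicated architectures.
    search_strategy = strategy.Random(dedup=True)

    # Or an evolutionary search (illustrative hyper-parameters):
    # search_strategy = strategy.RegularizedEvolution(population_size=100, sample_size=25)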
.. rubric:: Evaluators
The evaluator APIs can be used to build the performance assessment component of your neural architecture search process.
.. list-table::
:header-rows: 1
:widths: auto
* - Name
- Type
- Brief Description
* - :ref:`functional-evaluator`
- General
- Evaluate with any Python function
* - :ref:`classification-evaluator`
- Built upon `PyTorch Lightning <https://www.pytorchlightning.ai/>`__
- For classification tasks
* - :ref:`regression-evaluator`
- Built upon `PyTorch Lightning <https://www.pytorchlightning.ai/>`__
- For regression tasks
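As a quick example, a functional evaluator wraps any Python function that trains a sampled architecture and reports its score (``train_and_validate`` below is an assumed user-defined helper, not an NNI API):

.. code-block:: python

    import nni
    from nni.retiarii.evaluator import FunctionalEvaluator

    def evaluate_model(model_cls):
        model = model_cls()                   # instantiate the sampled architecture
        accuracy = train_and_validate(model)  # assumed user-defined training helper
        nni.report_final_result(accuracy)     # report the final score to NNI

    evaluator = FunctionalEvaluator(evaluate_model)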
Construct Space with Mutators
=============================
Besides the inline mutation APIs demonstrated :ref:`above <mutation-primitives>`, NNI provides a more general approach to express a model space, i.e., *Mutator*, to cover more complex model spaces. The inline mutation APIs are themselves implemented with mutators in the underlying system and can be seen as special cases of model mutation.
.. note:: Mutator and inline mutation APIs cannot be used together.
...@@ -37,7 +37,7 @@ User-defined mutator should inherit ``Mutator`` class, and implement mutation lo
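The mutator definition itself is elided in this diff; based on the description below, it looks roughly like the following sketch (``BlockMutator`` and its constructor arguments follow the surrounding text; the exact original may differ):

.. code-block:: python

    from nni.retiarii import Mutator

    class BlockMutator(Mutator):
        def __init__(self, target: str, candidate_op_list: list):
            super().__init__()
            self.target = target
            self.candidate_op_list = candidate_op_list

        def mutate(self, model):
            # Find all nodes in the graph IR whose label matches the target.
            nodes = model.get_nodes_by_label(self.target)
            for node in nodes:
                # `self.choice` expresses the set of possible mutations.
                chosen_op = self.choice(self.candidate_op_list)
                node.update_operation(chosen_op.type, chosen_op.params)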
The input of ``mutate`` is the graph IR (Intermediate Representation) of the base model (please refer to `here <./ApiReference.rst>`__ for the format and APIs of the IR). Users can mutate the graph using the graph's member functions (e.g., ``get_nodes_by_label``, ``update_operation``). The mutation operations can be combined with the API ``self.choice`` in order to express a set of possible mutations. In the above example, the node's operation can be changed to any operation from ``candidate_op_list``.
Use a placeholder to make mutation easier: ``nn.Placeholder``. If you want to mutate a subgraph or node of your model, you can define a placeholder in the model to represent that subgraph or node. Then, use a mutator to replace the placeholder with real modules.
.. code-block:: python
...@@ -51,7 +51,7 @@ Use placehoder to make mutation easier: ``nn.Placeholder``. If you want to mutat
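    # A sketch of the elided snippet, reconstructed from the surrounding text;
    # the option parameters follow the Mnasnet example and are illustrative.
    ph = nn.Placeholder(
        label='mutable_0',
        kernel_size_options=[1, 3, 5],
        n_layer_options=[1, 2, 3, 4],
        exp_ratio_options=[1, 2, 3, 6],
        stride_options=[1, 2],
    )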
``label`` is used by the mutator to identify this placeholder. The other parameters are the information required by the mutator. They can be accessed from ``node.operation.parameters`` as a dict, which can include any information that users want to pass to the user-defined mutator. The complete example code can be found in :githublink:`Mnasnet base model <examples/nas/multi-trial/mnasnet/base_mnasnet.py>`.
Starting an experiment is almost the same as using inline mutation APIs. The only difference is that the applied mutators should be passed to :class:`nni.retiarii.experiment.pytorch.RetiariiExperiment`. Below is a simple example.
.. code-block:: python
...@@ -62,3 +62,51 @@ Starting an experiment is almost the same as using inline mutation APIs. The onl
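    # Sketch of the elided setup lines; `base_model`, `trainer`, `applied_mutators`,
    # and `simple_strategy` are assumed to be defined as in the Mnasnet example.
    exp = RetiariiExperiment(base_model, trainer, applied_mutators, simple_strategy)
    exp_config = RetiariiExeConfig('local')
    exp_config.experiment_name = 'mnasnet_search'
    exp_config.trial_concurrency = 2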
    exp_config.max_trial_number = 10
    exp_config.training_service.use_active_gpu = False
    exp.run(exp_config, 8081)
References
----------
Placeholder
^^^^^^^^^^^
.. autoclass:: nni.retiarii.nn.pytorch.Placeholder
:members:
:noindex:
Mutator
^^^^^^^
.. autoclass:: nni.retiarii.Mutator
:members:
:noindex:
.. autoclass:: nni.retiarii.Sampler
:members:
:noindex:
.. autoclass:: nni.retiarii.InvalidMutation
:members:
:noindex:
Graph
^^^^^
.. autoclass:: nni.retiarii.Model
:members:
:noindex:
.. autoclass:: nni.retiarii.Graph
:members:
:noindex:
.. autoclass:: nni.retiarii.Node
:members:
:noindex:
.. autoclass:: nni.retiarii.Edge
:members:
:noindex:
.. autoclass:: nni.retiarii.Operation
:members:
:noindex:
Retiarii API Reference
======================
nni.retiarii
------------
.. automodule:: nni.retiarii
:imported-members:
:members:
nni.retiarii.codegen
--------------------
.. automodule:: nni.retiarii.codegen
:imported-members:
:members:
nni.retiarii.converter
----------------------
.. automodule:: nni.retiarii.converter
:imported-members:
:members:
nni.retiarii.evaluator
----------------------
.. automodule:: nni.retiarii.evaluator
:imported-members:
:members:
.. automodule:: nni.retiarii.evaluator.pytorch
:imported-members:
:members:
nni.retiarii.execution
----------------------
.. automodule:: nni.retiarii.execution
:imported-members:
:members:
:undoc-members:
nni.retiarii.experiment.pytorch
-------------------------------
.. automodule:: nni.retiarii.experiment.pytorch
:members:
nni.retiarii.nn.pytorch
-----------------------
Please refer to:
* :doc:`construct_space`.
* :doc:`mutator`.
* `torch.nn reference <https://pytorch.org/docs/stable/nn.html>`_.
nni.retiarii.oneshot
--------------------
.. automodule:: nni.retiarii.oneshot
:imported-members:
:members:
nni.retiarii.operation_def
--------------------------
.. automodule:: nni.retiarii.operation_def
:imported-members:
:members:
nni.retiarii.strategy
---------------------
.. automodule:: nni.retiarii.strategy
:imported-members:
:members:
nni.retiarii.utils
------------------
.. automodule:: nni.retiarii.utils
:members:
.. footbibliography::
.. 0b36fb7844fd9cc88c4e74ad2c6b9ece
##########################
Neural Architecture Search
##########################
Automatic neural architecture search (NAS) is playing an increasingly important role in finding better models.
Recent research has proven the feasibility of automatic NAS and has discovered models that surpass manually tuned ones.
Representative works include NASNet, ENAS, DARTS, Network Morphism, and Evolution. Moreover, new innovations keep emerging.
However, implementing NAS algorithms takes considerable effort, and it is hard to reuse the code base of existing algorithms in new ones.
To facilitate NAS innovation (e.g., designing and implementing new NAS models, comparing different NAS models),
an easy-to-use and flexible programming interface is crucial.
Therefore, NNI designed `Retiarii <https://www.usenix.org/system/files/osdi20-zhang_quanlu.pdf>`__, a deep learning framework that supports exploratory training on a neural network model space rather than on a single neural network model.
Retiarii's exploratory training allows users to express various search spaces for *neural architecture search* and *hyper-parameter tuning* in a highly flexible way.
Some common terminologies used in this document:

* *Model search space*: a set of models from which the best model is explored/searched. Sometimes we use *search space* or *model space* for short.
* *Exploration strategy*: the algorithm used to explore a model search space.
* *Model evaluator*: used to train a model and evaluate its performance.

Follow the instructions below to start your journey with Retiarii.
.. toctree::
   :maxdepth: 2

   Overview <NAS/Overview>
   Quick Start <NAS/QuickStart>
   Construct Model Space <NAS/construct_space>
   Multi-trial NAS <NAS/multi_trial_nas>
   One-Shot NAS <NAS/one_shot_nas>
   Hardware-aware NAS <NAS/HardwareAwareNAS>
   NAS Benchmarks <NAS/Benchmarks>
   NAS API Reference <NAS/ApiReference>
/* HPO */
@article{bergstra2011algorithms,
title={Algorithms for hyper-parameter optimization},
author={Bergstra, James and Bardenet, R{\'e}mi and Bengio, Yoshua and K{\'e}gl, Bal{\'a}zs},
journal={Advances in neural information processing systems},
volume={24},
year={2011}
}
/* NAS */
@inproceedings{zoph2017neural,
title={Neural Architecture Search with Reinforcement Learning},
author={Zoph, Barret and Le, Quoc V},
booktitle={International Conference on Learning Representations},
year={2017}
}
@inproceedings{zoph2018learning,
title={Learning transferable architectures for scalable image recognition},
author={Zoph, Barret and Vasudevan, Vijay and Shlens, Jonathon and Le, Quoc V},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
pages={8697--8710},
year={2018}
}
@inproceedings{liu2018darts,
title={DARTS: Differentiable Architecture Search},
author={Liu, Hanxiao and Simonyan, Karen and Yang, Yiming},
booktitle={International Conference on Learning Representations},
year={2018}
}
@inproceedings{cai2018proxylessnas,
title={ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware},
author={Cai, Han and Zhu, Ligeng and Han, Song},
booktitle={International Conference on Learning Representations},
year={2018}
}
@inproceedings{xie2018snas,
title={SNAS: stochastic neural architecture search},
author={Xie, Sirui and Zheng, Hehui and Liu, Chunxiao and Lin, Liang},
booktitle={International Conference on Learning Representations},
year={2018}
}
@inproceedings{pham2018efficient,
title={Efficient neural architecture search via parameters sharing},
author={Pham, Hieu and Guan, Melody and Zoph, Barret and Le, Quoc and Dean, Jeff},
booktitle={International conference on machine learning},
pages={4095--4104},
year={2018},
organization={PMLR}
}
@inproceedings{radosavovic2019network,
title={On network design spaces for visual recognition},
author={Radosavovic, Ilija and Johnson, Justin and Xie, Saining and Lo, Wan-Yen and Doll{\'a}r, Piotr},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={1882--1890},
year={2019}
}
@inproceedings{ying2019bench,
title={Nas-bench-101: Towards reproducible neural architecture search},
author={Ying, Chris and Klein, Aaron and Christiansen, Eric and Real, Esteban and Murphy, Kevin and Hutter, Frank},
booktitle={International Conference on Machine Learning},
pages={7105--7114},
year={2019},
organization={PMLR}
}
@inproceedings{dong2019bench,
title={NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search},
author={Dong, Xuanyi and Yang, Yi},
booktitle={International Conference on Learning Representations},
year={2019}
}
...@@ -7,6 +7,5 @@ Python API Reference
:maxdepth: 1

Auto Tune <autotune_ref>
NAS <NAS/ApiReference>
Compression <Compression/CompressionReference>
Python API <Tutorial/HowToLaunchFromPython>
\ No newline at end of file