"docs/vscode:/vscode.git/clone" did not exist on "78b87dc25aa3cb5eab282354d9b001b90a75cca4"
Unverified Commit 055b42c0 authored by Yuge Zhang, committed by GitHub

More improvements on NAS documentation and cell interface (#4752)

parent 2d8f925b
docs/img/nasnet_cell.png (image replaced: 61.8 KB → 72.7 KB)
......@@ -158,8 +158,6 @@ toctree_check_whitelist = [
'index',
# FIXME: Other exceptions should be correctly handled.
'nas/index',
'nas/benchmarks',
'compression/index',
'compression/pruning',
'compression/quantization',
......
......@@ -15,7 +15,7 @@ NNI Documentation
:hidden:
Hyperparameter Optimization <hpo/index>
Neural Architecture Search <nas/index>
nas/toctree
Model Compression <compression/index>
feature_engineering/toctree
experiment/toctree
......@@ -44,7 +44,7 @@ NNI Documentation
**NNI (Neural Network Intelligence)** is a lightweight but powerful toolkit to help users **automate**:
* :doc:`Hyperparameter Optimization </hpo/overview>`
* :doc:`Neural Architecture Search </nas/index>`
* :doc:`Neural Architecture Search </nas/overview>`
* :doc:`Model Compression </compression/index>`
* :doc:`Feature Engineering </feature_engineering/overview>`
......@@ -159,14 +159,15 @@ NNI makes AutoML techniques plug-and-play
:title: Neural Architecture Search
:link: tutorials/hello_nas
.. code-block:: diff
.. code-block:: python
# define model space
- self.conv2 = nn.Conv2d(32, 64, 3, 1)
+ self.conv2 = nn.LayerChoice([
+ nn.Conv2d(32, 64, 3, 1),
+ DepthwiseSeparableConv(32, 64)
+ ])
class Model(nn.Module):
self.conv2 = nn.LayerChoice([
nn.Conv2d(32, 64, 3, 1),
DepthwiseSeparableConv(32, 64)
])
model_space = Model()
# search strategy + evaluator
strategy = RegularizedEvolution()
evaluator = FunctionalEvaluator(
......@@ -179,7 +180,7 @@ NNI makes AutoML techniques plug-and-play
.. codesnippetcard::
:icon: ../img/thumbnails/one-shot-nas-small.svg
:title: One-shot NAS
:link: nas/index
:link: nas/exploration_strategy
.. code-block::
......
.. e604b6ad83ae8de856b569c841feafea
.. b1421b75629e06cb368f4c02a12a5f7d
###########################
Neural Network Intelligence
......@@ -14,7 +14,7 @@ Neural Network Intelligence
安装 <installation>
教程<examples>
超参调优 <hpo/index>
神经网络架构搜索<nas/index>
神经网络架构搜索<nas/toctree>
模型压缩<compression/index>
特征工程<feature_engineering/toctree>
NNI实验 <experiment/toctree>
......
......@@ -9,4 +9,4 @@ Advanced Usage
mutator
customize_strategy
serialization
benchmarks
benchmarks_toctree
NAS Benchmark
=============
.. toctree::
:hidden:
Example usage of NAS benchmarks </tutorials/nasbench_as_dataset>
.. note:: :doc:`Example usage of NAS benchmarks </tutorials/nasbench_as_dataset>`.
To improve the reproducibility of NAS algorithms and reduce computing resource requirements, researchers have proposed a series of NAS benchmarks such as `NAS-Bench-101 <https://arxiv.org/abs/1902.09635>`__, `NAS-Bench-201 <https://arxiv.org/abs/2001.00326>`__, `NDS <https://arxiv.org/abs/1905.13214>`__, etc. NNI provides a query interface for users to acquire these benchmarks. Within just a few lines of code, researchers are able to evaluate their NAS algorithms easily and fairly by utilizing these benchmarks.
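As a rough illustration of this query-style workflow, here is a hedged, self-contained sketch; the table and accuracies below are made up, not taken from any real benchmark:

```python
import random

# Hypothetical in-memory stand-in for a NAS benchmark: a precomputed
# table mapping an architecture encoding to its validation accuracy.
# Real benchmarks (NAS-Bench-101 etc.) ship such metrics for every
# architecture in their space; the entries below are invented.
BENCHMARK = {
    ("conv3x3", "conv3x3", "skip"): 0.921,
    ("conv3x3", "conv1x1", "skip"): 0.905,
    ("conv1x1", "conv1x1", "none"): 0.874,
}

def query(arch):
    """Return the precomputed accuracy instead of training the model."""
    return BENCHMARK[arch]

def random_search(trials, seed=0):
    """Evaluate a toy search strategy for free by querying the table."""
    rng = random.Random(seed)
    archs = list(BENCHMARK)
    return max(query(rng.choice(archs)) for _ in range(trials))
```

Because every evaluation is a table lookup, two NAS algorithms can be compared on identical ground truth in seconds rather than GPU-days.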
......
NAS Benchmark
=============
.. toctree::
:hidden:
Overview <benchmarks>
Examples </tutorials/nasbench_as_dataset>
......@@ -9,6 +9,8 @@ Execution engine is for running Retiarii Experiment. NNI supports three executio
* **CGO execution engine** has the same requirements and capabilities as the **Graph-based execution engine**, but it further enables cross-model optimizations, which make model space exploration faster.
.. _pure-python-exeuction-engine:
Pure-python Execution Engine
----------------------------
......@@ -18,6 +20,8 @@ Rememeber to add :meth:`nni.retiarii.model_wrapper` decorator outside the whole
.. note:: You should always use ``super().__init__()`` instead of ``super(MyNetwork, self).__init__()`` in the PyTorch model, because the latter has issues with the model wrapper.
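The difference between the two forms can be reproduced in plain Python (a sketch with made-up class names, not NNI code): a wrapper that rebinds a class name to a subclass, as a class decorator may do, breaks the two-argument form but not the zero-argument one.

```python
class Base:
    def __init__(self):
        self.ok = True

class GoodNet(Base):
    def __init__(self):
        super().__init__()  # zero-arg form: bound via the hidden __class__ cell

class BadNet(Base):
    def __init__(self):
        super(BadNet, self).__init__()  # re-resolves the *global* name BadNet

# Simulate a class-rewriting wrapper: rebind each name to a subclass.
_OriginalGood, _OriginalBad = GoodNet, BadNet

class GoodNet(_OriginalGood):
    pass

class BadNet(_OriginalBad):
    pass

def safe_construct(cls):
    """Return True if cls() succeeds, False if it recurses forever."""
    try:
        cls()
        return True
    except RecursionError:
        return False
```

Inside the original ``BadNet.__init__``, the name ``BadNet`` now refers to the wrapper subclass, so ``super(BadNet, self)`` resolves back to the original ``__init__`` and recurses; the zero-argument form keeps pointing at the class where it was written.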
.. _graph-based-exeuction-engine:
Graph-based Execution Engine
----------------------------
......
Neural Architecture Search
==========================
.. toctree::
:hidden:
Quickstart </tutorials/hello_nas>
construct_space
exploration_strategy
evaluator
advanced_usage
Overview
========
.. attention:: NNI's latest NAS support is entirely based on the Retiarii framework. Users who are still on an `early version using NNI NAS v1.0 <https://nni.readthedocs.io/en/v2.2/nas.html>`__ should migrate their work to Retiarii as soon as possible. We plan to remove the legacy NAS framework in the next few releases.
......
Neural Architecture Search
==========================
.. toctree::
:hidden:
overview
Quickstart </tutorials/hello_nas>
construct_space
exploration_strategy
evaluator
advanced_usage
......@@ -62,7 +62,7 @@ This is a core and basic feature of NNI, we provide many popular :doc:`automatic
General NAS Framework
^^^^^^^^^^^^^^^^^^^^^
This NAS framework lets users easily specify candidate neural architectures; for example, one can specify multiple candidate operations (e.g., separable conv, dilated conv) for a single layer and specify possible skip connections. NNI will find the best candidate automatically. The NAS framework also provides a simple interface for another type of user (e.g., NAS algorithm researchers) to implement new NAS algorithms. A detailed description of NAS and its usage can be found :doc:`here <../nas/index>`.
This NAS framework lets users easily specify candidate neural architectures; for example, one can specify multiple candidate operations (e.g., separable conv, dilated conv) for a single layer and specify possible skip connections. NNI will find the best candidate automatically. The NAS framework also provides a simple interface for another type of user (e.g., NAS algorithm researchers) to implement new NAS algorithms. A detailed description of NAS and its usage can be found :doc:`here </nas/overview>`.
NNI supports many one-shot NAS algorithms, such as ENAS and DARTS, through the NNI trial SDK. To use these algorithms, you do not have to start an NNI experiment. Instead, import an algorithm in your trial code and simply run your trial code. If you want to tune the hyperparameters in the algorithms or run multiple instances, you can choose a tuner and start an NNI experiment.
......
......@@ -382,7 +382,7 @@ Implementation
The implementation on NNI is based on the `official implementation <https://github.com/mit-han-lab/ProxylessNAS>`__. The official implementation supports two training approaches: gradient descent and RL-based. Our current implementation on NNI supports the gradient descent training approach; complete support of ProxylessNAS is ongoing.
The official implementation supports different target hardware, including 'mobile', 'cpu', 'gpu8', and 'flops'. In the NNI repo, hardware latency prediction is supported by `Microsoft nn-Meter <https://github.com/microsoft/nn-Meter>`__. nn-Meter is an accurate inference latency predictor for DNN models on diverse edge devices. nn-Meter currently supports four hardware platforms: ``cortexA76cpu_tflite21``, ``adreno640gpu_tflite21``, ``adreno630gpu_tflite21``, and ``myriadvpu_openvino2019r2``. Users can find more information about nn-Meter on its website; more hardware will be supported in the future. More details about applying ``nn-Meter`` can be found `here <./HardwareAwareNAS.rst>`__.
The official implementation supports different target hardware, including 'mobile', 'cpu', 'gpu8', and 'flops'. In the NNI repo, hardware latency prediction is supported by `Microsoft nn-Meter <https://github.com/microsoft/nn-Meter>`__. nn-Meter is an accurate inference latency predictor for DNN models on diverse edge devices. nn-Meter currently supports four hardware platforms: ``cortexA76cpu_tflite21``, ``adreno640gpu_tflite21``, ``adreno630gpu_tflite21``, and ``myriadvpu_openvino2019r2``. Users can find more information about nn-Meter on its website; more hardware will be supported in the future. More details about applying ``nn-Meter`` can be found :doc:`here </nas/hardware_aware_nas>`.
Below we will describe implementation details. Like other one-shot NAS algorithms on NNI, ProxylessNAS is composed of two parts: *search space* and *training approach*. For users to flexibly define their own search space and use built-in ProxylessNAS training approach, please refer to :githublink:`example code <examples/nas/oneshot/proxylessnas>` for a reference.
......
......@@ -58,7 +58,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This example uses two mutation APIs, ``nn.LayerChoice`` and ``nn.ValueChoice``.\n``nn.LayerChoice`` takes a list of candidate modules (two in this example), one will be chosen for each sampled model.\nIt can be used like normal PyTorch module.\n``nn.ValueChoice`` takes a list of candidate values, one will be chosen to take effect for each sampled model.\n\nMore detailed API description and usage can be found :doc:`here </nas/construct_space>`.\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>We are actively enriching the mutation APIs, to facilitate easy construction of model space.\n If the currently supported mutation APIs cannot express your model space,\n please refer to :doc:`this doc </nas/mutator>` for customizing mutators.</p></div>\n\n## Explore the Defined Model Space\n\nThere are basically two exploration approaches: (1) search by evaluating each sampled model independently,\nwhich is the search approach in `multi-trial NAS <multi-trial-nas>`\nand (2) one-shot weight-sharing based search, which is used in one-shot NAS.\nWe demonstrate the first approach in this tutorial. Users can refer to `here <one-shot-nas>` for the second approach.\n\nFirst, users need to pick a proper exploration strategy to explore the defined model space.\nSecond, users need to pick or customize a model evaluator to evaluate the performance of each explored model.\n\n### Pick an exploration strategy\n\nRetiarii supports many :doc:`exploration strategies </nas/exploration_strategy>`.\n\nSimply choosing (i.e., instantiate) an exploration strategy as below.\n\n"
"This example uses two mutation APIs,\n:class:`nn.LayerChoice <nni.retiarii.nn.pytorch.LayerChoice>` and\n:class:`nn.InputChoice <nni.retiarii.nn.pytorch.ValueChoice>`.\n:class:`nn.LayerChoice <nni.retiarii.nn.pytorch.LayerChoice>`\ntakes a list of candidate modules (two in this example), one will be chosen for each sampled model.\nIt can be used like normal PyTorch module.\n:class:`nn.InputChoice <nni.retiarii.nn.pytorch.ValueChoice>` takes a list of candidate values,\none will be chosen to take effect for each sampled model.\n\nMore detailed API description and usage can be found :doc:`here </nas/construct_space>`.\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>We are actively enriching the mutation APIs, to facilitate easy construction of model space.\n If the currently supported mutation APIs cannot express your model space,\n please refer to :doc:`this doc </nas/mutator>` for customizing mutators.</p></div>\n\n## Explore the Defined Model Space\n\nThere are basically two exploration approaches: (1) search by evaluating each sampled model independently,\nwhich is the search approach in `multi-trial NAS <multi-trial-nas>`\nand (2) one-shot weight-sharing based search, which is used in one-shot NAS.\nWe demonstrate the first approach in this tutorial. Users can refer to `here <one-shot-nas>` for the second approach.\n\nFirst, users need to pick a proper exploration strategy to explore the defined model space.\nSecond, users need to pick or customize a model evaluator to evaluate the performance of each explored model.\n\n### Pick an exploration strategy\n\nRetiarii supports many :doc:`exploration strategies </nas/exploration_strategy>`.\n\nSimply choosing (i.e., instantiate) an exploration strategy as below.\n\n"
]
},
{
......@@ -76,7 +76,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Pick or customize a model evaluator\n\nIn the exploration process, the exploration strategy repeatedly generates new models. A model evaluator is for training\nand validating each generated model to obtain the model's performance.\nThe performance is sent to the exploration strategy for the strategy to generate better models.\n\nRetiarii has provided :doc:`built-in model evaluators </nas/evaluator>`, but to start with,\nit is recommended to use ``FunctionalEvaluator``, that is, to wrap your own training and evaluation code with one single function.\nThis function should receive one single model class and uses ``nni.report_final_result`` to report the final score of this model.\n\nAn example here creates a simple evaluator that runs on MNIST dataset, trains for 2 epochs, and reports its validation accuracy.\n\n"
"### Pick or customize a model evaluator\n\nIn the exploration process, the exploration strategy repeatedly generates new models. A model evaluator is for training\nand validating each generated model to obtain the model's performance.\nThe performance is sent to the exploration strategy for the strategy to generate better models.\n\nRetiarii has provided :doc:`built-in model evaluators </nas/evaluator>`, but to start with,\nit is recommended to use :class:`FunctionalEvaluator <nni.retiarii.evaluator.FunctionalEvaluator>`,\nthat is, to wrap your own training and evaluation code with one single function.\nThis function should receive one single model class and uses :func:`nni.report_final_result` to report the final score of this model.\n\nAn example here creates a simple evaluator that runs on MNIST dataset, trains for 2 epochs, and reports its validation accuracy.\n\n"
]
},
{
......@@ -112,7 +112,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The ``train_epoch`` and ``test_epoch`` here can be any customized function, where users can write their own training recipe.\n\nIt is recommended that the :doc:``evaluate_model`` here accepts no additional arguments other than ``model_cls``.\nHowever, in the `advanced tutorial </nas/evaluator>`, we will show how to use additional arguments in case you actually need those.\nIn future, we will support mutation on the arguments of evaluators, which is commonly called \"Hyper-parmeter tuning\".\n\n## Launch an Experiment\n\nAfter all the above are prepared, it is time to start an experiment to do the model search. An example is shown below.\n\n"
"The ``train_epoch`` and ``test_epoch`` here can be any customized function,\nwhere users can write their own training recipe.\n\nIt is recommended that the ``evaluate_model`` here accepts no additional arguments other than ``model_cls``.\nHowever, in the :doc:`advanced tutorial </nas/evaluator>`, we will show how to use additional arguments in case you actually need those.\nIn future, we will support mutation on the arguments of evaluators, which is commonly called \"Hyper-parmeter tuning\".\n\n## Launch an Experiment\n\nAfter all the above are prepared, it is time to start an experiment to do the model search. An example is shown below.\n\n"
]
},
{
......@@ -184,7 +184,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Users can also run Retiarii Experiment with :doc:`different training services </experiment/training_service>`\nbesides ``local`` training service.\n\n## Visualize the Experiment\n\nUsers can visualize their experiment in the same way as visualizing a normal hyper-parameter tuning experiment.\nFor example, open ``localhost:8081`` in your browser, 8081 is the port that you set in ``exp.run``.\nPlease refer to :doc:`here </experiment/webui>` for details.\n\nWe support visualizing models with 3rd-party visualization engines (like `Netron <https://netron.app/>`__).\nThis can be used by clicking ``Visualization`` in detail panel for each trial.\nNote that current visualization is based on `onnx <https://onnx.ai/>`__ ,\nthus visualization is not feasible if the model cannot be exported into onnx.\n\nBuilt-in evaluators (e.g., Classification) will automatically export the model into a file.\nFor your own evaluator, you need to save your file into ``$NNI_OUTPUT_DIR/model.onnx`` to make this work.\nFor instance,\n\n"
"Users can also run Retiarii Experiment with :doc:`different training services </experiment/training_service/overview>`\nbesides ``local`` training service.\n\n## Visualize the Experiment\n\nUsers can visualize their experiment in the same way as visualizing a normal hyper-parameter tuning experiment.\nFor example, open ``localhost:8081`` in your browser, 8081 is the port that you set in ``exp.run``.\nPlease refer to :doc:`here </experiment/web_portal/web_portal>` for details.\n\nWe support visualizing models with 3rd-party visualization engines (like `Netron <https://netron.app/>`__).\nThis can be used by clicking ``Visualization`` in detail panel for each trial.\nNote that current visualization is based on `onnx <https://onnx.ai/>`__ ,\nthus visualization is not feasible if the model cannot be exported into onnx.\n\nBuilt-in evaluators (e.g., Classification) will automatically export the model into a file.\nFor your own evaluator, you need to save your file into ``$NNI_OUTPUT_DIR/model.onnx`` to make this work.\nFor instance,\n\n"
]
},
{
......@@ -202,7 +202,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Relaunch the experiment, and a button is shown on WebUI.\n\n<img src=\"file://../../img/netron_entrance_webui.png\">\n\n## Export Top Models\n\nUsers can export top models after the exploration is done using ``export_top_models``.\n\n"
"Relaunch the experiment, and a button is shown on Web portal.\n\n<img src=\"file://../../img/netron_entrance_webui.png\">\n\n## Export Top Models\n\nUsers can export top models after the exploration is done using ``export_top_models``.\n\n"
]
},
{
......
......@@ -145,10 +145,14 @@ model_space = ModelSpace()
model_space
# %%
# This example uses two mutation APIs, ``nn.LayerChoice`` and ``nn.ValueChoice``.
# ``nn.LayerChoice`` takes a list of candidate modules (two in this example), one will be chosen for each sampled model.
# This example uses two mutation APIs,
# :class:`nn.LayerChoice <nni.retiarii.nn.pytorch.LayerChoice>` and
# :class:`nn.ValueChoice <nni.retiarii.nn.pytorch.ValueChoice>`.
# :class:`nn.LayerChoice <nni.retiarii.nn.pytorch.LayerChoice>`
# takes a list of candidate modules (two in this example), one will be chosen for each sampled model.
# It can be used like normal PyTorch module.
# ``nn.ValueChoice`` takes a list of candidate values, one will be chosen to take effect for each sampled model.
# :class:`nn.ValueChoice <nni.retiarii.nn.pytorch.ValueChoice>` takes a list of candidate values,
# one will be chosen to take effect for each sampled model.
#
# More detailed API description and usage can be found :doc:`here </nas/construct_space>`.
#
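As a rough mental model (plain Python only, names made up — this is not the NNI API): each mutation point holds a list of candidates, and sampling a model means resolving every choice point to one concrete candidate.

```python
import random

class Choice:
    """Toy stand-in for a mutation API: holds candidate values,
    one of which is selected for each sampled model."""
    def __init__(self, candidates):
        self.candidates = candidates

def sample(space, rng):
    """Resolve every Choice in the space to one concrete candidate."""
    return {name: rng.choice(spec.candidates) if isinstance(spec, Choice) else spec
            for name, spec in space.items()}

space = {
    "conv2": Choice(["Conv2d(32, 64, 3, 1)", "DepthwiseSeparableConv(32, 64)"]),
    "dropout": Choice([0.25, 0.5, 0.75]),  # a value-style choice
    "lr": 1e-3,                            # fixed, not searched
}
model = sample(space, random.Random(0))
```

Each call to ``sample`` yields one member of the model space; the number of distinct models is the product of the candidate counts (2 × 3 = 6 here).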
......@@ -188,8 +192,9 @@ search_strategy = strategy.Random(dedup=True) # dedup=False if deduplication is
# The performance is sent to the exploration strategy for the strategy to generate better models.
#
# Retiarii has provided :doc:`built-in model evaluators </nas/evaluator>`, but to start with,
# it is recommended to use ``FunctionalEvaluator``, that is, to wrap your own training and evaluation code with one single function.
# This function should receive one single model class and uses ``nni.report_final_result`` to report the final score of this model.
# it is recommended to use :class:`FunctionalEvaluator <nni.retiarii.evaluator.FunctionalEvaluator>`,
# that is, to wrap your own training and evaluation code with one single function.
# This function should receive one single model class and uses :func:`nni.report_final_result` to report the final score of this model.
#
# An example here creates a simple evaluator that runs on MNIST dataset, trains for 2 epochs, and reports its validation accuracy.
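The shape of that contract can be sketched in plain Python (hypothetical names; the real evaluator function takes only the model class and reports via ``nni.report_final_result``):

```python
def evaluate_model(model_cls, report):
    """Sketch of the functional-evaluator contract: instantiate the
    received model class, train/validate it, and report one final score.
    The ``report`` callback stands in for nni.report_final_result."""
    model = model_cls()
    accuracy = model.mock_train()  # stands in for the train/test epochs
    report(accuracy)

class TinyModel:
    """Hypothetical model whose 'training' returns a fixed accuracy."""
    def mock_train(self):
        return 0.97

results = []
evaluate_model(TinyModel, results.append)
```

The key point is that the evaluator receives a model *class*, not an instance, so the framework controls when and where each sampled model is materialized.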
......@@ -268,10 +273,11 @@ evaluator = FunctionalEvaluator(evaluate_model)
# %%
#
# The ``train_epoch`` and ``test_epoch`` here can be any customized function, where users can write their own training recipe.
# The ``train_epoch`` and ``test_epoch`` here can be any customized function,
# where users can write their own training recipe.
#
# It is recommended that the :doc:``evaluate_model`` here accepts no additional arguments other than ``model_cls``.
# However, in the `advanced tutorial </nas/evaluator>`, we will show how to use additional arguments in case you actually need those.
# It is recommended that the ``evaluate_model`` here accepts no additional arguments other than ``model_cls``.
# However, in the :doc:`advanced tutorial </nas/evaluator>`, we will show how to use additional arguments in case you actually need those.
# In the future, we will support mutation on the arguments of evaluators, which is commonly called "Hyper-parameter tuning".
#
# Launch an Experiment
......@@ -303,7 +309,7 @@ exp_config.training_service.use_active_gpu = True
exp.run(exp_config, 8081)
# %%
# Users can also run Retiarii Experiment with :doc:`different training services </experiment/training_service>`
# Users can also run Retiarii Experiment with :doc:`different training services </experiment/training_service/overview>`
# besides ``local`` training service.
#
# Visualize the Experiment
......@@ -311,7 +317,7 @@ exp.run(exp_config, 8081)
#
# Users can visualize their experiment in the same way as visualizing a normal hyper-parameter tuning experiment.
# For example, open ``localhost:8081`` in your browser, 8081 is the port that you set in ``exp.run``.
# Please refer to :doc:`here </experiment/webui>` for details.
# Please refer to :doc:`here </experiment/web_portal/web_portal>` for details.
#
# We support visualizing models with 3rd-party visualization engines (like `Netron <https://netron.app/>`__).
# This can be used by clicking ``Visualization`` in detail panel for each trial.
......@@ -336,7 +342,7 @@ def evaluate_model_with_visualization(model_cls):
evaluate_model(model_cls)
# %%
# Relaunch the experiment, and a button is shown on WebUI.
# Relaunch the experiment, and a button is shown on Web portal.
#
# .. image:: ../../img/netron_entrance_webui.png
#
......
6b66fe7afb47bb8f9a4124c8083e2930
\ No newline at end of file
be654727f3e5e43571f23dcb9a871abf
\ No newline at end of file
......@@ -205,12 +205,16 @@ This results in the following code:
.. GENERATED FROM PYTHON SOURCE LINES 148-178
.. GENERATED FROM PYTHON SOURCE LINES 148-182
This example uses two mutation APIs, ``nn.LayerChoice`` and ``nn.ValueChoice``.
``nn.LayerChoice`` takes a list of candidate modules (two in this example), one will be chosen for each sampled model.
This example uses two mutation APIs,
:class:`nn.LayerChoice <nni.retiarii.nn.pytorch.LayerChoice>` and
:class:`nn.ValueChoice <nni.retiarii.nn.pytorch.ValueChoice>`.
:class:`nn.LayerChoice <nni.retiarii.nn.pytorch.LayerChoice>`
takes a list of candidate modules (two in this example), one will be chosen for each sampled model.
It can be used like normal PyTorch module.
``nn.ValueChoice`` takes a list of candidate values, one will be chosen to take effect for each sampled model.
:class:`nn.ValueChoice <nni.retiarii.nn.pytorch.ValueChoice>` takes a list of candidate values,
one will be chosen to take effect for each sampled model.
More detailed API description and usage can be found :doc:`here </nas/construct_space>`.
......@@ -238,7 +242,7 @@ Retiarii supports many :doc:`exploration strategies </nas/exploration_strategy>`
Simply choose (i.e., instantiate) an exploration strategy as below.
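The behavior of a random strategy with deduplication can be sketched stdlib-only (all names and the toy space below are made up, not the NNI ``strategy`` API):

```python
import itertools
import random

def random_strategy(space, dedup=True, budget=8, seed=0):
    """Toy random exploration over a discrete space; with dedup=True an
    already-sampled architecture is never issued twice."""
    rng = random.Random(seed)
    all_archs = list(itertools.product(*space.values()))
    seen, sampled = set(), []
    while len(sampled) < budget and len(seen) < len(all_archs):
        arch = rng.choice(all_archs)
        if dedup and arch in seen:
            continue  # duplicate draw: try again
        seen.add(arch)
        sampled.append(dict(zip(space, arch)))
    return sampled

space = {"conv2": ("plain", "depthwise"), "dropout": (0.25, 0.5, 0.75)}
samples = random_strategy(space, budget=6)  # 6 = entire space, deduplicated
```

Deduplication matters for multi-trial NAS because every issued architecture costs a full training run, so re-evaluating a duplicate wastes the budget.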
.. GENERATED FROM PYTHON SOURCE LINES 178-182
.. GENERATED FROM PYTHON SOURCE LINES 182-186
.. code-block:: default
......@@ -256,8 +260,6 @@ Simply choosing (i.e., instantiate) an exploration strategy as below.
.. code-block:: none
[2022-02-28 14:01:11] INFO (hyperopt.utils/MainThread) Failed to load dill, try installing dill via "pip install dill" for enhanced pickling support.
[2022-02-28 14:01:11] INFO (hyperopt.fmin/MainThread) Failed to load dill, try installing dill via "pip install dill" for enhanced pickling support.
/home/yugzhan/miniconda3/envs/cu102/lib/python3.8/site-packages/ray/autoscaler/_private/cli_logger.py:57: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
warnings.warn(
......@@ -265,7 +267,7 @@ Simply choosing (i.e., instantiate) an exploration strategy as below.
.. GENERATED FROM PYTHON SOURCE LINES 183-195
.. GENERATED FROM PYTHON SOURCE LINES 187-200
Pick or customize a model evaluator
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
......@@ -275,12 +277,13 @@ and validating each generated model to obtain the model's performance.
The performance is sent to the exploration strategy for the strategy to generate better models.
Retiarii has provided :doc:`built-in model evaluators </nas/evaluator>`, but to start with,
it is recommended to use ``FunctionalEvaluator``, that is, to wrap your own training and evaluation code with one single function.
This function should receive one single model class and uses ``nni.report_final_result`` to report the final score of this model.
it is recommended to use :class:`FunctionalEvaluator <nni.retiarii.evaluator.FunctionalEvaluator>`,
that is, to wrap your own training and evaluation code with one single function.
This function should receive one single model class and uses :func:`nni.report_final_result` to report the final score of this model.
An example here creates a simple evaluator that runs on MNIST dataset, trains for 2 epochs, and reports its validation accuracy.
.. GENERATED FROM PYTHON SOURCE LINES 195-263
.. GENERATED FROM PYTHON SOURCE LINES 200-268
.. code-block:: default
......@@ -359,11 +362,11 @@ An example here creates a simple evaluator that runs on MNIST dataset, trains fo
.. GENERATED FROM PYTHON SOURCE LINES 264-265
.. GENERATED FROM PYTHON SOURCE LINES 269-270
Create the evaluator
.. GENERATED FROM PYTHON SOURCE LINES 265-269
.. GENERATED FROM PYTHON SOURCE LINES 270-274
.. code-block:: default
......@@ -378,12 +381,13 @@ Create the evaluator
.. GENERATED FROM PYTHON SOURCE LINES 270-280
.. GENERATED FROM PYTHON SOURCE LINES 275-286
The ``train_epoch`` and ``test_epoch`` here can be any customized function, where users can write their own training recipe.
The ``train_epoch`` and ``test_epoch`` here can be any customized function,
where users can write their own training recipe.
It is recommended that the :doc:``evaluate_model`` here accepts no additional arguments other than ``model_cls``.
However, in the `advanced tutorial </nas/evaluator>`, we will show how to use additional arguments in case you actually need those.
It is recommended that the ``evaluate_model`` here accepts no additional arguments other than ``model_cls``.
However, in the :doc:`advanced tutorial </nas/evaluator>`, we will show how to use additional arguments in case you actually need those.
In the future, we will support mutation on the arguments of evaluators, which is commonly called "Hyper-parameter tuning".
Launch an Experiment
......@@ -391,7 +395,7 @@ Launch an Experiment
After all the above are prepared, it is time to start an experiment to do the model search. An example is shown below.
.. GENERATED FROM PYTHON SOURCE LINES 281-287
.. GENERATED FROM PYTHON SOURCE LINES 287-293
.. code-block:: default
......@@ -408,11 +412,11 @@ After all the above are prepared, it is time to start an experiment to do the mo
.. GENERATED FROM PYTHON SOURCE LINES 288-289
.. GENERATED FROM PYTHON SOURCE LINES 294-295
The following configurations control at most how many trials to run in total, and how many to run concurrently.
.. GENERATED FROM PYTHON SOURCE LINES 289-293
.. GENERATED FROM PYTHON SOURCE LINES 295-299
.. code-block:: default
......@@ -427,12 +431,12 @@ The following configurations are useful to control how many trials to run at mos
.. GENERATED FROM PYTHON SOURCE LINES 294-296
.. GENERATED FROM PYTHON SOURCE LINES 300-302
Remember to set the following config if you want to use a GPU.
``use_active_gpu`` should be set true if you wish to use an occupied GPU (possibly running a GUI).
.. GENERATED FROM PYTHON SOURCE LINES 296-300
.. GENERATED FROM PYTHON SOURCE LINES 302-306
.. code-block:: default
......@@ -447,11 +451,11 @@ Remember to set the following config if you want to GPU.
.. GENERATED FROM PYTHON SOURCE LINES 301-302
.. GENERATED FROM PYTHON SOURCE LINES 307-308
Launch the experiment. The experiment should take several minutes to finish on a workstation with 2 GPUs.
.. GENERATED FROM PYTHON SOURCE LINES 302-305
.. GENERATED FROM PYTHON SOURCE LINES 308-311
.. code-block:: default
......@@ -462,31 +466,10 @@ Launch the experiment. The experiment should take several minutes to finish on a
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none
[2022-02-28 14:01:13] INFO (nni.experiment/MainThread) Creating experiment, Experiment ID: dt84p16a
[2022-02-28 14:01:13] INFO (nni.experiment/MainThread) Connecting IPC pipe...
[2022-02-28 14:01:14] INFO (nni.experiment/MainThread) Starting web server...
[2022-02-28 14:01:15] INFO (nni.experiment/MainThread) Setting up...
[2022-02-28 14:01:15] INFO (nni.runtime.msg_dispatcher_base/Thread-3) Dispatcher started
[2022-02-28 14:01:15] INFO (nni.retiarii.experiment.pytorch/MainThread) Web UI URLs: http://127.0.0.1:8081 http://10.190.172.35:8081 http://192.168.49.1:8081 http://172.17.0.1:8081
[2022-02-28 14:01:15] INFO (nni.retiarii.experiment.pytorch/MainThread) Start strategy...
[2022-02-28 14:01:15] INFO (root/MainThread) Successfully update searchSpace.
[2022-02-28 14:01:15] INFO (nni.retiarii.strategy.bruteforce/MainThread) Random search running in fixed size mode. Dedup: on.
[2022-02-28 14:05:16] INFO (nni.retiarii.experiment.pytorch/Thread-4) Stopping experiment, please wait...
[2022-02-28 14:05:16] INFO (nni.retiarii.experiment.pytorch/MainThread) Strategy exit
[2022-02-28 14:05:16] INFO (nni.retiarii.experiment.pytorch/MainThread) Waiting for experiment to become DONE (you can ctrl+c if there is no running trial jobs)...
[2022-02-28 14:05:17] INFO (nni.runtime.msg_dispatcher_base/Thread-3) Dispatcher exiting...
[2022-02-28 14:05:17] INFO (nni.retiarii.experiment.pytorch/Thread-4) Experiment stopped
.. GENERATED FROM PYTHON SOURCE LINES 306-324
.. GENERATED FROM PYTHON SOURCE LINES 312-330
Users can also run Retiarii Experiment with :doc:`different training services </experiment/training_service/overview>`
besides ``local`` training service.
......@@ -507,7 +490,7 @@ Built-in evaluators (e.g., Classification) will automatically export the model i
For your own evaluator, you need to save your file into ``$NNI_OUTPUT_DIR/model.onnx`` to make this work.
For instance,
.. GENERATED FROM PYTHON SOURCE LINES 324-338
.. GENERATED FROM PYTHON SOURCE LINES 330-344
.. code-block:: default
......@@ -532,9 +515,9 @@ For instance,
.. GENERATED FROM PYTHON SOURCE LINES 339-347
.. GENERATED FROM PYTHON SOURCE LINES 345-353
Relaunch the experiment, and a button is shown on webportal.
Relaunch the experiment, and a button is shown on Web portal.
.. image:: ../../img/netron_entrance_webui.png
......@@ -543,7 +526,7 @@ Export Top Models
Users can export top models after the exploration is done using ``export_top_models``.
.. GENERATED FROM PYTHON SOURCE LINES 347-359
.. GENERATED FROM PYTHON SOURCE LINES 353-365
.. code-block:: default
......@@ -569,7 +552,7 @@ Users can export top models after the exploration is done using ``export_top_mod
.. code-block:: none
{'model_1': '0', 'model_2': 0.25, 'model_3': 128}
{'model_1': '0', 'model_2': 0.75, 'model_3': 128}
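Conceptually, ``export_top_models`` ranks finished trials by their reported metric and returns the parameter choices of the best ones. A minimal stdlib-only sketch (the trial records below are made up, in the shape of the output above):

```python
def export_top_models(trials, top_k=1):
    """Toy version: rank finished trials by reported metric and return
    the parameter dicts of the best ones."""
    ranked = sorted(trials, key=lambda t: t["metric"], reverse=True)
    return [t["params"] for t in ranked[:top_k]]

trials = [  # hypothetical finished trials with their final scores
    {"params": {"model_1": "0", "model_2": 0.25, "model_3": 128}, "metric": 0.95},
    {"params": {"model_1": "0", "model_2": 0.75, "model_3": 128}, "metric": 0.97},
]
best = export_top_models(trials)
```

The exported dict maps each mutation point to its chosen candidate, which is enough to rebuild the winning architecture outside the experiment.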
......@@ -577,7 +560,7 @@ Users can export top models after the exploration is done using ``export_top_mod
.. rst-class:: sphx-glr-timing
-**Total running time of the script:** ( 4 minutes 6.818 seconds)
+**Total running time of the script:** ( 2 minutes 15.810 seconds)
.. _sphx_glr_download_tutorials_hello_nas.py:
......
@@ -5,17 +5,17 @@
Computation times
=================
-**01:38.500** total execution time for **tutorials** files:
+**02:15.810** total execution time for **tutorials** files:
+-----------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorials_pruning_quick_start_mnist.py` (``pruning_quick_start_mnist.py``) | 01:38.500 | 0.0 MB |
+-----------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorials_hello_nas.py` (``hello_nas.py``) | 00:00.000 | 0.0 MB |
| :ref:`sphx_glr_tutorials_hello_nas.py` (``hello_nas.py``) | 02:15.810 | 0.0 MB |
+-----------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorials_nasbench_as_dataset.py` (``nasbench_as_dataset.py``) | 00:00.000 | 0.0 MB |
+-----------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorials_pruning_customize.py` (``pruning_customize.py``) | 00:00.000 | 0.0 MB |
+-----------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorials_pruning_quick_start_mnist.py` (``pruning_quick_start_mnist.py``) | 00:00.000 | 0.0 MB |
+-----------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorials_pruning_speedup.py` (``pruning_speedup.py``) | 00:00.000 | 0.0 MB |
+-----------------------------------------------------------------------------------------------------+-----------+--------+
| :ref:`sphx_glr_tutorials_quantization_customize.py` (``quantization_customize.py``) | 00:00.000 | 0.0 MB |
......
@@ -145,10 +145,14 @@ model_space = ModelSpace()
model_space
# %%
-# This example uses two mutation APIs, ``nn.LayerChoice`` and ``nn.ValueChoice``.
-# ``nn.LayerChoice`` takes a list of candidate modules (two in this example), one will be chosen for each sampled model.
+# This example uses two mutation APIs,
+# :class:`nn.LayerChoice <nni.retiarii.nn.pytorch.LayerChoice>` and
+# :class:`nn.ValueChoice <nni.retiarii.nn.pytorch.ValueChoice>`.
+# :class:`nn.LayerChoice <nni.retiarii.nn.pytorch.LayerChoice>`
+# takes a list of candidate modules (two in this example), one will be chosen for each sampled model.
# It can be used like a normal PyTorch module.
-# ``nn.ValueChoice`` takes a list of candidate values, one will be chosen to take effect for each sampled model.
+# :class:`nn.ValueChoice <nni.retiarii.nn.pytorch.ValueChoice>` takes a list of candidate values,
+# one will be chosen to take effect for each sampled model.
#
# More detailed API description and usage can be found :doc:`here </nas/construct_space>`.
#
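As a concrete illustration of what these two APIs express: each sampled model fixes exactly one candidate per choice, so the search space is the cross product of the candidate lists. A dependency-free sketch (the candidate names are illustrative stand-ins, not NNI API calls):

```python
import itertools

# Illustrative stand-ins for the two mutation APIs: a LayerChoice over two
# candidate layers and a ValueChoice over three candidate feature counts.
layer_candidates = ['Conv2d(32, 64, 3, 1)', 'DepthwiseSeparableConv(32, 64)']
feature_candidates = [64, 128, 256]

# Each sampled model is one concrete assignment of (layer, features);
# an exploration strategy walks over this grid.
space = list(itertools.product(layer_candidates, feature_candidates))
print(len(space))  # 2 * 3 = 6 candidate models
```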
@@ -188,8 +192,9 @@ search_strategy = strategy.Random(dedup=True) # dedup=False if deduplication is
# The performance is sent to the exploration strategy for the strategy to generate better models.
#
# Retiarii has provided :doc:`built-in model evaluators </nas/evaluator>`, but to start with,
-# it is recommended to use ``FunctionalEvaluator``, that is, to wrap your own training and evaluation code with one single function.
-# This function should receive one single model class and uses ``nni.report_final_result`` to report the final score of this model.
+# it is recommended to use :class:`FunctionalEvaluator <nni.retiarii.evaluator.FunctionalEvaluator>`,
+# that is, to wrap your own training and evaluation code with one single function.
+# This function should receive one single model class and use :func:`nni.report_final_result` to report the final score of this model.
#
# An example here creates a simple evaluator that runs on MNIST dataset, trains for 2 epochs, and reports its validation accuracy.
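The evaluator contract itself is tiny and can be sketched without NNI or MNIST at all; here ``report_final_result`` and ``DummyModel`` are stand-ins for ``nni.report_final_result`` and a real sampled model:

```python
reported = []

def report_final_result(metric):
    # Stand-in for nni.report_final_result: record the final score.
    reported.append(metric)

def evaluate_model(model_cls):
    # The evaluator receives a model *class*, not an instance.
    model = model_cls()
    accuracy = model.validate()    # real training/validation would go here
    report_final_result(accuracy)  # exactly one final report per trial

class DummyModel:
    def validate(self):
        return 0.97                # pretend validation accuracy

evaluate_model(DummyModel)
print(reported)  # [0.97]
```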
@@ -268,10 +273,11 @@ evaluator = FunctionalEvaluator(evaluate_model)
# %%
#
-# The ``train_epoch`` and ``test_epoch`` here can be any customized function, where users can write their own training recipe.
+# The ``train_epoch`` and ``test_epoch`` here can be any customized function,
+# where users can write their own training recipe.
#
-# It is recommended that the :doc:``evaluate_model`` here accepts no additional arguments other than ``model_cls``.
-# However, in the `advanced tutorial </nas/evaluator>`, we will show how to use additional arguments in case you actually need those.
+# It is recommended that the ``evaluate_model`` here accepts no additional arguments other than ``model_cls``.
+# However, in the :doc:`advanced tutorial </nas/evaluator>`, we will show how to use additional arguments in case you actually need those.
# In the future, we will support mutation on the arguments of evaluators, which is commonly called "Hyper-parameter tuning".
#
# Launch an Experiment
@@ -303,7 +309,7 @@ exp_config.training_service.use_active_gpu = True
exp.run(exp_config, 8081)
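For context, the collapsed code around this hunk configures and launches the experiment roughly as follows. This is a sketch following the tutorial's NNI 2.x Retiarii API; it assumes ``model_space``, ``evaluator``, and ``search_strategy`` from the earlier sections plus an NNI installation, so it is a configuration sketch rather than a standalone script:

```python
from nni.retiarii.experiment.pytorch import RetiariiExperiment, RetiariiExeConfig

# Wire the model space, evaluator, and strategy into one experiment.
exp = RetiariiExperiment(model_space, evaluator, [], search_strategy)

exp_config = RetiariiExeConfig('local')      # run trials on the local machine
exp_config.experiment_name = 'mnist_search'  # illustrative name
exp_config.max_trial_number = 4              # stop after 4 sampled models
exp_config.trial_concurrency = 2             # train 2 models at a time
exp_config.training_service.use_active_gpu = True

# Blocks until the experiment finishes; the Web portal serves on port 8081.
exp.run(exp_config, 8081)
```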
# %%
-# Users can also run Retiarii Experiment with :doc:`different training services </experiment/training_service>`
+# Users can also run Retiarii Experiment with :doc:`different training services </experiment/training_service/overview>`
# besides ``local`` training service.
#
# Visualize the Experiment
@@ -311,7 +317,7 @@ exp.run(exp_config, 8081)
#
# Users can visualize their experiment in the same way as visualizing a normal hyper-parameter tuning experiment.
# For example, open ``localhost:8081`` in your browser, where 8081 is the port that you set in ``exp.run``.
-# Please refer to :doc:`here </experiment/webui>` for details.
+# Please refer to :doc:`here </experiment/web_portal/web_portal>` for details.
#
# We support visualizing models with 3rd-party visualization engines (like `Netron <https://netron.app/>`__).
# This can be used by clicking ``Visualization`` in detail panel for each trial.
@@ -336,7 +342,7 @@ def evaluate_model_with_visualization(model_cls):
evaluate_model(model_cls)
# %%
-# Relaunch the experiment, and a button is shown on WebUI.
+# Relaunch the experiment, and a button is shown on the Web portal.
#
# .. image:: ../../img/netron_entrance_webui.png
#
......
@@ -158,7 +158,7 @@ class Classification(Lightning):
If the ``lightning_module`` has a predefined val_dataloaders method this will be skipped.
trainer_kwargs : dict
Optional keyword arguments passed to trainer. See
-`Lightning documentation <https://pytorch-lightning.readthedocs.io/en/stable/trainer.html>`__ for details.
+`Lightning documentation <https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html>`__ for details.
"""
def __init__(self, criterion: nn.Module = nn.CrossEntropyLoss,
@@ -206,7 +206,7 @@ class Regression(Lightning):
If the ``lightning_module`` has a predefined val_dataloaders method this will be skipped.
trainer_kwargs : dict
Optional keyword arguments passed to trainer. See
-`Lightning documentation <https://pytorch-lightning.readthedocs.io/en/stable/trainer.html>`__ for details.
+`Lightning documentation <https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html>`__ for details.
"""
def __init__(self, criterion: nn.Module = nn.MSELoss,
......