@@ -9,7 +9,7 @@ Execution engine is for running Retiarii Experiment. NNI supports three executio
...
* **CGO execution engine** has the same requirements and capabilities as the **Graph-based execution engine**, but further enables cross-model optimizations, which make model space exploration faster.
.. _pure-python-execution-engine:
Pure-python Execution Engine
----------------------------
...
@@ -20,7 +20,7 @@ Remember to add :meth:`nni.retiarii.model_wrapper` decorator outside the whole
...
.. note:: You should always use ``super().__init__()`` instead of ``super(MyNetwork, self).__init__()`` in the PyTorch model, because the latter has issues with the model wrapper.
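The pitfall behind this note can be reproduced without NNI at all. Below is a minimal sketch with a toy class-wrapping decorator (``toy_wrapper`` is a hypothetical stand-in, NOT NNI's actual ``model_wrapper``): because such a decorator rebinds the class name to a subclass, the explicit ``super(MyNetwork, self)`` form resolves against the wrapper and recurses, while the zero-argument form keeps working.

```python
def toy_wrapper(cls):
    # Hypothetical stand-in for a class-wrapping decorator: returns a
    # subclass with the same name, rebinding the name at module scope.
    return type(cls.__name__, (cls,), {})

class Base:
    def __init__(self):
        self.initialized = True

@toy_wrapper
class Good(Base):
    def __init__(self):
        # Zero-argument super() uses the __class__ closure cell, which
        # still points at the original class; rebinding does not affect it.
        super().__init__()

@toy_wrapper
class Bad(Base):
    def __init__(self):
        # ``Bad`` now names the wrapper subclass, so the MRO walk restarts
        # at this very method and never terminates.
        super(Bad, self).__init__()

assert Good().initialized
try:
    Bad()
except RecursionError:
    print("explicit super(Bad, self) recursed")
```

The zero-argument form is therefore the only safe choice whenever the class object may be replaced by a decorator.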
The search space defines which architectures can be represented in principle. Incorporating prior knowledge about typical properties of architectures well-suited for a task can reduce the size of the search space and simplify the search. However, this also introduces a human bias, which may prevent finding novel architectural building blocks that go beyond the current human knowledge. Search space design can be very challenging for beginners, who might not possess the experience to balance the richness and simplicity.
In NNI, we provide a wide range of APIs to build the search space. There are :doc:`high-level APIs <construct_space>`, which enable incorporating human knowledge about what makes a good architecture or search space, and :doc:`low-level APIs <mutator>`, which provide a list of primitives to construct a network operation by operation.
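A high-level primitive can be pictured as a placeholder holding candidate operations, which the exploration strategy later fixes to one concrete choice per trial. The sketch below illustrates that idea only; ``ToyLayerChoice`` is an invented name, not an NNI API.

```python
class ToyLayerChoice:
    """Conceptual placeholder over candidate operations (not NNI's API)."""

    def __init__(self, candidates):
        self.candidates = candidates  # name -> callable operation
        self.chosen = None            # fixed by the strategy for each trial

    def choose(self, name):
        self.chosen = self.candidates[name]

    def __call__(self, x):
        if self.chosen is None:
            raise RuntimeError("strategy has not fixed a choice yet")
        return self.chosen(x)

# Two candidate operations; the "strategy" picks one before the forward pass.
op = ToyLayerChoice({"double": lambda x: x * 2, "identity": lambda x: x})
op.choose("double")
print(op(3))  # -> 6
```

Low-level mutator APIs work at a finer granularity, editing the graph primitive by primitive instead of selecting among pre-declared candidates.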
Exploration strategy
^^^^^^^^^^^^^^^^^^^^
...
@@ -57,7 +57,7 @@ Performance estimation
...
The objective of NAS is typically to find architectures that achieve high predictive performance on unseen data. Performance estimation refers to the process of estimating this performance. The main problem with performance estimation is scalability, i.e., how to run and manage multiple trials simultaneously.
In NNI, this process is standardized with the :doc:`evaluator <evaluator>`, which is responsible for estimating a model's performance. NNI provides quite a few built-in evaluators, ranging from the simplest option, e.g., performing a standard training and validation of the architecture on data, to complex configurations and implementations. Evaluators are run in *trials*, and trials can be spawned onto distributed platforms with our powerful :doc:`training service </experiment/training_service/overview>`.
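The evaluator/trial split can be sketched schematically: an evaluator is a callable that scores one candidate model, and a training service farms such trials out concurrently. All names below are illustrative stand-ins, not NNI's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(model_config):
    # Stand-in for "train + validate on data"; returns a fake accuracy
    # derived from the (hypothetical) architecture parameters.
    return sum(model_config.values()) / 10.0

# Two candidate architectures proposed by some exploration strategy.
candidates = [{"depth": 2, "width": 4}, {"depth": 3, "width": 1}]

# A toy "training service": run both trials concurrently and collect scores.
with ThreadPoolExecutor(max_workers=2) as pool:
    scores = list(pool.map(evaluate, candidates))

print(max(scores))  # -> 0.6
```

Real trials are of course full training jobs dispatched to local or remote machines, but the control flow, i.e., propose candidates, evaluate them in parallel, collect metrics, follows this shape.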
The current architecture search framework in NNI is backed by the research in `Retiarii: A Deep Learning Exploratory-Training Framework <https://www.usenix.org/system/files/osdi20-zhang_quanlu.pdf>`__, and has the following features:
"for model_dict in exp.export_top_models(formatter='dict'):\n    print(model_dict)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The output is a ``json`` object which records the mutation actions of the top model.\nIf users want to output the source code of the top model,\nthey can use the `graph-based execution engine <graph-based-execution-engine>` for the experiment\nby simply adding the following two lines.\n\n"