@@ -125,7 +125,7 @@ More detail example you could see:
Write a more advanced automl algorithm
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The methods above are usually enough to write a general tuner. However, users may also want access to more information, such as intermediate results and trials' state (e.g., the methods in the assessor), in order to build a more powerful automl algorithm. Therefore, we have another concept called ``advisor``, which directly inherits from ``MsgDispatcherBase`` in :githublink:`msg_dispatcher_base.py <nni/runtime/msg_dispatcher_base.py>`.
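
As a rough illustration, a customized advisor might look like the sketch below. The handler names are assumptions based on the ``MsgDispatcherBase`` interface; please check :githublink:`msg_dispatcher_base.py <nni/runtime/msg_dispatcher_base.py>` for the exact methods and signatures before relying on them.

.. code-block:: python

    from nni.runtime.msg_dispatcher_base import MsgDispatcherBase

    class MyAdvisor(MsgDispatcherBase):
        """A skeleton advisor; only the handlers relevant to this sketch are shown."""

        def handle_initialize(self, data):
            # 'data' is assumed to be the search space sent when the experiment starts
            self.search_space = data

        def handle_update_search_space(self, data):
            self.search_space = data

        def handle_request_trial_jobs(self, data):
            # 'data' is assumed to be the number of trials requested;
            # generate and dispatch that many configurations here
            pass

        def handle_report_metric_data(self, data):
            # receives both intermediate and final metrics, which is what
            # makes an advisor more powerful than a plain tuner
            pass

        def handle_trial_end(self, data):
            # called when a trial finishes; useful for early-stopping decisions
            pass
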
HPO Benchmark Example Statistics <hpo_benchmark_stats>
We provide a benchmarking tool to compare the performance of tuners provided by NNI (and users' custom tuners) on different
types of tasks. This tool uses the `automlbenchmark repository <https://github.com/openml/automlbenchmark>`_ to run different *benchmarks* on the NNI *tuners*.
The tool is located in ``examples/trials/benchmarking/automlbenchmark``. This document provides a brief introduction to the tool, its usage, and currently available benchmarks.
@@ -34,7 +34,7 @@ In NNI, there are mainly four types of annotation:
**Arguments**
* **sampling_algo**\ : Sampling algorithm that specifies a search space. Users should replace it with a built-in NNI sampling function whose name consists of an ``nni.`` prefix and a search space type specified in :doc:`SearchSpaceSpec <search_space>`, such as ``choice`` or ``uniform``.
* **name**\ : The name of the variable that the selected value will be assigned to. Note that this argument should match the left-hand side of the following assignment statement.
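
For instance, a tunable learning rate could be annotated as follows (the variable name and candidate values are only illustrative):

.. code-block:: python

    '''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)'''
    learning_rate = 0.1
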
There are 10 types that can be used to express your search space, as follows:
``@nni.report_intermediate_result`` is used to report an intermediate result; its usage is the same as :func:`nni.report_intermediate_result`.
4. Annotate final result
^^^^^^^^^^^^^^^^^^^^^^^^
``'''@nni.report_final_result(metrics)'''``
``@nni.report_final_result`` is used to report the final result of the current trial; its usage is the same as :func:`nni.report_final_result`.
When using annotation, ``searchSpace`` and ``searchSpaceFile`` should not be specified manually.
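
Putting the pieces together, a minimal annotated trial might look like the sketch below; the training and evaluation functions are hypothetical placeholders, and the variable and metric names are only illustrative.

.. code-block:: python

    import random

    def train_one_epoch(lr):
        """Hypothetical training step (placeholder)."""

    def evaluate():
        """Hypothetical evaluation step (placeholder)."""
        return random.random()

    '''@nni.variable(nni.uniform(0.001, 0.1), name=learning_rate)'''
    learning_rate = 0.01

    for epoch in range(10):
        train_one_epoch(learning_rate)
        test_acc = evaluate()
        '''@nni.report_intermediate_result(test_acc)'''
    '''@nni.report_final_result(test_acc)'''
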
* - debug
...
...
@@ -215,25 +210,25 @@ ExperimentConfig
* - tuner
- ``AlgorithmConfig``, optional
- Specify the tuner.
The built-in tuners can be found :doc:`here </hpo/tuners>` and you can follow :doc:`this tutorial </hpo/custom_algorithm>` to customize a new tuner.
* - assessor
- ``AlgorithmConfig``, optional
- Specify the assessor.
The built-in assessors can be found :doc:`here </hpo/assessors>` and you can follow :doc:`this tutorial </hpo/custom_algorithm>` to customize a new assessor.
* - advisor
- ``AlgorithmConfig``, optional
- Specify the advisor.
NNI provides two built-in advisors: :class:`BOHB <nni.algorithms.hpo.bohb_advisor.BOHB>` and :class:`Hyperband <nni.algorithms.hpo.hyperband_advisor.Hyperband>`.
* - trainingService
- ``TrainingServiceConfig``
- Specify the :doc:`training service </experiment/training_service/overview>`.
* - sharedStorage
- ``SharedStorageConfig``, optional
- Configure the shared storage; detailed usage can be found :doc:`here </experiment/training_service/shared_storage>`.
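
The same fields can also be set through the Python ``Experiment`` API, which uses snake_case equivalents of the names above. A minimal sketch, assuming the local training service and the built-in TPE tuner (the command, search space, and values are examples only):

.. code-block:: python

    from nni.experiment import Experiment

    experiment = Experiment('local')
    experiment.config.trial_command = 'python trial.py'        # command that launches one trial
    experiment.config.trial_code_directory = '.'
    experiment.config.search_space = {'lr': {'_type': 'uniform', '_value': [0.001, 0.1]}}
    experiment.config.trial_concurrency = 2
    experiment.config.max_trial_number = 10
    experiment.config.tuner.name = 'TPE'
    experiment.config.tuner.class_args['optimize_mode'] = 'maximize'
    experiment.run(8080)                                        # start the experiment on port 8080
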
AlgorithmConfig
^^^^^^^^^^^^^^^
...
...
@@ -286,8 +281,8 @@ One of the following:
- `AmlConfig`_
- `DlcConfig`_
- `HybridConfig`_
For `Kubeflow <../TrainingService/KubeflowMode.rst>`_, `FrameworkController <../TrainingService/FrameworkControllerMode.rst>`_, and `AdaptDL <../TrainingService/AdaptDLMode.rst>`_ training platforms, it is suggested to use `v1 config schema <../Tutorial/ExperimentConfig.rst>`_ for now.
Detailed usage can be found :doc:`here </experiment/training_service/openpai>`.
.. list-table::
:widths: 10 10 80
...
...
@@ -509,7 +504,7 @@ Detailed usage can be found `here <../TrainingService/PaiMode.rst>`__.
AmlConfig
---------
Detailed usage can be found :doc:`here </experiment/training_service/aml>`.
.. list-table::
:widths: 10 10 80
...
...
@@ -546,7 +541,7 @@ Detailed usage can be found `here <../TrainingService/AMLMode.rst>`__.
DlcConfig
---------
Detailed usage can be found :doc:`here </experiment/training_service/paidlc>`.
.. list-table::
:widths: 10 10 80
...
...
@@ -611,7 +606,7 @@ Detailed usage can be found `here <../TrainingService/DlcMode.rst>`__.
HybridConfig
------------
Currently, only `LocalConfig`_, `RemoteConfig`_, `OpenpaiConfig`_, and `AmlConfig`_ are supported. Detailed usage can be found :doc:`here </experiment/training_service/hybrid>`.
@@ -48,8 +48,7 @@ In this example, the search space is specified by a ``search_space.json`` file a
Benchmark code
^^^^^^^^^^^^^^
Benchmark code should receive a configuration from the NNI manager and report the corresponding benchmark result back. The following NNI APIs are designed for this purpose. In this example, writing operations per second (OPS) is used as the performance metric.
* Use ``nni.get_next_parameter()`` to get the next system configuration.
* Use ``nni.report_final_result(metric)`` to report the benchmark result.
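
A minimal sketch of such benchmark code is shown below; ``run_benchmark`` is a hypothetical placeholder for the real benchmark run, and the returned value stands in for the measured write OPS.

.. code-block:: python

    import random
    import nni

    def run_benchmark(**config):
        """Hypothetical placeholder: run the benchmark under the given configuration
        and return writing operations per second (OPS)."""
        return random.uniform(1e4, 1e5)

    params = nni.get_next_parameter()   # next system configuration chosen by the tuner
    ops = run_benchmark(**params)       # run the benchmark with this configuration
    nni.report_final_result(ops)        # report the OPS metric back to the NNI manager
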