Unverified Commit fb62e80f authored by liuzhe-lz, committed by GitHub

Fix doc links (#4757)

parent dda6a126
......@@ -8,3 +8,4 @@ Hyperparameter Optimization
Implement Custom Tuners and Assessors <custom_algorithm>
Install Custom or 3rd-party Tuners and Assessors <custom_algorithm_installation>
Tuner Benchmark <hpo_benchmark>
Tuner Benchmark Example Statistics <hpo_benchmark_stats>
......@@ -125,7 +125,7 @@ More detail example you could see:
Write a more advanced automl algorithm
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The methods above are usually enough to write a general tuner. However, users may also want more methods, for example, intermediate results, trials' state (e.g., the methods in assessor), in order to have a more powerful automl algorithm. Therefore, we have another concept called ``advisor`` which directly inherits from ``MsgDispatcherBase`` in :githublink:`msg_dispatcher_base.py <nni/runtime/msg_dispatcher_base.py>`. Please refer to `here <CustomizeAdvisor.rst>`__ for how to write a customized advisor.
The methods above are usually enough to write a general tuner. However, users may also want access to more information, such as intermediate results and trial states (e.g., the methods available to an assessor), in order to build a more powerful automl algorithm. Therefore, we have another concept called ``advisor``, which directly inherits from ``MsgDispatcherBase`` in :githublink:`msg_dispatcher_base.py <nni/runtime/msg_dispatcher_base.py>`.
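For reference, a tuner built only from the basic methods can be quite small. The sketch below draws a random ``choice`` value for every variable in the search space (the ``RandomChoiceTuner`` name and the random strategy are ours, purely for illustration):

.. code-block:: python

    import random

    from nni.tuner import Tuner


    class RandomChoiceTuner(Tuner):
        """Toy tuner: samples uniformly from ``choice`` variables in the search space."""

        def update_search_space(self, search_space):
            # Called with the search space at startup, and again if it is updated.
            self.space = search_space

        def generate_parameters(self, parameter_id, **kwargs):
            # Produce one configuration for the trial identified by parameter_id.
            return {
                name: random.choice(spec['_value'])
                for name, spec in self.space.items()
                if spec['_type'] == 'choice'
            }

        def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
            # A toy tuner ignores results; a real algorithm would update its model here.
            pass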
Customize Assessor
------------------
......
HPO Benchmarks
==============
.. toctree::
:hidden:
HPO Benchmark Example Statistics <hpo_benchmark_stats>
We provide a benchmarking tool to compare the performance of tuners provided by NNI (and users' custom tuners) on different
types of tasks. This tool uses the `automlbenchmark repository <https://github.com/openml/automlbenchmark>`_ to run different *benchmarks* on the NNI *tuners*.
The tool is located in ``examples/trials/benchmarking/automlbenchmark``. This document provides a brief introduction to the tool, its usage, and the currently available benchmarks.
......
......@@ -34,7 +34,7 @@ In NNI, there are mainly four types of annotation:
**Arguments**
* **sampling_algo**\ : Sampling algorithm that specifies a search space. User should replace it with a built-in NNI sampling function whose name consists of an ``nni.`` identification and a search space type specified in `SearchSpaceSpec <SearchSpaceSpec.rst>`__ such as ``choice`` or ``uniform``.
* **sampling_algo**\ : Sampling algorithm that specifies a search space. Users should replace it with a built-in NNI sampling function whose name consists of the ``nni.`` prefix and a search space type specified in :doc:`SearchSpaceSpec <search_space>`, such as ``choice`` or ``uniform``.
* **name**\ : The name of the variable that the selected value will be assigned to. Note that this argument should match the left-hand side of the following assignment statement.
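As a concrete illustration, a ``choice`` variable annotation and its default assignment look like this (the variable name ``learning_rate`` and its values are only an example):

.. code-block:: python

    '''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)'''
    learning_rate = 0.1

When annotation is enabled, NNI replaces the assignment below the annotation with the sampled value; without NNI, the code runs unchanged with the default.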
There are 10 types to express your search space as follows:
......@@ -93,11 +93,11 @@ An example here is:
``'''@nni.report_intermediate_result(metrics)'''``
``@nni.report_intermediate_result`` is used to report intermediate result, whose usage is the same as ``nni.report_intermediate_result`` in the doc of `Write a trial run on NNI <../TrialExample/Trials.rst>`__
``@nni.report_intermediate_result`` is used to report intermediate result, whose usage is the same as :func:`nni.report_intermediate_result`.
4. Annotate final result
^^^^^^^^^^^^^^^^^^^^^^^^
``'''@nni.report_final_result(metrics)'''``
``@nni.report_final_result`` is used to report the final result of the current trial, whose usage is the same as ``nni.report_final_result`` in the doc of `Write a trial run on NNI <../TrialExample/Trials.rst>`__
``@nni.report_final_result`` is used to report the final result of the current trial, whose usage is the same as :func:`nni.report_final_result`.
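In practice, the two reporting annotations sit inside the trial's training loop, for example (``max_epochs``, ``train_epoch``, and ``model`` are placeholders, not NNI APIs):

.. code-block:: python

    for epoch in range(max_epochs):
        accuracy = train_epoch(model)  # one epoch of training, returns validation accuracy
        '''@nni.report_intermediate_result(accuracy)'''

    '''@nni.report_final_result(accuracy)'''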
......@@ -20,11 +20,6 @@ A config file is needed when creating an experiment. This document describes the
4. Setting a field to ``None`` or ``null`` is equivalent to not setting the field.
.. contents:: Contents
:local:
:depth: 3
Examples
========
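A minimal local-mode configuration touching the most common fields might look like this (a sketch, not an exhaustive example; adjust paths and commands to your project):

.. code-block:: yaml

    searchSpaceFile: search_space.json
    trialCommand: python trial.py
    trialCodeDirectory: .
    trialGpuNumber: 0
    trialConcurrency: 4
    maxTrialNumber: 100
    tuner:
      name: TPE
      classArgs:
        optimize_mode: maximize
    trainingService:
      platform: local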
......@@ -120,13 +115,13 @@ ExperimentConfig
* - searchSpaceFile
- ``str``, optional
- Path_ to the JSON file containing the search space.
Search space format is determined by tuner. The common format for built-in tuners is documented `here <../Tutorial/SearchSpaceSpec.rst>`__.
Search space format is determined by tuner. The common format for built-in tuners is documented :doc:`here </hpo/search_space>`.
Mutually exclusive with ``searchSpace``.
* - searchSpace
- ``JSON``, optional
- Search space object.
The format is determined by tuner. Common format for built-in tuners is documented `here <../Tutorial/SearchSpaceSpec.rst>`__.
The format is determined by tuner. Common format for built-in tuners is documented :doc:`here </hpo/search_space>`.
Note that ``None`` means "no such field", so an empty search space should be written as ``{}``.
Mutually exclusive with ``searchSpaceFile``.
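For the built-in tuners, the common search space format is a JSON object mapping each variable name to a ``_type``/``_value`` pair, for example:

.. code-block:: json

    {
        "lr": {"_type": "choice", "_value": [0.0001, 0.001, 0.01]},
        "momentum": {"_type": "uniform", "_value": [0, 1]}
    }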
......@@ -151,7 +146,7 @@ ExperimentConfig
- ``int`` or ``None``, optional
- Default: None. This field might have slightly different meanings for various training services,
especially when set to ``0`` or ``None``.
See `training service's document <../training_services.rst>`__ for details.
See :doc:`training service's document </experiment/training_service/overview>` for details.
In local mode, setting the field to ``0`` will prevent trials from accessing the GPU (by setting ``CUDA_VISIBLE_DEVICES`` to an empty value).
When set to ``None``, trials will be created and scheduled as if they did not use GPU,
......@@ -183,7 +178,7 @@ ExperimentConfig
* - useAnnotation
- ``bool``, optional
- Default: ``False``. Enable `annotation <../Tutorial/AnnotationSpec.rst>`__.
- Default: ``False``. Enable :doc:`annotation </hpo/nni_annotation>`.
When using annotation, ``searchSpace`` and ``searchSpaceFile`` should not be specified manually.
* - debug
......@@ -215,25 +210,25 @@ ExperimentConfig
* - tuner
- ``AlgorithmConfig``, optional
- Specify the tuner.
The built-in tuners can be found `here <../builtin_tuner.rst>`__ and you can follow `this tutorial <../Tuner/CustomizeTuner.rst>`__ to customize a new tuner.
The built-in tuners can be found :doc:`here </hpo/tuners>` and you can follow :doc:`this tutorial </hpo/custom_algorithm>` to customize a new tuner.
* - assessor
- ``AlgorithmConfig``, optional
- Specify the assessor.
The built-in assessors can be found `here <../builtin_assessor.rst>`__ and you can follow `this tutorial <../Assessor/CustomizeAssessor.rst>`__ to customize a new assessor.
The built-in assessors can be found :doc:`here </hpo/assessors>` and you can follow :doc:`this tutorial </hpo/custom_algorithm>` to customize a new assessor.
* - advisor
- ``AlgorithmConfig``, optional
- Specify the advisor.
NNI provides two built-in advisors: `BOHB <../Tuner/BohbAdvisor.rst>`__ and `Hyperband <../Tuner/HyperbandAdvisor.rst>`__, and you can follow `this tutorial <../Tuner/CustomizeAdvisor.rst>`__ to customize a new advisor.
NNI provides two built-in advisors: :class:`BOHB <nni.algorithms.hpo.bohb_advisor.BOHB>` and :class:`Hyperband <nni.algorithms.hpo.hyperband_advisor.Hyperband>`.
* - trainingService
- ``TrainingServiceConfig``
- Specify the `training service <../TrainingService/Overview.rst>`__.
- Specify the :doc:`training service </experiment/training_service/overview>`.
* - sharedStorage
- ``SharedStorageConfig``, optional
- Configure the shared storage, detailed usage can be found `here <../Tutorial/HowToUseSharedStorage.rst>`__.
- Configure the shared storage, detailed usage can be found :doc:`here </experiment/training_service/shared_storage>`.
AlgorithmConfig
^^^^^^^^^^^^^^^
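For instance, a built-in tuner and a custom assessor might be specified like this (a sketch; ``MyAssessor`` and its directory are hypothetical names, not part of NNI):

.. code-block:: yaml

    tuner:
      name: TPE
      classArgs:
        optimize_mode: maximize
    assessor:
      codeDirectory: ./my_assessor
      className: my_assessor.MyAssessor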
......@@ -286,8 +281,8 @@ One of the following:
- `AmlConfig`_
- `DlcConfig`_
- `HybridConfig`_
For `Kubeflow <../TrainingService/KubeflowMode.rst>`_, `FrameworkController <../TrainingService/FrameworkControllerMode.rst>`_, and `AdaptDL <../TrainingService/AdaptDLMode.rst>`_ training platforms, it is suggested to use `v1 config schema <../Tutorial/ExperimentConfig.rst>`_ for now.
- :doc:`FrameworkControllerConfig </experiment/training_service/frameworkcontroller>`
- :doc:`KubeflowConfig </experiment/training_service/kubeflow>`
.. _reference-local-config-label:
......@@ -437,7 +432,7 @@ RemoteMachineConfig
OpenpaiConfig
-------------
Detailed usage can be found `here <../TrainingService/PaiMode.rst>`__.
Detailed usage can be found :doc:`here </experiment/training_service/openpai>`.
.. list-table::
:widths: 10 10 80
......@@ -509,7 +504,7 @@ Detailed usage can be found `here <../TrainingService/PaiMode.rst>`__.
AmlConfig
---------
Detailed usage can be found `here <../TrainingService/AMLMode.rst>`__.
Detailed usage can be found :doc:`here </experiment/training_service/aml>`.
.. list-table::
:widths: 10 10 80
......@@ -546,7 +541,7 @@ Detailed usage can be found `here <../TrainingService/AMLMode.rst>`__.
DlcConfig
---------
Detailed usage can be found `here <../TrainingService/DlcMode.rst>`__.
Detailed usage can be found :doc:`here </experiment/training_service/paidlc>`.
.. list-table::
:widths: 10 10 80
......@@ -611,7 +606,7 @@ Detailed usage can be found `here <../TrainingService/DlcMode.rst>`__.
HybridConfig
------------
Currently only support `LocalConfig`_, `RemoteConfig`_, `OpenpaiConfig`_ and `AmlConfig`_ . Detailed usage can be found `here <../TrainingService/HybridMode.rst>`__.
Currently, only `LocalConfig`_, `RemoteConfig`_, `OpenpaiConfig`_, and `AmlConfig`_ are supported. Detailed usage can be found :doc:`here </experiment/training_service/hybrid>`.
SharedStorageConfig
^^^^^^^^^^^^^^^^^^^
......
......@@ -7,16 +7,15 @@ Comparison of Hyperparameter Optimization (HPO) algorithms on several problems.
Hyperparameter Optimization algorithms are listed below:
* `Random Search <../Tuner/BuiltinTuner.rst>`__
* `Grid Search <../Tuner/BuiltinTuner.rst>`__
* `Evolution <../Tuner/BuiltinTuner.rst>`__
* `Anneal <../Tuner/BuiltinTuner.rst>`__
* `Metis <../Tuner/BuiltinTuner.rst>`__
* `TPE <../Tuner/BuiltinTuner.rst>`__
* `SMAC <../Tuner/BuiltinTuner.rst>`__
* `HyperBand <../Tuner/BuiltinTuner.rst>`__
* `BOHB <../Tuner/BuiltinTuner.rst>`__
* :doc:`Random Search </hpo/tuners>`
* :doc:`Grid Search </hpo/tuners>`
* :doc:`Evolution </hpo/tuners>`
* :doc:`Anneal </hpo/tuners>`
* :doc:`Metis </hpo/tuners>`
* :doc:`TPE </hpo/tuners>`
* :doc:`SMAC </hpo/tuners>`
* :doc:`HyperBand </hpo/tuners>`
* :doc:`BOHB </hpo/tuners>`
All algorithms run in the NNI local environment.
......@@ -39,7 +38,7 @@ AutoGBDT Example
Problem Description
^^^^^^^^^^^^^^^^^^^
Nonconvex problem on the hyper-parameter search of `AutoGBDT <../TrialExample/GbdtExample.rst>`__ example.
A nonconvex problem on the hyper-parameter search of the AutoGBDT example.
Search Space
^^^^^^^^^^^^
......
......@@ -48,8 +48,7 @@ In this example, the search space is specified by a ``search_space.json`` file a
Benchmark code
^^^^^^^^^^^^^^
Benchmark code should receive a configuration from NNI manager, and report the corresponding benchmark result back. Following NNI APIs are designed for this purpose. In this example, writing operations per second (OPS) is used as a performance metric. Please refer to `here <Trials.rst>`__ for detailed information.
Benchmark code should receive a configuration from the NNI manager and report the corresponding benchmark result back. The following NNI APIs are designed for this purpose. In this example, write operations per second (OPS) is used as the performance metric.
* Use ``nni.get_next_parameter()`` to get the next system configuration.
* Use ``nni.report_final_result(metric)`` to report the benchmark result.
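A trial sketch wiring the two calls together (``run_benchmark`` stands in for the actual benchmark body and is not an NNI API):

.. code-block:: python

    import nni

    params = nni.get_next_parameter()  # configuration sampled by the tuner
    ops = run_benchmark(**params)      # e.g. measured write operations per second
    nni.report_final_result(ops)       # report the metric back to NNI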
......