Unverified commit c8e0dd2b, authored by liuzhe-lz, committed by GitHub

Doc improvements (#4710)

parent d32e2994
:orphan:

Tabs Example
============

Please delete this file when it accomplishes its mission.

.. tabs::

   .. tab:: Apples

      Apples are green, or sometimes red.

   .. tab:: Pears

      Pears are green.

   .. tab:: Oranges

      Oranges are orange.

.. tabs::

   .. group-tab:: Linux

      Linux tab content - tab set 1

   .. group-tab:: Mac OSX

      Mac OSX tab content - tab set 1

   .. group-tab:: Windows

      Windows tab content - tab set 1

.. tabs::

   .. group-tab:: Linux

      Linux tab content - tab set 2

   .. group-tab:: Mac OSX

      Mac OSX tab content - tab set 2

   .. group-tab:: Windows

      Windows tab content - tab set 2

.. tabs::

   .. code-tab:: c

      C Main Function

   .. code-tab:: c++

      C++ Main Function

   .. code-tab:: py

      Python Main Function

   .. code-tab:: java

      Java Main Function

   .. code-tab:: julia

      Julia Main Function

   .. code-tab:: fortran

      Fortran Main Function

   .. code-tab:: r R

      R Main Function

.. tabs::

   .. code-tab:: c

      int main(const int argc, const char **argv) {
          return 0;
      }

   .. code-tab:: c++

      int main(const int argc, const char **argv) {
          return 0;
      }

   .. code-tab:: py

      def main():
          return

   .. code-tab:: java

      class Main {
          public static void main(String[] args) {
          }
      }

   .. code-tab:: julia

      function main()
      end

   .. code-tab:: fortran

      PROGRAM main
      END PROGRAM main

   .. code-tab:: r R

      main <- function() {
          return(0)
      }
\ No newline at end of file
@@ -38,11 +38,8 @@ Built-in Assessors

   * - Assessor
     - Brief Introduction of Algorithm

   * - :class:`Median Stop <nni.algorithms.hpo.medianstop_assessor.MedianstopAssessor>`
     - Stop if the hyperparameter set performs worse than median at any step.

   * - :class:`Curve Fitting <nni.algorithms.hpo.curvefitting_assessor.CurvefittingAssessor>`
     - Stop if the learning curve will likely converge to suboptimal result.
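As a reminder of what assessors consume: a trial reports intermediate results (typically once per epoch), and the assessor decides from that history whether to stop the trial early. Below is a minimal sketch of the trial side; the training function is a placeholder for your own code.

.. code-block:: python

    import nni

    params = nni.get_next_parameter()               # hyperparameters chosen by the tuner

    accuracy = 0.0
    for epoch in range(10):
        accuracy = train_one_epoch(params)          # placeholder for your training code
        nni.report_intermediate_result(accuracy)    # assessors judge the trial on these values

    nni.report_final_result(accuracy)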
Customizing Algorithms
======================
Customize Tuner
---------------

NNI provides state-of-the-art tuning algorithms as built-in tuners. It also supports building a tuner by yourself to meet your own tuning demands.
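For orientation, a customized tuner is a subclass of ``nni.tuner.Tuner``. The sketch below only illustrates the interface; the ``DemoTuner`` name and its random-sampling logic are made up for this example.

.. code-block:: python

    import random
    from nni.tuner import Tuner

    class DemoTuner(Tuner):
        """Illustrative only: samples 'choice' and 'uniform' variables at random."""

        def __init__(self):
            self.search_space = {}

        def update_search_space(self, search_space):
            # Called with the search space defined in the experiment configuration.
            self.search_space = search_space

        def generate_parameters(self, parameter_id, **kwargs):
            # Return one hyperparameter set for the trial identified by parameter_id.
            params = {}
            for name, spec in self.search_space.items():
                if spec['_type'] == 'choice':
                    params[name] = random.choice(spec['_value'])
                elif spec['_type'] == 'uniform':
                    low, high = spec['_value']
                    params[name] = random.uniform(low, high)
            return params

        def receive_trial_result(self, parameter_id, parameters, value, **kwargs):
            # Use the trial's final metric to guide future sampling (no-op in this sketch).
            pass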
@@ -125,7 +128,7 @@ Write a more advanced automl algorithm

The methods above are usually enough to write a general tuner. However, users may also want more methods, for example, intermediate results and trials' state (e.g., the methods in assessor), in order to have a more powerful automl algorithm. Therefore, we have another concept called ``advisor``, which directly inherits from ``MsgDispatcherBase`` in :githublink:`msg_dispatcher_base.py <nni/runtime/msg_dispatcher_base.py>`. Please refer to `here <CustomizeAdvisor.rst>`__ for how to write a customized advisor.
Customize Assessor
------------------

NNI also supports building an assessor by yourself to meet your own tuning demands.
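For orientation, a customized assessor is a subclass of ``nni.assessor.Assessor``. The sketch below only illustrates the interface; the ``DemoAssessor`` name and its threshold rule are made up for this example.

.. code-block:: python

    from nni.assessor import Assessor, AssessResult

    class DemoAssessor(Assessor):
        """Illustrative only: stops a trial whose recent intermediate result is too low."""

        def assess_trial(self, trial_job_id, trial_history):
            # trial_history holds the intermediate results reported so far.
            # Return AssessResult.Bad to early-stop the trial, AssessResult.Good to keep it.
            if len(trial_history) >= 3 and trial_history[-1] < 0.1:
                return AssessResult.Bad
            return AssessResult.Good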
......
How to register customized algorithms as builtin tuners, assessors and advisors
================================================================================

Overview
--------

NNI provides many :doc:`builtin tuners <tuners>` and :doc:`assessors <assessors>` that can be used directly for hyperparameter optimization, and extra algorithms can be registered via ``nnictl algo register --meta <path_to_meta_file>`` after NNI is installed. You can check the builtin algorithms via the ``nnictl algo list`` command.

NNI also provides the ability to build your own customized tuners, advisors and assessors. To use a customized algorithm, users can simply follow the spec in the experiment config file to properly reference the algorithm, as illustrated in the tutorial on :doc:`customized algorithms <custom_algorithm>`.

NNI also allows users to install a customized algorithm as a builtin algorithm, so that it can be used the same way as NNI builtin tuners/advisors/assessors. More importantly, this makes it much easier to share or distribute an implemented algorithm to others. Once customized tuners/advisors/assessors are installed into NNI, you can use them in your experiment configuration file the same way as builtin algorithms. For example, if you built a customized tuner and installed it into NNI under the builtin name ``mytuner``, you can use this tuner in your configuration file like below:
@@ -18,19 +15,15 @@ NNI also allows users to install the customized algorithm as a builtin algorithm

   tuner:
     builtinTunerName: mytuner
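If you prefer launching experiments from Python, the registered name can be used there as well. Below is a minimal sketch assuming the ``nni.experiment`` API; the trial command, code directory and search space are placeholders.

.. code-block:: python

    from nni.experiment import Experiment

    experiment = Experiment('local')
    experiment.config.trial_command = 'python trial.py'        # placeholder trial command
    experiment.config.trial_code_directory = '.'
    experiment.config.search_space = {
        'learning_rate': {'_type': 'loguniform', '_value': [0.0001, 0.1]},
    }
    experiment.config.tuner.name = 'mytuner'                   # the builtin name you registered
    experiment.config.max_trial_number = 10
    experiment.config.trial_concurrency = 1
    experiment.run(8080)                                       # web portal on http://localhost:8080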
Register customized algorithms like builtin tuners, assessors and advisors
---------------------------------------------------------------------------

You can follow the steps below to build a customized tuner/assessor/advisor and register it into NNI as a builtin algorithm.

1. Create a customized tuner/assessor/advisor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Reference the following instruction: :doc:`custom_algorithm`

2. (Optional) Create a validator to validate classArgs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -166,7 +159,8 @@ For example:

Porting customized algorithms from v1.x to v2.x
-----------------------------------------------

All that needs to be modified is to delete the ``NNI Package :: tuner`` metadata in ``setup.py`` and add a meta file as mentioned in `4. Prepare meta file`_.
Then you can follow `Register customized algorithms like builtin tuners, assessors and advisors`_ to register your customized algorithms.

Example: Register a customized tuner as a builtin tuner
--------------------------------------------------------
......
@@ -21,5 +21,4 @@ Auto hyperparameter optimization (HPO), or auto tuning, is one of the key featur

   Search Space <search_space>
   Tuners <tuners>
   Assessors <assessors>
   Advanced Usage <advanced_toctree.rst>
@@ -34,15 +34,17 @@ Following code snippet demonstrates a naive HPO process:

You may have noticed that the example will train 4×10×3=120 models in total.
Since it consumes so many computing resources, you may want to:

1. :ref:`Find the best hyperparameter set with fewer iterations. <hpo-overview-tuners>`
2. :ref:`Train the models on distributed platforms. <hpo-overview-platforms>`
3. :ref:`Have a portal to monitor and control the process. <hpo-overview-portal>`

NNI will do them for you.

Key Features of NNI HPO
-----------------------
.. _hpo-overview-tuners:
Tuning Algorithms
^^^^^^^^^^^^^^^^^
@@ -61,6 +63,8 @@ RL based algorithms like PPO, and much more.

Main article: :doc:`tuners`
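To give a concrete flavour, selecting a tuner with the Python experiment API looks roughly like the sketch below; ``optimize_mode`` is one common ``class_args`` entry, and the remaining experiment configuration is omitted here.

.. code-block:: python

    from nni.experiment import Experiment

    experiment = Experiment('local')
    experiment.config.tuner.name = 'TPE'                       # any built-in tuner name works here
    experiment.config.tuner.class_args = {'optimize_mode': 'maximize'}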
.. _hpo-overview-platforms:
Training Platforms
^^^^^^^^^^^^^^^^^^
@@ -75,6 +79,8 @@ Kubernetes-based clusters, AzureML service, and much more.

Main article: :doc:`/experiment/training_service`
.. _hpo-overview-portal:
Web Portal
^^^^^^^^^^

@@ -101,96 +107,6 @@ After you are familiar with basic usage, you can explore more HPO features:

* :doc:`Use command line tool to create and manage experiments (nnictl) </reference/nnictl>`
* :doc:`Early stop non-optimal models (assessor) <assessors>`
* :doc:`TensorBoard integration </experiment/tensorboard>`
* :doc:`Implement your own algorithm <custom_algorithm>`
* :doc:`Benchmark tuners <hpo_benchmark>`
Built-in Algorithms
-------------------
Tuning Algorithms
^^^^^^^^^^^^^^^^^
Main article: :doc:`tuners`
.. list-table::
:header-rows: 1
:widths: auto
* - Name
- Category
- Brief Description
* - :class:`Random <nni.algorithms.hpo.random_tuner.RandomTuner>`
- Basic
- Naive random search.
* - :class:`GridSearch <nni.algorithms.hpo.gridsearch_tuner.GridSearchTuner>`
- Basic
- Brute-force search.
* - :class:`TPE <nni.algorithms.hpo.tpe_tuner.TpeTuner>`
- Bayesian
- Tree-structured Parzen Estimator.
* - :class:`Anneal <nni.algorithms.hpo.hyperopt_tuner.HyperoptTuner>`
- Classic
- Simulated annealing algorithm.
* - :class:`Evolution <nni.algorithms.hpo.evolution_tuner.EvolutionTuner>`
- Classic
- Naive evolution algorithm.
* - :class:`SMAC <nni.algorithms.hpo.smac_tuner.SMACTuner>`
- Bayesian
- Sequential Model-based optimization for general Algorithm Configuration.
* - :class:`Hyperband <nni.algorithms.hpo.hyperband_advisor.Hyperband>`
- Advanced
- Evaluate more hyperparameter sets by adaptively allocating resources.
* - :class:`MetisTuner <nni.algorithms.hpo.metis_tuner.MetisTuner>`
- Bayesian
- Robustly optimizing tail latencies of cloud systems.
* - :class:`BOHB <nni.algorithms.hpo.bohb_advisor.BOHB>`
- Advanced
- Bayesian Optimization with HyperBand.
* - :class:`GPTuner <nni.algorithms.hpo.gp_tuner.GPTuner>`
- Bayesian
- Gaussian Process.
* - :class:`PBTTuner <nni.algorithms.hpo.pbt_tuner.PBTTuner>`
- Advanced
- Population Based Training of neural networks.
* - :class:`DNGOTuner <nni.algorithms.hpo.dngo_tuner.DNGOTuner>`
- Bayesian
- Deep Networks for Global Optimization.
* - :class:`PPOTuner <nni.algorithms.hpo.ppo_tuner.PPOTuner>`
- RL
- Proximal Policy Optimization.
* - :class:`BatchTuner <nni.algorithms.hpo.batch_tuner.BatchTuner>`
- Basic
- Manually specify hyperparameter sets.
Early Stopping
^^^^^^^^^^^^^^
Main article: :doc:`assessors`
.. list-table::
:header-rows: 1
:widths: auto
* - Name
- Brief Description
* - :class:`Medianstop <nni.algorithms.hpo.medianstop_assessor.MedianstopAssessor>`
- Stop if the hyperparameter set performs worse than median at any step.
* - :class:`Curvefitting <nni.algorithms.hpo.curvefitting_assessor.CurvefittingAssessor>`
- Stop if the learning curve will likely converge to suboptimal result.
.. 9b97cac44e07efbd393a2ab21f247c95
Hyperparameter Optimization
===========================

Automatic hyperparameter optimization (HPO), or auto tuning, is one of the main features of NNI.

Introduction to HPO
-------------------

In machine learning, the parameters that control the learning process are called "hyperparameters", and the problem of choosing the optimal hyperparameter combination for a machine learning algorithm is called "hyperparameter optimization".

The following code snippet demonstrates a naive HPO process:

.. code-block:: python

    best_hyperparameters = None
    best_accuracy = 0

    for learning_rate in [0.1, 0.01, 0.001, 0.0001]:
        for momentum in [i / 10 for i in range(10)]:
            for activation_type in ['relu', 'tanh', 'sigmoid']:
                model = build_model(activation_type)
                train_model(model, learning_rate, momentum)
                accuracy = evaluate_model(model)

                if accuracy > best_accuracy:
                    best_accuracy = accuracy
                    best_hyperparameters = (learning_rate, momentum, activation_type)

    print('Best hyperparameters:', best_hyperparameters)

As you can see, this HPO code trains 4×10×3=120 models in total and consumes a large amount of computing resources, so you may want to:

1. :ref:`Find the best hyperparameter combination with fewer trials <zh-hpo-overview-tuners>`
2. :ref:`Train the models on distributed platforms <zh-hpo-overview-platforms>`
3. :ref:`Use a web portal to monitor and control the tuning process <zh-hpo-overview-portal>`

NNI can meet all of these needs.

Key Features of NNI HPO
-----------------------

.. _zh-hpo-overview-tuners:

Tuning Algorithms
^^^^^^^^^^^^^^^^^

NNI uses tuning algorithms, called "tuners", to find the optimal hyperparameter combination faster.
A tuning algorithm decides which hyperparameter combinations to run and evaluate, and in what order to evaluate them.
An efficient algorithm can use the results of already evaluated combinations to predict where the optimal values lie, thereby reducing the number of evaluations needed to find the best hyperparameters.
The example above evaluates all possible combinations in a fixed order, ignoring the evaluation results; this naive approach is called "grid search".

NNI has built-in support for many popular tuning algorithms, including naive algorithms such as random search and grid search, Bayesian-optimization algorithms such as TPE and SMAC, reinforcement learning algorithms such as PPO, and more.

Main article: :doc:`tuners`

.. _zh-hpo-overview-platforms:

Training Platforms
^^^^^^^^^^^^^^^^^^

If you do not plan to use a distributed training platform, you can run NNI HPO directly on your own machine, just like an ordinary Python library.

If you want to use more computing resources to speed up tuning, you can use NNI's built-in training platform integrations, ranging from a simple SSH server to a scalable Kubernetes cluster.

Main article: :doc:`/experiment/training_service`

.. _zh-hpo-overview-portal:

Web Portal
^^^^^^^^^^

You can use NNI's web portal to monitor HPO experiments. It shows experiment progress in real time and supports visualizing hyperparameter performance, manually customizing hyperparameter values, managing multiple experiments at the same time, and much more.

Main article: :doc:`/experiment/web_portal`

.. image:: ../../static/img/webui.gif
   :width: 100%

Tutorials
---------

To get started with NNI HPO, we provide the following tutorials; you can choose the machine learning framework you are most familiar with:

* :doc:`HPO tutorial with PyTorch </tutorials/hpo_quickstart_pytorch/main>`
* :doc:`HPO tutorial with TensorFlow </tutorials/hpo_quickstart_tensorflow/main>`

Extra Features
--------------

After you are familiar with the basic usage of NNI HPO, you can explore more features:

* :doc:`Use command line tool to create and manage experiments (nnictl) </reference/nnictl>`
* :doc:`Early stop non-optimal models (assessor) <assessors>`
* :doc:`TensorBoard integration </experiment/tensorboard>`
* :doc:`Implement your own algorithm <custom_algorithm>`
* :doc:`Benchmark tuners <hpo_benchmark>`
Search Space
============

Overview
--------

In NNI, the tuner will sample hyperparameters according to the search space.

To define a search space, users should define the name of each variable, the type of its sampling strategy and the strategy's parameters.

* An example of a search space definition in JSON format is as follows:

.. code-block:: json

        "learning_rate": {"_type": "uniform", "_value": [0.0001, 0.1]}
    }

Take the first line as an example.
``dropout_rate`` is defined as a variable whose prior distribution is a uniform distribution with a range from ``0.1`` to ``0.5``.
.. attention::

    The available sampling strategies within a search space depend on the tuner you want to use.
    We list the supported types for each built-in tuner :ref:`below <hpo-space-support>`.
    For a customized tuner, you don't have to follow our convention and you will have the flexibility to define any type you want.
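The same definition can also be written as a plain Python dict when an experiment is launched from Python. Below is a minimal sketch, assuming the ``nni.experiment`` API:

.. code-block:: python

    from nni.experiment import Experiment

    search_space = {
        "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
        "learning_rate": {"_type": "uniform", "_value": [0.0001, 0.1]},
    }

    experiment = Experiment('local')
    experiment.config.search_space = search_space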
Types
-----

All types of sampling strategies and their parameters are listed here:

choice
^^^^^^

.. code-block:: python

    {"_type": "choice", "_value": options}

* The variable's value is one of the options. Here ``options`` should be a list of **numbers** or a list of **strings**. Using arbitrary objects as members of this list (like sublists, a mixture of numbers and strings, or null values) should work in most cases, but may trigger undefined behaviors.
* ``options`` can also be a nested sub-search-space. This sub-search-space takes effect only when the corresponding element is chosen, and the variables in it can be seen as conditional variables. Here is a simple :githublink:`example of nested search space definition <examples/trials/mnist-nested-search-space/search_space.json>`. If an element in the options list is a dict, it is a sub-search-space, and for our built-in tuners you have to add a ``_name`` key in this dict, which helps you to identify which element is chosen. Accordingly, here is a :githublink:`sample <examples/trials/mnist-nested-search-space/sample.json>` which users can get from nni with nested search space definition. See the table below for the tuners which support nested search spaces.
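For illustration, a nested ``choice`` could look like the sketch below; the variable names are made up, and each dict option is a conditional sub-search-space identified by its ``_name``:

.. code-block:: python

    {
        "optimizer": {"_type": "choice", "_value": [
            {"_name": "sgd", "momentum": {"_type": "uniform", "_value": [0.0, 0.99]}},
            {"_name": "adam", "beta1": {"_type": "uniform", "_value": [0.9, 0.999]}}
        ]}
    }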
randint
^^^^^^^

.. code-block:: python

    {"_type": "randint", "_value": [lower, upper]}

* Choosing a random integer between ``lower`` (inclusive) and ``upper`` (exclusive).
* Note: Different tuners may interpret ``randint`` differently. Some (e.g., TPE, GridSearch) treat integers from lower to upper as unordered ones, while others respect the ordering (e.g., SMAC). If you want all the tuners to respect the ordering, please use ``quniform`` with ``q=1``.
uniform
^^^^^^^

.. code-block:: python

    {"_type": "uniform", "_value": [low, high]}

* The variable value is uniformly sampled between low and high.
* When optimizing, this variable is constrained to a two-sided interval.
quniform
^^^^^^^^

.. code-block:: python

    {"_type": "quniform", "_value": [low, high, q]}

* The variable value is determined using ``clip(round(uniform(low, high) / q) * q, low, high)``, where the clip operation is used to constrain the generated value within the bounds. For example, for ``_value`` specified as [0, 10, 2.5], possible values are [0, 2.5, 5.0, 7.5, 10.0]; for ``_value`` specified as [2, 10, 5], possible values are [2, 5, 10].
* Suitable for a discrete value with respect to which the objective is still somewhat "smooth", but which should be bounded both above and below. If you want to uniformly choose an integer from a range [low, high], you can write ``_value`` like this: ``[low, high, 1]``.
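As a quick sanity check of that formula, the sampling rule can be mimicked in plain Python; this is a sketch only, not NNI's actual implementation:

.. code-block:: python

    import random

    def sample_quniform(low, high, q):
        # Mimics clip(round(uniform(low, high) / q) * q, low, high).
        value = round(random.uniform(low, high) / q) * q
        return min(max(value, low), high)

    print(sample_quniform(0, 10, 2.5))  # one of 0, 2.5, 5.0, 7.5, 10.0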
loguniform
^^^^^^^^^^

.. code-block:: python

    {"_type": "loguniform", "_value": [low, high]}

* The variable value is drawn from a range [low, high] according to a loguniform distribution like exp(uniform(log(low), log(high))), so that the logarithm of the return value is uniformly distributed.
* When optimizing, this variable is constrained to be positive.
qloguniform
^^^^^^^^^^^

.. code-block:: python

    {"_type": "qloguniform", "_value": [low, high, q]}

* The variable value is determined using ``clip(round(loguniform(low, high) / q) * q, low, high)``, where the clip operation is used to constrain the generated value within the bounds.
* Suitable for a discrete variable with respect to which the objective is "smooth" and gets smoother with the size of the value, but which should be bounded both above and below.
normal
^^^^^^

.. code-block:: python

    {"_type": "normal", "_value": [mu, sigma]}

* The variable value is a real value that's normally-distributed with mean mu and standard deviation sigma. When optimizing, this is an unconstrained variable.

qnormal
^^^^^^^

.. code-block:: python

    {"_type": "qnormal", "_value": [mu, sigma, q]}

* The variable value is determined using ``round(normal(mu, sigma) / q) * q``.
* Suitable for a discrete variable that probably takes a value around mu, but is fundamentally unbounded.
lognormal
^^^^^^^^^

.. code-block:: python

    {"_type": "lognormal", "_value": [mu, sigma]}

* The variable value is drawn according to ``exp(normal(mu, sigma))`` so that the logarithm of the return value is normally distributed. When optimizing, this variable is constrained to be positive.

qlognormal
^^^^^^^^^^

.. code-block:: python

    {"_type": "qlognormal", "_value": [mu, sigma, q]}

* The variable value is determined using ``round(exp(normal(mu, sigma)) / q) * q``.
* Suitable for a discrete variable with respect to which the objective is smooth and gets smoother with the size of the variable, which is bounded from one side.
.. _hpo-space-support:
Search Space Types Supported by Each Tuner
------------------------------------------

.. list-table::
   :header-rows: 1
   :widths: auto

   * - Tuner
     - choice
     - choice(nested)
     - randint
     - uniform
     - quniform
     - loguniform
     - qloguniform
     - normal
     - qnormal
     - lognormal
     - qlognormal

   * - :class:`TPE <nni.algorithms.hpo.tpe_tuner.TpeTuner>`
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓

   * - :class:`Random <nni.algorithms.hpo.random_tuner.RandomTuner>`
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓

   * - :class:`Grid Search <nni.algorithms.hpo.gridsearch_tuner.GridSearchTuner>`
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓

   * - :class:`Anneal <nni.algorithms.hpo.hyperopt_tuner.HyperoptTuner>`
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓

   * - :class:`Evolution <nni.algorithms.hpo.evolution_tuner.EvolutionTuner>`
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓

   * - :class:`SMAC <nni.algorithms.hpo.smac_tuner.SMACTuner>`
     - ✓
     -
     - ✓
     - ✓
     - ✓
     - ✓
     -
     -
     -
     -
     -

   * - :class:`Batch <nni.algorithms.hpo.batch_tuner.BatchTuner>`
     - ✓
     -
     -
     -
     -
     -
     -
     -
     -
     -
     -

   * - :class:`Hyperband <nni.algorithms.hpo.hyperband_advisor.Hyperband>`
     - ✓
     -
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓

   * - :class:`Metis <nni.algorithms.hpo.metis_tuner.MetisTuner>`
     - ✓
     -
     - ✓
     - ✓
     - ✓
     -
     -
     -
     -
     -
     -

   * - :class:`BOHB <nni.algorithms.hpo.bohb_advisor.BOHB>`
     - ✓
     -
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓

   * - :class:`GP <nni.algorithms.hpo.gp_tuner.GPTuner>`
     - ✓
     -
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     -
     -
     -

   * - :class:`PBT <nni.algorithms.hpo.pbt_tuner.PBTTuner>`
     - ✓
     -
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓

   * - :class:`DNGO <nni.algorithms.hpo.dngo_tuner.DNGOTuner>`
     - ✓
     -
     - ✓
     - ✓
     - ✓
     - ✓
     - ✓
     -
     -
     -
     -
@@ -260,12 +329,12 @@ Search Space Types Supported by Each Tuner

Known Limitations:

* GP Tuner, Metis Tuner and DNGO Tuner support only **numerical values** in the search space
  (``choice`` type values can be non-numerical with other tuners, e.g. string values).
  Both GP Tuner and Metis Tuner use a Gaussian Process Regressor (GPR).
  GPR makes predictions based on a kernel function and the "distance" between different points,
  and it is hard to get the true distance between non-numerical values.

* Note that for nested search spaces:

  * Only TPE/Random/Grid Search/Anneal/Evolution tuners support nested search space.
How to Use TensorBoard within the WebUI
=======================================

Since NNI v2.2 you can launch a TensorBoard process across one or multiple trials from the WebUI. For now this feature supports the local training service and reuse-mode training services with shared storage; more scenarios will be supported in later NNI versions.
Preparation
-----------

Make sure TensorBoard is installed in your environment. If you have never used TensorBoard, here are getting-started tutorials for your reference: `TensorBoard with TensorFlow <https://www.tensorflow.org/tensorboard/get_started>`__ and `TensorBoard with PyTorch <https://pytorch.org/tutorials/recipes/recipes/tensorboard_with_pytorch.html>`__.
Use the WebUI to Launch TensorBoard
-----------------------------------
1. Save Logs
^^^^^^^^^^^^

NNI automatically uses the ``tensorboard`` subfolder under each trial's output folder as the TensorBoard logdir, so in the trial's source code you need to save the TensorBoard logs under ``NNI_OUTPUT_DIR/tensorboard``. This log path can be joined as:
.. code-block:: python
log_dir = os.path.join(os.environ["NNI_OUTPUT_DIR"], 'tensorboard')
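For example, with PyTorch you might write the logs like this (a sketch assuming ``torch.utils.tensorboard``; the training step is a placeholder):

.. code-block:: python

    import os
    from torch.utils.tensorboard import SummaryWriter

    log_dir = os.path.join(os.environ['NNI_OUTPUT_DIR'], 'tensorboard')
    writer = SummaryWriter(log_dir)

    for step in range(100):
        loss = train_one_step()                 # placeholder for your training step
        writer.add_scalar('train/loss', loss, step)

    writer.close()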
2. Launch Tensorboard
^^^^^^^^^^^^^^^^^^^^^
As with the compare feature, first select the trials you want to combine for TensorBoard, then click the ``Tensorboard`` button.
.. image:: ../../img/Tensorboard_1.png
:target: ../../img/Tensorboard_1.png
:alt:
After clicking the ``OK`` button in the pop-up box, you will be taken to the TensorBoard portal.
.. image:: ../../img/Tensorboard_2.png
:target: ../../img/Tensorboard_2.png
:alt:
You can see the ``SequenceID-TrialID`` on the tensorboard portal.
.. image:: ../../img/Tensorboard_3.png
:target: ../../img/Tensorboard_3.png
:alt:
3. Stop All
^^^^^^^^^^^

If you want to reopen a portal you have already launched, click its TensorBoard id. If you don't need TensorBoard anymore, click the ``Stop all tensorboard`` button.
.. image:: ../../img/Tensorboard_4.png
:target: ../../img/Tensorboard_4.png
:alt:
@@ -49,9 +49,11 @@ Built-in Tuners

   :widths: auto

   * - Tuner
     - Category
     - Brief Introduction

   * - :class:`TPE <nni.algorithms.hpo.tpe_tuner.TpeTuner>`
     - Bayesian
     - Tree-structured Parzen Estimator, a classic Bayesian optimization algorithm.
       (`paper <https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf>`__)

@@ -61,44 +63,56 @@ Built-in Tuners

       The drawback is that TPE cannot discover relationships between different hyperparameters.

   * - :class:`Random <nni.algorithms.hpo.random_tuner.RandomTuner>`
     - Basic
     - Naive random search, the baseline. It supports all search space types.

   * - :class:`Grid Search <nni.algorithms.hpo.gridsearch_tuner.GridSearchTuner>`
     - Basic
     - Divides the search space into an evenly spaced grid, and performs brute-force traverse. Another baseline.
       It supports all search space types.
       Recommended when the search space is small, and when you want to find the strictly optimal hyperparameters.

   * - :class:`Anneal <nni.algorithms.hpo.hyperopt_tuner.HyperoptTuner>`
     - Heuristic
     - This simple annealing algorithm begins by sampling from the prior, but tends over time to sample from points closer and closer to the best ones observed. This algorithm is a simple variation on random search that leverages smoothness in the response surface. The annealing rate is not adaptive.

   * - :class:`Evolution <nni.algorithms.hpo.evolution_tuner.EvolutionTuner>`
     - Heuristic
     - Naive Evolution comes from Large-Scale Evolution of Image Classifiers. It randomly initializes a population based on the search space. For each generation, it chooses the better ones and does some mutation (e.g., changing a hyperparameter, adding/removing one layer) on them to get the next generation. Naive Evolution requires many trials to work, but it's very simple and easy to extend with new features. `Reference paper <https://arxiv.org/pdf/1703.01041.pdf>`__

   * - :class:`SMAC <nni.algorithms.hpo.smac_tuner.SMACTuner>`
     - Bayesian
     - SMAC is based on Sequential Model-Based Optimization (SMBO). It adapts the most prominent previously used model class (Gaussian stochastic process models) and introduces the model class of random forests to SMBO, in order to handle categorical parameters. The SMAC supported by NNI is a wrapper on the SMAC3 GitHub repo.
       Note that SMAC needs to be installed with the ``pip install nni[SMAC]`` command. `Reference paper <https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf>`__, `GitHub repo <https://github.com/automl/SMAC3>`__

   * - :class:`Batch <nni.algorithms.hpo.batch_tuner.BatchTuner>`
     - Basic
     - Batch tuner allows users to simply provide several configurations (i.e., choices of hyper-parameters) for their trial code. After finishing all the configurations, the experiment is done. Batch tuner only supports the ``choice`` type in the search space spec.

   * - :class:`Hyperband <nni.algorithms.hpo.hyperband_advisor.Hyperband>`
     - Heuristic
     - Hyperband tries to use limited resources to explore as many configurations as possible and returns the most promising ones as a final result. The basic idea is to generate many configurations and run them for a small number of trials. The half least-promising configurations are thrown out, and the remaining are further trained along with a selection of new configurations. The size of these populations is sensitive to resource constraints (e.g. allotted search time). `Reference paper <https://arxiv.org/pdf/1603.06560.pdf>`__

   * - :class:`Metis <nni.algorithms.hpo.metis_tuner.MetisTuner>`
     - Bayesian
     - Metis offers the following benefits when it comes to tuning parameters: while most tools only predict the optimal configuration, Metis gives you two outputs: (a) the current prediction of the optimal configuration, and (b) a suggestion for the next trial. No more guesswork. While most tools assume training datasets do not have noisy data, Metis actually tells you if you need to re-sample a particular hyper-parameter. `Reference paper <https://www.microsoft.com/en-us/research/publication/metis-robustly-tuning-tail-latencies-cloud-systems/>`__

   * - :class:`BOHB <nni.algorithms.hpo.bohb_advisor.BOHB>`
     - Bayesian
     - BOHB is a follow-up work to Hyperband. It targets the weakness of Hyperband that new configurations are generated randomly without leveraging finished trials. For the name BOHB, HB means Hyperband, BO means Bayesian Optimization. BOHB leverages finished trials by building multiple TPE models; a proportion of new configurations are generated through these models. `Reference paper <https://arxiv.org/abs/1807.01774>`__

   * - :class:`GP <nni.algorithms.hpo.gp_tuner.GPTuner>`
     - Bayesian
     - Gaussian Process Tuner is a sequential model-based optimization (SMBO) approach with Gaussian Process as the surrogate. `Reference paper <https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf>`__, `GitHub repo <https://github.com/fmfn/BayesianOptimization>`__

   * - :class:`PBT <nni.algorithms.hpo.pbt_tuner.PBTTuner>`
     - Heuristic
     - PBT Tuner is a simple asynchronous optimization algorithm which effectively utilizes a fixed computational budget to jointly optimize a population of models and their hyperparameters to maximize performance. `Reference paper <https://arxiv.org/abs/1711.09846v1>`__

   * - :class:`DNGO <nni.algorithms.hpo.dngo_tuner.DNGOTuner>`
     - Bayesian
     - Uses neural networks as an alternative to GPs to model distributions over functions in Bayesian optimization.

Comparison
......
@@ -13,7 +13,7 @@ Neural Network Intelligence

   Getting Started <quickstart>
   Installation <installation>
   Tutorials <examples>
   Hyperparameter Optimization <hpo/index>
   Neural Architecture Search <nas/index>
   Model Compression <compression/index>
   Feature Engineering <feature_engineering/index>
......
@@ -5,22 +5,36 @@ NNI requires Python >= 3.7.

It is tested and supported on Ubuntu >= 18.04,
Windows 10 >= 21H2, and macOS >= 11.
There are 3 ways to install NNI:
* :ref:`Using pip <installation-pip>`
* :ref:`Build source code <installation-source>`
* :ref:`Using Docker <installation-docker>`
.. _installation-pip:
Using pip
---------

NNI provides official packages for x86-64 CPUs. They can be installed with pip:

.. code-block:: text

    pip install nni

Or to upgrade to the latest version:

.. code-block:: text

    pip install --upgrade nni

You can check the installation with:

.. code-block:: text

    nnictl --version

On Linux systems without Conda, you may encounter a ``bash: nnictl: command not found`` error.
In this case you need to add the pip script directory to ``PATH``:

.. code-block:: bash

    echo 'export PATH=${PATH}:${HOME}/.local/bin' >> ~/.bashrc
    source ~/.bashrc
.. _installation-source:
Installing from Source Code
---------------------------

@@ -38,12 +54,14 @@ It requires to install from source code.

See :doc:`/notes/build_from_source`.
.. _installation-docker:
Using Docker
------------

NNI provides an official Docker image on `Docker Hub <https://hub.docker.com/r/msranni/nni>`__.

.. code-block:: text

    docker pull msranni/nni
@@ -55,21 +73,21 @@ Use ``nni[<algorithm-name>]`` to install their dependencies.

For example, to install dependencies of :class:`DNGO tuner <nni.algorithms.hpo.dngo_tuner.DNGOTuner>`:

.. code-block:: text

    pip install nni[DNGO]

This command will not reinstall NNI itself, even if it was installed in development mode.

Alternatively, you may install all extra dependencies at once:

.. code-block:: text

    pip install nni[all]

**NOTE**: SMAC tuner depends on swig3, which requires a manual downgrade on Ubuntu:

.. code-block:: bash

    sudo apt install swig3.0
    sudo rm /usr/bin/swig
......
@@ -5,9 +5,9 @@ This article describes how to build and install NNI from `source code`_.

We recommend using the latest setuptools:

.. code-block:: text

    pip install --upgrade setuptools pip wheel

.. _source code: https://github.com/microsoft/nni

@@ -16,7 +16,7 @@ Development Build

If you want to build NNI for your own use, we recommend using `development mode`_.

.. code-block:: text

    python setup.py develop

@@ -32,15 +32,15 @@ NNI does not support setuptools' "install" command.

A release package requires jupyterlab to build the extension:

.. code-block:: text

    pip install jupyterlab

And you need to set the ``NNI_RELEASE`` environment variable, and compile TypeScript modules before "bdist_wheel".

In bash:

.. code-block:: bash

    export NNI_RELEASE=2.7
    python setup.py build_ts

@@ -48,7 +48,7 @@ In bash:

In PowerShell:

.. code-block:: powershell

    $env:NNI_RELEASE=2.7
    python setup.py build_ts

@@ -69,9 +69,9 @@ If successful, you will find the wheel in ``dist`` directory.

Build Docker Image
------------------

You can build a Docker image with the :githublink:`Dockerfile <Dockerfile>`:

.. code-block:: bash

    export NNI_RELEASE=2.7
    python setup.py build_ts

@@ -88,7 +88,7 @@ Clean

If the build fails, please clean up and try again:

.. code:: text

    python setup.py clean

@@ -99,6 +99,6 @@ This is useful when you have uninstalled NNI from development mode and want to i

It will not work if you have never built TypeScript modules before.

.. code:: text

    python setup.py develop --skip-ts
@@ -17,7 +17,7 @@ Quickstart

   :header: HPO Quickstart with PyTorch
   :description: Use HPO to tune a PyTorch FashionMNIST model
   :link: tutorials/hpo_quickstart_pytorch/cp_global_quickstart_hpo.html
   :image: ../img/thumbnails/hpo-pytorch.svg
   :background: purple

.. cardlinkitem::
......
@@ -242,9 +242,9 @@ AlgorithmConfig

For customized algorithms, there are two ways to describe them:

1. :doc:`Register the algorithm </hpo/custom_algorithm_installation>` to use it like built-in. (preferred)
2. Specify code directory and class name directly.

.. list-table::
   :widths: 10 10 80

@@ -616,9 +616,9 @@ Currently only support `LocalConfig`_, `RemoteConfig`_, `OpenpaiConfig`_ and `Am

SharedStorageConfig
^^^^^^^^^^^^^^^^^^^

Detailed usage can be found :doc:`here </experiment/shared_storage>`.

NfsConfig
---------

.. list-table::

@@ -657,7 +657,7 @@ nfsConfig

     - ``str``
     - Exported directory of NFS server, detailed `here <https://www.ibm.com/docs/en/aix/7.2?topic=system-nfs-exporting-mounting>`_.

AzureBlobConfig
---------------

.. list-table::
......
@@ -14,43 +14,73 @@ Trial APIs

Tuners
------

Batch Tuner
^^^^^^^^^^^

.. autoclass:: nni.algorithms.hpo.batch_tuner.BatchTuner
    :members:

BOHB Tuner
^^^^^^^^^^

.. autoclass:: nni.algorithms.hpo.bohb_advisor.BOHB
    :members:

DNGO Tuner
^^^^^^^^^^

.. autoclass:: nni.algorithms.hpo.dngo_tuner.DNGOTuner
    :members:

Evolution Tuner
^^^^^^^^^^^^^^^

.. autoclass:: nni.algorithms.hpo.evolution_tuner.EvolutionTuner
    :members:

GP Tuner
^^^^^^^^

.. autoclass:: nni.algorithms.hpo.gp_tuner.GPTuner
    :members:

Grid Search Tuner
^^^^^^^^^^^^^^^^^

.. autoclass:: nni.algorithms.hpo.gridsearch_tuner.GridSearchTuner
    :members:

Hyperband Tuner
^^^^^^^^^^^^^^^

.. autoclass:: nni.algorithms.hpo.hyperband_advisor.Hyperband
    :members:

Hyperopt Tuner
^^^^^^^^^^^^^^

.. autoclass:: nni.algorithms.hpo.hyperopt_tuner.HyperoptTuner
    :members:

Metis Tuner
^^^^^^^^^^^

.. autoclass:: nni.algorithms.hpo.metis_tuner.MetisTuner
    :members:

PBT Tuner
^^^^^^^^^

.. autoclass:: nni.algorithms.hpo.pbt_tuner.PBTTuner
    :members:

PPO Tuner
^^^^^^^^^

.. autoclass:: nni.algorithms.hpo.ppo_tuner.PPOTuner
    :members:

Random Tuner
^^^^^^^^^^^^

.. autoclass:: nni.algorithms.hpo.random_tuner.RandomTuner
    :members:

SMAC Tuner
^^^^^^^^^^

.. autoclass:: nni.algorithms.hpo.smac_tuner.SMACTuner
    :members:

TPE Tuner
^^^^^^^^^

.. autoclass:: nni.algorithms.hpo.tpe_tuner.TpeTuner
    :members:

.. autoclass:: nni.algorithms.hpo.tpe_tuner.TpeArguments

Assessors
---------

Curve Fitting Assessor
^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: nni.algorithms.hpo.curvefitting_assessor.CurvefittingAssessor
    :members:

Median Stop Assessor
^^^^^^^^^^^^^^^^^^^^

.. autoclass:: nni.algorithms.hpo.medianstop_assessor.MedianstopAssessor
    :members:

Customization
-------------
......