Unverified Commit a911b856 authored by Yuge Zhang's avatar Yuge Zhang Committed by GitHub

Resolve conflicts for #4760 (#4762)

After the script finishes, the final scores of each tuner are summarized in the file ``results[time]/reports/performances.txt``.
Since the file is large, we only show the following screenshot and summarize other important statistics instead.
.. image:: ../../img/hpo_benchmark/performances.png
:target: ../../img/hpo_benchmark/performances.png
:alt:
When the results are parsed, the tuners are also ranked based on their final performance. The following three tables show
Besides these reports, our script also generates two graphs for each fold of each task: one graph presents the best score received by each tuner up to trial x, and the other shows the score each tuner receives in trial x. These two graphs give some information about how the tuners "converge" to their final solutions. We found that for "nnismall", tuners on the random forest model with the search space defined in ``/examples/trials/benchmarking/automlbenchmark/nni/extensions/NNI/architectures/run_random_forest.py`` generally converge to the final solution after 40 to 60 trials. As there are too many graphs to include in a single report (96 in total), we only present 10 graphs here.
.. image:: ../../img/hpo_benchmark/car_fold1_1.jpg
:target: ../../img/hpo_benchmark/car_fold1_1.jpg
:alt:
.. image:: ../../img/hpo_benchmark/car_fold1_2.jpg
:target: ../../img/hpo_benchmark/car_fold1_2.jpg
:alt:
The previous two graphs are generated for fold 1 of the task "car". In the first graph, we observe that most tuners find a relatively good solution within 40 trials. In this experiment, among all tuners, the DNGOTuner converges fastest to the best solution (within 10 trials); its best score improved three times over the entire experiment. In the second graph, we observe that most tuners have scores fluctuating between 0.8 and 1 throughout the experiment. However, the Anneal tuner (green line) appears more unstable (with more fluctuations), while the GPTuner shows a more stable pattern. This may be interpreted as the Anneal tuner exploring more aggressively than the GPTuner, so its scores vary more across trials. Regardless, although this pattern can to some extent hint at a tuner's position on the explore-exploit tradeoff, it is not a comprehensive evaluation of a tuner's effectiveness.
.. image:: ../../img/hpo_benchmark/christine_fold0_1.jpg
:target: ../../img/hpo_benchmark/christine_fold0_1.jpg
:alt:
.. image:: ../../img/hpo_benchmark/christine_fold0_2.jpg
:target: ../../img/hpo_benchmark/christine_fold0_2.jpg
:alt:
.. image:: ../../img/hpo_benchmark/cnae-9_fold0_1.jpg
:target: ../../img/hpo_benchmark/cnae-9_fold0_1.jpg
:alt:
.. image:: ../../img/hpo_benchmark/cnae-9_fold0_2.jpg
:target: ../../img/hpo_benchmark/cnae-9_fold0_2.jpg
:alt:
.. image:: ../../img/hpo_benchmark/credit-g_fold1_1.jpg
:target: ../../img/hpo_benchmark/credit-g_fold1_1.jpg
:alt:
.. image:: ../../img/hpo_benchmark/credit-g_fold1_2.jpg
:target: ../../img/hpo_benchmark/credit-g_fold1_2.jpg
:alt:
.. image:: ../../img/hpo_benchmark/titanic_2_fold1_1.jpg
:target: ../../img/hpo_benchmark/titanic_2_fold1_1.jpg
:alt:
.. image:: ../../img/hpo_benchmark/titanic_2_fold1_2.jpg
:target: ../../img/hpo_benchmark/titanic_2_fold1_2.jpg
:alt:
:orphan:
NNI Annotation
==============
**Arguments**
* **sampling_algo**\ : Sampling algorithm that specifies a search space. User should replace it with a built-in NNI sampling function whose name consists of an ``nni.`` identification and a search space type specified in :doc:`SearchSpaceSpec <search_space>` such as ``choice`` or ``uniform``.
* **name**\ : The name of the variable that the selected value will be assigned to. Note that this argument should be the same as the left value of the following assignment statement.
There are 10 types for expressing your search space, as follows:
``'''@nni.report_intermediate_result(metrics)'''``
``@nni.report_intermediate_result`` is used to report intermediate result, whose usage is the same as :func:`nni.report_intermediate_result`.
4. Annotate final result
^^^^^^^^^^^^^^^^^^^^^^^^
``'''@nni.report_final_result(metrics)'''``
``@nni.report_final_result`` is used to report the final result of the current trial, whose usage is the same as :func:`nni.report_final_result`.
Hyperparameter Optimization Overview
====================================
Auto hyperparameter optimization (HPO), or auto tuning, is one of the key features of NNI.
Introduction to HPO
-------------------
In machine learning, a hyperparameter is a parameter whose value is used to control the learning process,
and HPO is the problem of choosing a set of optimal hyperparameters for a learning algorithm.
(From Wikipedia: `Hyperparameter (machine learning) <https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)>`__,
`Hyperparameter optimization <https://en.wikipedia.org/wiki/Hyperparameter_optimization>`__)

The following code snippet demonstrates a naive HPO process:
.. code-block:: python
best_hyperparameters = None
best_accuracy = 0
for learning_rate in [0.1, 0.01, 0.001, 0.0001]:
for momentum in [i / 10 for i in range(10)]:
for activation_type in ['relu', 'tanh', 'sigmoid']:
model = build_model(activation_type)
train_model(model, learning_rate, momentum)
accuracy = evaluate_model(model)
if accuracy > best_accuracy:
best_accuracy = accuracy
best_hyperparameters = (learning_rate, momentum, activation_type)
print('Best hyperparameters:', best_hyperparameters)
As you may have noticed, the example trains 4×10×3 = 120 models in total.
Since this consumes so many computing resources, you may want to:
1. :ref:`Find the best hyperparameter set with less iterations. <hpo-overview-tuners>`
2. :ref:`Train the models on distributed platforms. <hpo-overview-platforms>`
3. :ref:`Have a portal to monitor and control the process. <hpo-overview-portal>`
NNI will do them for you.
Key Features of NNI HPO
-----------------------
.. _hpo-overview-tuners:
Tuning Algorithms
^^^^^^^^^^^^^^^^^
NNI provides *tuners* to speed up the process of finding the best hyperparameter set.
A tuner, or a tuning algorithm, decides the order in which hyperparameter sets are evaluated.
Based on the results of historical hyperparameter sets, an efficient tuner can predict where the best hyperparameters are located,
and find them in far fewer attempts.

The naive example above evaluates all possible hyperparameter sets in a fixed order, ignoring the historical results.
This is the brute-force tuning algorithm called *grid search*.
NNI has out-of-the-box support for a variety of popular tuners.
It includes naive algorithms like random search and grid search, Bayesian algorithms like TPE and SMAC,
RL-based algorithms like PPO, and many more.
Main article: :doc:`tuners`
.. _hpo-overview-platforms:
Training Platforms
^^^^^^^^^^^^^^^^^^
If you are not interested in distributed platforms, you can simply run NNI HPO on your current computer,
just like any ordinary Python library.

When you want to leverage more computing resources, NNI provides built-in integration with training platforms,
from simple on-premise servers to scalable commercial clouds.

With NNI you can write one piece of model code, and concurrently evaluate hyperparameter sets on your local machine, SSH servers,
Kubernetes-based clusters, the AzureML service, and many more.
Main article: :doc:`/experiment/training_service/overview`
.. _hpo-overview-portal:
Web Portal
^^^^^^^^^^
NNI provides a web portal to monitor training progress, to visualize hyperparameter performance,
to manually customize hyperparameters, and to manage multiple HPO experiments.
Main article: :doc:`/experiment/web_portal/web_portal`
.. image:: ../../static/img/webui.gif
:width: 100%
Tutorials
---------
To start using NNI HPO, choose the quickstart tutorial of your favorite framework:
* :doc:`PyTorch tutorial </tutorials/hpo_quickstart_pytorch/main>`
* :doc:`TensorFlow tutorial </tutorials/hpo_quickstart_tensorflow/main>`
Extra Features
--------------
After you are familiar with basic usage, you can explore more HPO features:
* :doc:`Use command line tool to create and manage experiments (nnictl) </reference/nnictl>`
* :doc:`nnictl example </tutorials/hpo_nnictl/nnictl>`
* :doc:`Early stop non-optimal models (assessor) <assessors>`
* :doc:`TensorBoard integration </experiment/web_portal/tensorboard>`
* :doc:`Implement your own algorithm <custom_algorithm>`
* :doc:`Benchmark tuners <hpo_benchmark>`
.. c74f6d072f5f8fa93eadd214bba992b4

Hyperparameter Optimization Overview
====================================

Auto hyperparameter optimization (HPO) is one of the key features of NNI.

Introduction to HPO
-------------------

In machine learning, the parameters that control the learning process are called "hyperparameters",
and the problem of choosing an optimal set of hyperparameters for a learning algorithm is called "hyperparameter optimization".

The following code snippet demonstrates a naive HPO process:

.. code-block:: python

    best_hyperparameters = None
    best_accuracy = 0

    for learning_rate in [0.1, 0.01, 0.001, 0.0001]:
        for momentum in [i / 10 for i in range(10)]:
            for activation_type in ['relu', 'tanh', 'sigmoid']:
                model = build_model(activation_type)
                train_model(model, learning_rate, momentum)
                accuracy = evaluate_model(model)

                if accuracy > best_accuracy:
                    best_accuracy = accuracy
                    best_hyperparameters = (learning_rate, momentum, activation_type)

    print('Best hyperparameters:', best_hyperparameters)

As you can see, this HPO code trains 4×10×3 = 120 models in total and consumes a large amount of computing resources, so you may want to:

1. :ref:`Find the best hyperparameter set with fewer attempts. <zh-hpo-overview-tuners>`
2. :ref:`Train the models on distributed platforms. <zh-hpo-overview-platforms>`
3. :ref:`Use a web portal to monitor the tuning process. <zh-hpo-overview-portal>`

NNI can meet these needs.

Key Features of NNI HPO
-----------------------

.. _zh-hpo-overview-tuners:

Tuning Algorithms
^^^^^^^^^^^^^^^^^

NNI uses tuning algorithms, called "tuners", to find the best hyperparameter set faster.
A tuning algorithm decides which hyperparameter sets to run and evaluate, and in what order to evaluate them.
An efficient algorithm can use the results of already-evaluated hyperparameter sets to predict the values of the optimal hyperparameters, reducing the number of evaluations needed to find them.

The example at the beginning evaluates all possible hyperparameter sets in a fixed order, ignoring the evaluation results; this naive approach is called "grid search".

NNI has built-in support for many popular tuning algorithms, including naive algorithms such as random search and grid search, Bayesian optimization algorithms such as TPE and SMAC, reinforcement learning algorithms such as PPO, and more.

Main article: :doc:`tuners`

.. _zh-hpo-overview-platforms:

Training Platforms
^^^^^^^^^^^^^^^^^^

If you do not plan to use a distributed training platform, you can run NNI HPO directly on your own computer, just like any ordinary Python library.
If you want to leverage more computing resources to speed up tuning, you can also use NNI's built-in training platform integrations, which cover everything from simple SSH servers to scalable Kubernetes clusters.

Main article: :doc:`/experiment/training_service/overview`

.. _zh-hpo-overview-portal:

Web Portal
^^^^^^^^^^

You can use NNI's web portal to monitor HPO experiments. It supports showing experiment progress in real time, visualizing hyperparameter performance, manually customizing hyperparameter values, managing multiple experiments at the same time, and more.

Main article: :doc:`/experiment/web_portal/web_portal`

.. image:: ../../static/img/webui.gif
    :width: 100%

Tutorials
---------

The following tutorials help you get started with NNI HPO; choose the machine learning framework you are most familiar with:

* :doc:`HPO tutorial with PyTorch </tutorials/hpo_quickstart_pytorch/main>`
* :doc:`HPO tutorial with TensorFlow </tutorials/hpo_quickstart_tensorflow/main>`

Extra Features
--------------

After you are familiar with the basic usage of NNI HPO, you can explore more features:

* :doc:`Use command line tool to create and manage experiments (nnictl) </reference/nnictl>`
* :doc:`nnictl example </tutorials/hpo_nnictl/nnictl>`
* :doc:`Early stop non-optimal models (assessor) <assessors>`
* :doc:`TensorBoard integration </experiment/web_portal/tensorboard>`
* :doc:`Implement your own algorithm <custom_algorithm>`
* :doc:`Benchmark tuners <hpo_benchmark>`
Quickstart
==========
.. toctree::
PyTorch </tutorials/hpo_quickstart_pytorch/main>
TensorFlow </tutorials/hpo_quickstart_tensorflow/main>
Search Space
============
Overview
--------
In NNI, the tuner samples hyperparameters according to the search space.

To define a search space, users should specify the name of each variable, the type of its sampling strategy, and the strategy's parameters.

* An example of a search space definition in JSON format is as follows:
.. code-block:: json
{
"dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
"conv_size": {"_type": "choice", "_value": [2, 3, 5, 7]},
"hidden_size": {"_type": "choice", "_value": [124, 512, 1024]},
"batch_size": {"_type": "choice", "_value": [50, 250, 500]},
"learning_rate": {"_type": "uniform", "_value": [0.0001, 0.1]}
}
Take the first line as an example.
``dropout_rate`` is defined as a variable whose prior distribution is a uniform distribution with a range from ``0.1`` to ``0.5``.
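To make the specification concrete, the following sketch draws one hyperparameter set from two entries of the search space above. This is hypothetical, simplified code for illustration only, not NNI's actual sampler.

```python
import random

# Two entries from the search space definition above.
search_space = {
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
    "conv_size": {"_type": "choice", "_value": [2, 3, 5, 7]},
}

def sample(space):
    """Draw one hyperparameter set from a search space definition (illustrative)."""
    params = {}
    for name, spec in space.items():
        if spec["_type"] == "uniform":
            low, high = spec["_value"]
            params[name] = random.uniform(low, high)   # uniform in [low, high]
        elif spec["_type"] == "choice":
            params[name] = random.choice(spec["_value"])  # one of the options
    return params

params = sample(search_space)
print(params)
```

In a real experiment the tuner performs this sampling for you, and the trial code simply receives the resulting dict from ``nni.get_next_parameter()``.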
.. attention::
The available sampling strategies within a search space depend on the tuner you want to use.
We list the supported types for each built-in tuner :ref:`below <hpo-space-support>`.
For a customized tuner, you don't have to follow our convention and you will have the flexibility to define any type you want.
Types
-----
All types of sampling strategies and their parameters are listed here:
choice
^^^^^^
.. code-block:: python
{"_type": "choice", "_value": options}
* The variable's value is one of the options. Here ``options`` should be a list of **numbers** or a list of **strings**. Using arbitrary objects as members of this list (like sublists, a mixture of numbers and strings, or null values) should work in most cases, but may trigger undefined behaviors.
* ``options`` can also be a nested sub-search-space. This sub-search-space takes effect only when the corresponding element is chosen, so the variables in it can be seen as conditional variables. Here is a simple :githublink:`example of nested search space definition <examples/trials/mnist-nested-search-space/search_space.json>`. If an element in the options list is a dict, it is a sub-search-space, and for our built-in tuners you have to add a ``_name`` key in this dict, which helps you identify which element is chosen. Accordingly, here is a :githublink:`sample <examples/trials/mnist-nested-search-space/sample.json>` of what users can receive from NNI with a nested search space definition. See the table below for the tuners that support nested search spaces.
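For orientation, a nested ``choice`` might look like the following illustrative fragment (modeled on the linked example, not copied from it): each dict option carries a ``_name`` plus its own conditional variables, and ``kernel_size`` only takes effect when the ``conv`` option is chosen.

```json
{
    "layer0": {
        "_type": "choice",
        "_value": [
            {
                "_name": "conv",
                "kernel_size": {"_type": "choice", "_value": [1, 2, 3, 5]}
            },
            {
                "_name": "empty"
            }
        ]
    }
}
```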
randint
^^^^^^^
.. code-block:: python
{"_type": "randint", "_value": [lower, upper]}
* Chooses a random integer between ``lower`` (inclusive) and ``upper`` (exclusive).
* Note: Different tuners may interpret ``randint`` differently. Some (e.g., TPE, GridSearch) treat integers from lower
to upper as unordered ones, while others respect the ordering (e.g., SMAC). If you want all the tuners to respect
the ordering, please use ``quniform`` with ``q=1``.
uniform
^^^^^^^
.. code-block:: python
{"_type": "uniform", "_value": [low, high]}
* The variable value is uniformly sampled between low and high.
* When optimizing, this variable is constrained to a two-sided interval.
quniform
^^^^^^^^
.. code-block:: python
{"_type": "quniform", "_value": [low, high, q]}
* The variable value is determined using ``clip(round(uniform(low, high) / q) * q, low, high)``\ , where the clip operation is used to constrain the generated value within the bounds. For example, for ``_value`` specified as [0, 10, 2.5], possible values are [0, 2.5, 5.0, 7.5, 10.0]; For ``_value`` specified as [2, 10, 5], possible values are [2, 5, 10].
* Suitable for a discrete value with respect to which the objective is still somewhat "smooth", but which should be bounded both above and below. If you want to uniformly choose an integer from a range [low, high], you can write ``_value`` like this: ``[low, high, 1]``.
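As a sanity check, the formula above can be reproduced directly. This is a sketch of the stated formula, not NNI's implementation:

```python
import random

def quniform(low, high, q):
    # clip(round(uniform(low, high) / q) * q, low, high)
    x = round(random.uniform(low, high) / q) * q
    return min(max(x, low), high)  # clip keeps the value inside the bounds

# With _value = [0, 10, 2.5], every draw lands on 0, 2.5, 5.0, 7.5, or 10.0.
draws = {quniform(0, 10, 2.5) for _ in range(1000)}
print(sorted(draws))
```

With ``_value = [2, 10, 5]`` the same function yields only 2, 5, or 10, because the clip step pulls the rounded value 0 up to the lower bound 2.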
loguniform
^^^^^^^^^^
.. code-block:: python
{"_type": "loguniform", "_value": [low, high]}
* The variable value is drawn from a range [low, high] according to a loguniform distribution like exp(uniform(log(low), log(high))), so that the logarithm of the return value is uniformly distributed.
* When optimizing, this variable is constrained to be positive.
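The formula can be sketched in a few lines (illustrative only, not NNI's implementation): sample uniformly in log space, then exponentiate, so each order of magnitude in ``[low, high]`` is equally likely.

```python
import math
import random

def loguniform(low, high):
    # exp(uniform(log(low), log(high))): uniform in log space, then exponentiate.
    return math.exp(random.uniform(math.log(low), math.log(high)))

# Typical use: searching a learning rate across several orders of magnitude.
samples = [loguniform(1e-4, 1e-1) for _ in range(1000)]
print(min(samples), max(samples))
```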
qloguniform
^^^^^^^^^^^
.. code-block:: python
{"_type": "qloguniform", "_value": [low, high, q]}
* The variable value is determined using ``clip(round(loguniform(low, high) / q) * q, low, high)``\ , where the clip operation is used to constrain the generated value within the bounds.
* Suitable for a discrete variable with respect to which the objective is "smooth" and gets smoother with the size of the value, but which should be bounded both above and below.
normal
^^^^^^
.. code-block:: python
{"_type": "normal", "_value": [mu, sigma]}
* The variable value is a real value that's normally-distributed with mean mu and standard deviation sigma. When optimizing, this is an unconstrained variable.
qnormal
^^^^^^^
.. code-block:: python
{"_type": "qnormal", "_value": [mu, sigma, q]}
* The variable value is determined using ``round(normal(mu, sigma) / q) * q``
* Suitable for a discrete variable that probably takes a value around mu, but is fundamentally unbounded.
lognormal
^^^^^^^^^
.. code-block:: python
{"_type": "lognormal", "_value": [mu, sigma]}
* The variable value is drawn according to ``exp(normal(mu, sigma))`` so that the logarithm of the return value is normally distributed. When optimizing, this variable is constrained to be positive.
qlognormal
^^^^^^^^^^
.. code-block:: python
{"_type": "qlognormal", "_value": [mu, sigma, q]}
* The variable value is determined using ``round(exp(normal(mu, sigma)) / q) * q``
* Suitable for a discrete variable with respect to which the objective is smooth and gets smoother with the size of the variable, which is bounded from one side.
.. _hpo-space-support:
Search Space Types Supported by Each Tuner
------------------------------------------
.. list-table::
:header-rows: 1
:widths: auto
* -
- choice
- choice(nested)
- randint
- uniform
- quniform
- loguniform
- qloguniform
- normal
- qnormal
- lognormal
- qlognormal
* - :class:`TPE <nni.algorithms.hpo.tpe_tuner.TpeTuner>`
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
* - :class:`Random <nni.algorithms.hpo.random_tuner.RandomTuner>`
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
* - :class:`Grid Search <nni.algorithms.hpo.gridsearch_tuner.GridSearchTuner>`
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
* - :class:`Anneal <nni.algorithms.hpo.hyperopt_tuner.HyperoptTuner>`
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
* - :class:`Evolution <nni.algorithms.hpo.evolution_tuner.EvolutionTuner>`
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
* - :class:`SMAC <nni.algorithms.hpo.smac_tuner.SMACTuner>`
- ✓
-
- ✓
- ✓
- ✓
- ✓
-
-
-
-
-
* - :class:`Batch <nni.algorithms.hpo.batch_tuner.BatchTuner>`
- ✓
-
-
-
-
-
-
-
-
-
-
* - :class:`Hyperband <nni.algorithms.hpo.hyperband_advisor.Hyperband>`
- ✓
-
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
* - :class:`Metis <nni.algorithms.hpo.metis_tuner.MetisTuner>`
- ✓
-
- ✓
- ✓
- ✓
-
-
-
-
-
-
* - :class:`BOHB <nni.algorithms.hpo.bohb_advisor.BOHB>`
- ✓
-
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
* - :class:`GP <nni.algorithms.hpo.gp_tuner.GPTuner>`
- ✓
-
- ✓
- ✓
- ✓
- ✓
- ✓
-
-
-
-
* - :class:`PBT <nni.algorithms.hpo.pbt_tuner.PBTTuner>`
- ✓
-
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
- ✓
* - :class:`DNGO <nni.algorithms.hpo.dngo_tuner.DNGOTuner>`
- ✓
-
- ✓
- ✓
- ✓
- ✓
- ✓
-
-
-
-
Known Limitations:

* GP Tuner, Metis Tuner, and DNGO Tuner support only **numerical values** in the search space
  (``choice`` values can be non-numerical, e.g. strings, with other tuners).
  Both GP Tuner and Metis Tuner use a Gaussian Process Regressor (GPR).
  GPR makes predictions based on a kernel function and the "distance" between different points,
  and it is hard to define a true distance between non-numerical values.
* Note that only the TPE, Random, Grid Search, Anneal, and Evolution tuners support nested search spaces.
Hyperparameter Optimization
===========================
.. toctree::
:hidden:
Overview <overview>
quickstart
Search Space <search_space>
Tuners <tuners>
Assessors <assessors>
advanced_usage
.. 21e9c3e0f6b182cf42a99a7f6c4ecf98

Hyperparameter Optimization
===========================

.. toctree::
    :hidden:

    Overview <overview>
    Tutorials <quickstart>
    Search Space <search_space>
    Tuners <tuners>
    Assessors <assessors>
    Advanced Usage <advanced_usage>
Tuner: Tuning Algorithms
========================
The tuner decides which hyperparameter sets will be evaluated. It is the most important part of NNI HPO.

A tuner works like the following pseudocode:
.. code-block:: python
space = get_search_space()
history = []
while not experiment_end:
hp = suggest_hyperparameter_set(space, history)
result = run_trial(hp)
history.append((hp, result))
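To make this loop concrete, here is a toy, self-contained version using a random suggestion strategy and a fake trial. The helper names mirror the pseudocode but are hypothetical; a real tuner implements NNI's ``Tuner`` interface, and a real trial trains a model.

```python
import random

space = {"x": [1, 2, 3], "y": ["a", "b", "c"]}

def suggest_hyperparameter_set(space, history):
    # Toy strategy: ignore history and sample each variable uniformly at random.
    return {name: random.choice(options) for name, options in space.items()}

def run_trial(hp):
    # Stand-in for training and evaluating a model with these hyperparameters.
    return hp["x"] * 0.1 + random.random() * 0.01

history = []
for _ in range(10):  # "experiment end" after a fixed trial budget
    hp = suggest_hyperparameter_set(space, history)
    result = run_trial(hp)
    history.append((hp, result))

best_hp, best_result = max(history, key=lambda pair: pair[1])
print(best_hp)
```

A smarter tuner differs only in ``suggest_hyperparameter_set``: instead of ignoring ``history``, it uses past results to pick the next candidate.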
NNI has out-of-the-box support for many popular tuning algorithms.
They should be sufficient to cover most typical machine learning scenarios.
However, if you have a very specific demand, or if you have designed an algorithm yourself,
you can also implement your own tuner: :doc:`custom_algorithm`
Common Usage
------------
All built-in tuners have similar usage.
To use a built-in tuner, you need to specify its name and arguments in the experiment config,
and provide a standard :doc:`search_space`.
Some tuners, like SMAC and DNGO, have extra dependencies that need to be installed separately.
Please check each tuner's reference page for the arguments it supports and whether it needs extra dependencies.

As a general example, the random tuner can be configured as follows:
.. code-block:: python
config.search_space = {
'x': {'_type': 'uniform', '_value': [0, 1]},
'y': {'_type': 'choice', '_value': ['a', 'b', 'c']}
}
config.tuner.name = 'Random'
config.tuner.class_args = {'seed': 0}
Built-in Tuners
---------------
.. list-table::
:header-rows: 1
:widths: auto
* - Tuner
- Category
- Brief Introduction
* - :class:`TPE <nni.algorithms.hpo.tpe_tuner.TpeTuner>`
- Bayesian
- Tree-structured Parzen Estimator, a classic Bayesian optimization algorithm.
(`paper <https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf>`__)
TPE is a lightweight tuner that has no extra dependency and supports all search space types.
Good to start with.
The drawback is that TPE cannot discover relationships between different hyperparameters.
* - :class:`Random <nni.algorithms.hpo.random_tuner.RandomTuner>`
- Basic
- Naive random search, the baseline. It supports all search space types.
* - :class:`Grid Search <nni.algorithms.hpo.gridsearch_tuner.GridSearchTuner>`
- Basic
- Divides the search space into an evenly spaced grid and performs a brute-force traversal. Another baseline.
It supports all search space types.
Recommended when the search space is small, and when you want to find the strictly optimal hyperparameters.
* - :class:`Anneal <nni.algorithms.hpo.hyperopt_tuner.HyperoptTuner>`
- Heuristic
- This simple annealing algorithm begins by sampling from the prior, but tends over time to sample from points closer and closer to the best ones observed. This algorithm is a simple variation on the random search that leverages smoothness in the response surface. The annealing rate is not adaptive.
* - :class:`Evolution <nni.algorithms.hpo.evolution_tuner.EvolutionTuner>`
- Heuristic
- Naive Evolution comes from *Large-Scale Evolution of Image Classifiers*. It randomly initializes a population based on the search space. For each generation, it chooses the better candidates and performs some mutation (e.g., changing a hyperparameter, adding or removing a layer) on them to get the next generation. Naive Evolution requires many trials to work, but it is very simple and easy to extend with new features. `Reference paper <https://arxiv.org/pdf/1703.01041.pdf>`__
* - :class:`SMAC <nni.algorithms.hpo.smac_tuner.SMACTuner>`
- Bayesian
- SMAC is based on Sequential Model-Based Optimization (SMBO). It adapts the most prominent previously used model class (Gaussian stochastic process models) and introduces the model class of random forests to SMBO in order to handle categorical parameters. The SMAC supported by NNI is a wrapper around the SMAC3 GitHub repo.

  Note that SMAC needs to be installed with the ``pip install nni[SMAC]`` command. `Reference Paper <https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf>`__, `GitHub Repo <https://github.com/automl/SMAC3>`__
* - :class:`Batch <nni.algorithms.hpo.batch_tuner.BatchTuner>`
- Basic
- Batch tuner allows users to simply provide several configurations (i.e., choices of hyperparameters) for their trial code. After all the configurations finish, the experiment is done. Batch tuner only supports the ``choice`` type in the search space spec.
* - :class:`Hyperband <nni.algorithms.hpo.hyperband_advisor.Hyperband>`
- Heuristic
- Hyperband tries to use limited resources to explore as many configurations as possible and returns the most promising ones as the final result. The basic idea is to generate many configurations and run each for a small number of trials. The least-promising half of the configurations are thrown out, and the remaining ones are trained further along with a selection of new configurations. The size of these populations is sensitive to resource constraints (e.g. allotted search time). `Reference Paper <https://arxiv.org/pdf/1603.06560.pdf>`__
* - :class:`Metis <nni.algorithms.hpo.metis_tuner.MetisTuner>`
- Bayesian
- Metis offers the following benefits when it comes to tuning parameters: While most tools only predict the optimal configuration, Metis gives you two outputs: (a) current prediction of optimal configuration, and (b) suggestion for the next trial. No more guesswork. While most tools assume training datasets do not have noisy data, Metis actually tells you if you need to re-sample a particular hyper-parameter. `Reference Paper <https://www.microsoft.com/en-us/research/publication/metis-robustly-tuning-tail-latencies-cloud-systems/>`__
* - :class:`BOHB <nni.algorithms.hpo.bohb_advisor.BOHB>`
- Bayesian
- BOHB is a follow-up work to Hyperband. It targets the weakness of Hyperband that new configurations are generated randomly without leveraging finished trials. For the name BOHB, HB means Hyperband, BO means Bayesian Optimization. BOHB leverages finished trials by building multiple TPE models, a proportion of new configurations are generated through these models. `Reference Paper <https://arxiv.org/abs/1807.01774>`__
* - :class:`GP <nni.algorithms.hpo.gp_tuner.GPTuner>`
- Bayesian
- Gaussian Process Tuner is a sequential model-based optimization (SMBO) approach with Gaussian Process as the surrogate. `Reference Paper <https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf>`__, `Github Repo <https://github.com/fmfn/BayesianOptimization>`__
* - :class:`PBT <nni.algorithms.hpo.pbt_tuner.PBTTuner>`
- Heuristic
- PBT Tuner is a simple asynchronous optimization algorithm which effectively utilizes a fixed computational budget to jointly optimize a population of models and their hyperparameters to maximize performance. `Reference Paper <https://arxiv.org/abs/1711.09846v1>`__
* - :class:`DNGO <nni.algorithms.hpo.dngo_tuner.DNGOTuner>`
- Bayesian
- Uses neural networks as an alternative to GPs to model distributions over functions in Bayesian optimization.
Comparison
----------
These articles have compared built-in tuners' performance on some different tasks:
:doc:`hpo_benchmark_stats`
:doc:`/sharings/hpo_comparison`
Advanced Features
=================
.. toctree::
:maxdepth: 2
Write a New Tuner <Tuner/CustomizeTuner>
Write a New Assessor <Assessor/CustomizeAssessor>
Write a New Advisor <Tuner/CustomizeAdvisor>
Write a New Training Service <TrainingService/HowToImplementTrainingService>
Install Customized Algorithms as Builtin Tuners/Assessors/Advisors <Tutorial/InstallCustomizedAlgos>
.. 43bb394b1e25458a948c134058ec68ac

Advanced Features
=================

.. toctree::
    :maxdepth: 2

    Write a New Tuner <Tuner/CustomizeTuner>
    Write a New Assessor <Assessor/CustomizeAssessor>
    Write a New Advisor <Tuner/CustomizeAdvisor>
    Write a New Training Service <TrainingService/HowToImplementTrainingService>
    Install Customized Tuners/Assessors/Advisors <Tutorial/InstallCustomizedAlgos>
#############################
Auto (Hyper-parameter) Tuning
#############################
Auto tuning is one of the key features provided by NNI; a main application scenario is
hyper-parameter tuning, which applies to the trial code. We provide many popular
auto tuning algorithms (called Tuners) and some early stopping algorithms (called Assessors).
NNI supports running trials on various training platforms, for example, on a local machine,
on several servers in a distributed manner, or on platforms such as OpenPAI, Kubernetes, etc.

Other key features of NNI, such as model compression and feature engineering, can also be further
enhanced by auto tuning, as we'll describe when introducing those features.

NNI has high extensibility: advanced users can customize their own Tuner, Assessor, and Training Service
according to their needs.
.. toctree::
:maxdepth: 2
Write Trial <TrialExample/Trials>
Tuners <builtin_tuner>
Assessors <builtin_assessor>
Training Platform <training_services>
Examples <examples>
WebUI <Tutorial/WebUI>
How to Debug <Tutorial/HowToDebug>
Advanced <hpo_advanced>
HPO Benchmarks <hpo_benchmark>
.. 6ed30d3a87dbc4c1c4650cf56f074045

#############################
Auto (Hyper-parameter) Tuning
#############################

Auto tuning is one of NNI's key features. It works by repeatedly running the trial code,
feeding it a different hyperparameter combination each time, and tuning based on the trial results.
NNI provides many popular auto tuning algorithms (called Tuners) and some early stopping algorithms (called Assessors).
NNI supports running trials on a variety of training platforms, including the local machine,
remote servers, Azure Machine Learning, Kubernetes-based clusters (such as OpenPAI and Kubeflow), and more.

Other features, such as model compression and feature engineering, can also
use auto tuning; we will describe this when introducing those features.

NNI is highly extensible;
users can implement their own Tuner algorithms and training platforms according to their needs.

.. toctree::
    :maxdepth: 2

    Write Trial <./TrialExample/Trials>
    Tuners <builtin_tuner>
    Assessors <builtin_assessor>
    Training Platform <training_services>
    Examples <examples>
    WebUI <Tutorial/WebUI>
    How to Debug <Tutorial/HowToDebug>
    Advanced <hpo_advanced>
    Tuner Benchmarks <hpo_benchmark>
.. modified from index.html
.. replace \{\{ pathto\('(.*)'\) \}\} -> $1.html
NNI Documentation
=================
.. toctree::
:maxdepth: 2
:caption: Get Started
:hidden:
installation
quickstart
.. toctree::
:maxdepth: 2
:caption: User Guide
:hidden:
Tutorials <tutorials>
hpo/toctree
nas/toctree
Model Compression <compression/toctree>
feature_engineering/toctree
experiment/toctree
.. toctree::
:maxdepth: 2
:caption: Advanced Materials
:hidden:
.. toctree::
:maxdepth: 2
:caption: References
:hidden:
Python API <reference/python_api>
reference/experiment_config
reference/nnictl
.. toctree::
:maxdepth: 2
:caption: Misc
:hidden:
References <reference>
examples
sharings/community_sharings
notes/research_publications
notes/build_from_source
notes/contributing
release
**NNI (Neural Network Intelligence)** is a lightweight but powerful toolkit to help users **automate**:
* :doc:`Hyperparameter Optimization </hpo/overview>`
* :doc:`Neural Architecture Search </nas/overview>`
* :doc:`Model Compression </compression/overview>`
* :doc:`Feature Engineering </feature_engineering/overview>`
Get Started
-----------
To install the current release:
.. code-block:: bash
$ pip install nni
See the :doc:`installation guide </installation>` if you need additional help on installation.
Try your first NNI experiment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: shell
$ nnictl hello
.. note:: You need to have `PyTorch <https://pytorch.org/>`_ (as well as `torchvision <https://pytorch.org/vision/stable/index.html>`_) installed to run this experiment.
To start your journey now, please follow the :doc:`absolute quickstart of NNI <quickstart>`!
Why choose NNI?
---------------
NNI makes AutoML techniques plug-and-play
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. raw:: html
<div class="codesnippet-card-container">
.. codesnippetcard::
:icon: ../img/thumbnails/hpo-small.svg
:title: Hyperparameter Tuning
:link: tutorials/hpo_quickstart_pytorch/main
.. code-block::
params = nni.get_next_parameter()
class Net(nn.Module):
...
model = Net()
optimizer = optim.SGD(model.parameters(),
params['lr'],
params['momentum'])
for epoch in range(10):
train(...)
accuracy = test(model)
nni.report_final_result(accuracy)
.. codesnippetcard::
:icon: ../img/thumbnails/pruning-small.svg
:title: Model Pruning
:link: tutorials/pruning_quick_start_mnist
.. code-block::
# define a config_list
config = [{
'sparsity': 0.8,
'op_types': ['Conv2d']
}]
# generate masks for simulated pruning
wrapped_model, masks = \
L1NormPruner(model, config). \
compress()
# apply the masks for real speedup
ModelSpeedup(unwrapped_model, input, masks). \
speedup_model()
.. codesnippetcard::
:icon: ../img/thumbnails/quantization-small.svg
:title: Quantization
:link: tutorials/quantization_quick_start_mnist
.. code-block::
# define a config_list
config = [{
'quant_types': ['input', 'weight'],
'quant_bits': {'input': 8, 'weight': 8},
'op_types': ['Conv2d']
}]
# in case the quantizer needs extra training
quantizer = QAT_Quantizer(model, config)
quantizer.compress()
# Training...
# export calibration config and
# generate TensorRT engine for real speedup
calibration_config = quantizer.export_model(
model_path, calibration_path)
engine = ModelSpeedupTensorRT(
model, input_shape, config=calibration_config)
engine.compress()
.. codesnippetcard::
:icon: ../img/thumbnails/multi-trial-nas-small.svg
:title: Neural Architecture Search
:link: tutorials/hello_nas
.. code-block:: python
# define model space
class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv2 = nn.LayerChoice([
            nn.Conv2d(32, 64, 3, 1),
            DepthwiseSeparableConv(32, 64)
        ])
model_space = Model()
# search strategy + evaluator
strategy = RegularizedEvolution()
evaluator = FunctionalEvaluator(
train_eval_fn)
# run experiment
RetiariiExperiment(model_space,
evaluator, strategy).run()
.. codesnippetcard::
:icon: ../img/thumbnails/one-shot-nas-small.svg
:title: One-shot NAS
:link: nas/exploration_strategy
.. code-block::
# define model space
space = AnySearchSpace()
# get a darts trainer
trainer = DartsTrainer(space, loss, metrics)
trainer.fit()
# get final searched architecture
arch = trainer.export()
.. codesnippetcard::
:icon: ../img/thumbnails/feature-engineering-small.svg
:title: Feature Engineering
:link: feature_engineering/overview
.. code-block::
selector = GBDTSelector()
selector.fit(
X_train, y_train,
lgb_params=lgb_params,
eval_ratio=eval_ratio,
early_stopping_rounds=10,
importance_type='gain',
num_boost_round=1000)
# get selected features
features = selector.get_selected_features()
.. End of code snippet card
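To make the hyperparameter-tuning pattern from the cards above concrete, here is a minimal, self-contained sketch of the tuning loop. ``nni.get_next_parameter`` and ``nni.report_final_result`` are the real NNI trial APIs shown in the card; everything else here (the ``train_and_eval`` body, the toy score, and the random-search driver standing in for an NNI tuner) is a hypothetical stand-in so the sketch runs without an NNI experiment.

```python
import random

# Search space, in the same spirit as an NNI search_space.json:
# each entry maps a hyperparameter name to the choices a tuner samples from.
SEARCH_SPACE = {
    "lr": [0.1, 0.01, 0.001],
    "momentum": [0.5, 0.9],
}

def sample_params(space):
    """Stand-in for a tuner: plain random search over the space.
    In a real trial this is replaced by nni.get_next_parameter()."""
    return {name: random.choice(choices) for name, choices in space.items()}

def train_and_eval(params):
    """Hypothetical trial body; real code would train a model here and
    return its validation accuracy via nni.report_final_result(score)."""
    # Toy score that peaks at lr=0.01, momentum=0.9.
    return 1.0 - abs(params["lr"] - 0.01) - (0.9 - params["momentum"])

best_params, best_score = None, float("-inf")
for _ in range(8):  # each iteration plays the role of one NNI trial
    params = sample_params(SEARCH_SPACE)
    score = train_and_eval(params)
    if score > best_score:
        best_params, best_score = params, score

print(best_params)
```

In a real experiment the loop body is your standalone trial script, and NNI's tuner (TPE, random, evolution, ...) drives the sampling and collects the reported scores.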
.. raw:: html
<div class="rowHeight">
<div class="chinese"><a href="https://nni.readthedocs.io/zh/stable/">简体中文</a></div>
<b>NNI (Neural Network Intelligence)</b> is a lightweight but powerful toolkit to
help users <b>automate</b>
<a href="FeatureEngineering/Overview.html">Feature Engineering</a>,
<a href="NAS/Overview.html">Neural Architecture Search</a>,
<a href="Tuner/BuiltinTuner.html">Hyperparameter Tuning</a> and
<a href="Compression/Overview.html">Model Compression</a>.
</div>
<p class="gap rowHeight">
The tool manages automated machine learning (AutoML) experiments,
<b>dispatches and runs</b>
experiments' trial jobs generated by tuning algorithms to search for the best neural
architecture and/or hyper-parameters in
<b>different training environments</b> like
<a href="TrainingService/LocalMode.html">Local Machine</a>,
<a href="TrainingService/RemoteMachineMode.html">Remote Servers</a>,
<a href="TrainingService/PaiMode.html">OpenPAI</a>,
<a href="TrainingService/KubeflowMode.html">Kubeflow</a>,
<a href="TrainingService/FrameworkControllerMode.html">FrameworkController on K8S (AKS etc.)</a>,
<a href="TrainingService/DLTSMode.html">DLWorkspace (aka. DLTS)</a>,
<a href="TrainingService/AMLMode.html">AML (Azure Machine Learning)</a>,
<a href="TrainingService/AdaptDLMode.html">AdaptDL (aka. ADL)</a>, other cloud options and even <a href="TrainingService/HybridMode.html">Hybrid mode</a>.
</p>
<!-- Who should consider using NNI -->
<div>
<h2 class="title">Who should consider using NNI</h2>
<ul>
<li>Those who want to <b>try different AutoML algorithms</b> in their training code/model.</li>
<li>Those who want to run AutoML trial jobs <b>in different environments</b> to speed up search.</li>
<li class="rowHeight">Researchers and data scientists who want to easily <b>implement and experiment with new AutoML
algorithms</b>, be it a hyperparameter tuning algorithm,
a neural architecture search algorithm or a model compression algorithm.
</li>
<li>ML platform owners who want to <b>support AutoML in their platform</b>.</li>
</ul>
</div>
<!-- what's new -->
<div>
<div class="inline gap">
<h2>What's NEW! </h2>
<img width="48" src="_static/img/release_icon.png">
</div>
<hr class="whatNew"/>
<ul>
<li><b>New release:</b> <a href='https://github.com/microsoft/nni/releases/tag/v2.6'>v2.6 is available. <i>- released on Jan-18-2022</i></a></li>
<li><b>New demo available:</b> <a href="https://www.youtube.com/channel/UCKcafm6861B2mnYhPbZHavw">YouTube entry</a> | <a href="https://space.bilibili.com/1649051673">Bilibili entry</a> <i>- last updated on May-26-2021</i></li>
<li><b>New webinar:</b> <a href="https://note.microsoft.com/MSR-Webinar-Retiarii-Registration-On-Demand.html">
Introducing Retiarii: A deep learning exploratory-training framework on NNI
</a> <i>- scheduled on June-24-2021</i>
</li>
<li><b>New community channel:</b> <a href="https://github.com/microsoft/nni/discussions">Discussions</a></li>
<li>
<div><b>New emoticons release:</b> <a href="nnSpider.html">nnSpider</a></div>
<img class="gap" src="_static/img/home.svg"></img>
</li>
</ul>
</div>
<!-- NNI capabilities in a glance -->
<div class="gap">
<h2 class="title">NNI capabilities in a glance</h2>
<p class="rowHeight">
NNI provides a command line tool as well as a user-friendly WebUI to manage training experiments.
With the extensible API, you can customize your own AutoML algorithms and training services.
To make it easy for new users, NNI also provides a set of built-in state-of-the-art
AutoML algorithms and out-of-the-box support for popular training platforms.
</p>
<p class="rowHeight">
The following table summarizes the current NNI capabilities.
We are gradually adding new capabilities, and we'd love to have your contribution.
</p>
</div>
<p align="center">
<a href="#overview"><img src="_static/img/overview.svg" /></a>
</p>
<table class="main-table">
<tbody>
<tr align="center" valign="bottom" class="column">
<td></td>
<td class="framework">
<b>Frameworks & Libraries</b>
</td>
<td>
<b>Algorithms</b>
</td>
<td>
<b>Training Services</b>
</td>
</tr>
<tr>
<td class="verticalMiddle"><b>Built-in</b></td>
<td>
<ul class="firstUl">
<li><b>Supported Frameworks</b></li>
<ul class="circle">
<li>PyTorch</li>
<li>Keras</li>
<li>TensorFlow</li>
<li>MXNet</li>
<li>Caffe2</li>
<a href="SupportedFramework_Library.html">More...</a><br />
</ul>
</ul>
<ul class="firstUl">
<li><b>Supported Libraries</b></li>
<ul class="circle">
<li>Scikit-learn</li>
<li>XGBoost</li>
<li>LightGBM</li>
<a href="SupportedFramework_Library.html">More...</a><br />
</ul>
</ul>
<ul class="firstUl">
<li><b>Examples</b></li>
<ul class="circle">
<li><a href="https://github.com/microsoft/nni/tree/master/examples/trials/mnist-pytorch">MNIST-pytorch</a></li>
<li><a href="https://github.com/microsoft/nni/tree/master/examples/trials/mnist-tfv2">MNIST-tensorflow</a></li>
<li><a href="https://github.com/microsoft/nni/tree/master/examples/trials/mnist-keras">MNIST-keras</a></li>
<li><a href="TrialExample/GbdtExample.html">Auto-gbdt</a></li>
<li><a href="TrialExample/Cifar10Examples.html">Cifar10-pytorch</a></li>
<li><a href="TrialExample/SklearnExamples.html">Scikit-learn</a></li>
<li><a href="TrialExample/EfficientNet.html">EfficientNet</a></li>
<li><a href="TrialExample/OpEvoExamples.html">Kernel Tuning</a></li>
<a href="SupportedFramework_Library.html">More...</a><br />
</ul>
</ul>
</td>
<td align="left">
<a href="Tuner/BuiltinTuner.html">Hyperparameter Tuning</a>
<ul class="firstUl">
<div><b>Exhaustive search</b></div>
<ul class="circle">
<li><a href="Tuner/BuiltinTuner.html#Random">Random Search</a></li>
<li><a href="Tuner/BuiltinTuner.html#GridSearch">Grid Search</a></li>
<li><a href="Tuner/BuiltinTuner.html#Batch">Batch</a></li>
</ul>
<div><b>Heuristic search</b></div>
<ul class="circle">
<li><a href="Tuner/BuiltinTuner.html#Evolution">Naïve Evolution</a></li>
<li><a href="Tuner/BuiltinTuner.html#Anneal">Anneal</a></li>
<li><a href="Tuner/BuiltinTuner.html#Hyperband">Hyperband</a></li>
<li><a href="Tuner/BuiltinTuner.html#PBTTuner">PBT</a></li>
</ul>
<div><b>Bayesian optimization</b></div>
<ul class="circle">
<li><a href="Tuner/BuiltinTuner.html#BOHB">BOHB</a></li>
<li><a href="Tuner/BuiltinTuner.html#TPE">TPE</a></li>
<li><a href="Tuner/BuiltinTuner.html#SMAC">SMAC</a></li>
<li><a href="Tuner/BuiltinTuner.html#MetisTuner">Metis Tuner</a></li>
<li><a href="Tuner/BuiltinTuner.html#GPTuner">GP Tuner</a> </li>
<li><a href="Tuner/BuiltinTuner.html#DNGOTuner">DNGO Tuner</a></li>
</ul>
</ul>
<a href="NAS/Overview.html">Neural Architecture Search (Retiarii)</a>
<ul class="firstUl">
<ul class="circle">
<li><a href="NAS/ENAS.html">ENAS</a></li>
<li><a href="NAS/DARTS.html">DARTS</a></li>
<li><a href="NAS/SPOS.html">SPOS</a></li>
<li><a href="NAS/Proxylessnas.html">ProxylessNAS</a></li>
<li><a href="NAS/FBNet.html">FBNet</a></li>
<li><a href="NAS/ExplorationStrategies.html">Reinforcement Learning</a></li>
<li><a href="NAS/ExplorationStrategies.html">Regularized Evolution</a></li>
<li><a href="NAS/Overview.html">More...</a></li>
</ul>
</ul>
<a href="Compression/Overview.html">Model Compression</a>
<ul class="firstUl">
<div><b>Pruning</b></div>
<ul class="circle">
<li><a href="Compression/Pruner.html#agp-pruner">AGP Pruner</a></li>
<li><a href="Compression/Pruner.html#slim-pruner">Slim Pruner</a></li>
<li><a href="Compression/Pruner.html#fpgm-pruner">FPGM Pruner</a></li>
<li><a href="Compression/Pruner.html#netadapt-pruner">NetAdapt Pruner</a></li>
<li><a href="Compression/Pruner.html#simulatedannealing-pruner">SimulatedAnnealing Pruner</a></li>
<li><a href="Compression/Pruner.html#admm-pruner">ADMM Pruner</a></li>
<li><a href="Compression/Pruner.html#autocompress-pruner">AutoCompress Pruner</a></li>
<li><a href="Compression/Overview.html">More...</a></li>
</ul>
<div><b>Quantization</b></div>
<ul class="circle">
<li><a href="Compression/Quantizer.html#qat-quantize">QAT Quantizer</a></li>
<li><a href="Compression/Quantizer.html#dorefa-quantizer">DoReFa Quantizer</a></li>
<li><a href="Compression/Quantizer.html#bnn-quantizer">BNN Quantizer</a></li>
</ul>
</ul>
<a href="FeatureEngineering/Overview.html">Feature Engineering (Beta)</a>
<ul class="circle">
<li><a href="FeatureEngineering/GradientFeatureSelector.html">GradientFeatureSelector</a></li>
<li><a href="FeatureEngineering/GBDTSelector.html">GBDTSelector</a></li>
</ul>
<a href="Assessor/BuiltinAssessor.html">Early Stop Algorithms</a>
<ul class="circle">
<li><a href="Assessor/BuiltinAssessor.html#MedianStop">Median Stop</a></li>
<li><a href="Assessor/BuiltinAssessor.html#Curvefitting">Curve Fitting</a></li>
</ul>
</td>
<td>
<ul class="firstUl">
<li><a href="TrainingService/LocalMode.html">Local Machine</a></li>
<li><a href="TrainingService/RemoteMachineMode.html">Remote Servers</a></li>
<li><a href="TrainingService/HybridMode.html">Hybrid mode</a></li>
<li><a href="TrainingService/AMLMode.html">AML(Azure Machine Learning)</a></li>
<li><b>Kubernetes based services</b></li>
<ul>
<li><a href="TrainingService/PaiMode.html">OpenPAI</a></li>
<li><a href="TrainingService/KubeflowMode.html">Kubeflow</a></li>
<li><a href="TrainingService/FrameworkControllerMode.html">FrameworkController on K8S (AKS etc.)</a></li>
<li><a href="TrainingService/DLTSMode.html">DLWorkspace (aka. DLTS)</a></li>
<li><a href="TrainingService/AdaptDLMode.html">AdaptDL (aka. ADL)</a></li>
</ul>
</ul>
</td>
</tr>
<tr valign="top">
<td class="verticalMiddle"><b>References</b></td>
<td>
<ul class="firstUl">
<li><a href="Tutorial/HowToLaunchFromPython.html">Python API</a></li>
<li><a href="Tutorial/AnnotationSpec.html">NNI Annotation</a></li>
<li><a href="installation.html">Supported OS</a></li>
</ul>
</td>
<td>
<ul class="firstUl">
<li><a href="Tuner/CustomizeTuner.html">CustomizeTuner</a></li>
<li><a href="Assessor/CustomizeAssessor.html">CustomizeAssessor</a></li>
<li><a href="Tutorial/InstallCustomizedAlgos.html">Install Customized Algorithms as Builtin Tuners/Assessors/Advisors</a></li>
<li><a href="NAS/QuickStart.html">Define NAS Model Space</a></li>
<li><a href="NAS/ApiReference.html">NAS/Retiarii APIs</a></li>
</ul>
</td>
<td>
<ul class="firstUl">
<li><a href="TrainingService/Overview.html">Support TrainingService</a></li>
<li><a href="TrainingService/HowToImplementTrainingService.html">Implement TrainingService</a></li>
</ul>
</td>
</tr>
</tbody>
</table>
<!-- Installation -->
<div class="gap">
<h2 class="title">Installation</h2>
<div>
<h3 class="second-title">Install</h3>
<div class="gap2">
NNI supports and is tested on Ubuntu >= 16.04, macOS >= 10.14.1,
and Windows 10 >= 1809. Simply run the following <code>pip install</code>
in an environment that has <code>python 64-bit >= 3.6</code>.
</div>
<div class="command-intro">Linux or macOS</div>
<div class="command">python3 -m pip install --upgrade nni</div>
<div class="command-intro">Windows</div>
<div class="command">python -m pip install --upgrade nni</div>
<div class="command-intro">If you want to try the latest code, please <a href="installation.html">install
NNI</a> from source code.
</div>
<div class="chinese">For detailed system requirements of NNI, please refer to <a href="Tutorial/InstallationLinux.html">here</a>
for Linux & macOS, and <a href="Tutorial/InstallationWin.html">here</a> for Windows.</div>
</div>
<div>
<p>Note:</p>
<ul>
<li>If there is any privilege issue, add --user to install NNI in the user directory.</li>
<li class="rowHeight">Currently NNI on Windows supports local, remote and pai mode. Anaconda or Miniconda is highly
recommended to install <a href="Tutorial/InstallationWin.html">NNI on Windows</a>.</li>
<li>If there is any error like Segmentation fault, please refer to <a
href="installation.html">FAQ</a>. For FAQ on Windows, please refer
to <a href="Tutorial/InstallationWin.html">NNI on Windows</a>.</li>
</ul>
</div>
<div>
<h3 class="second-title gap">Verify installation</h3>
<div>
The following example is built on PyTorch. Make sure <b>PyTorch is installed</b> before running
it.
</div>
<ul>
<li>
<div class="command-intro">Download the examples by cloning the source code.</div>
<div class="command">git clone -b v2.6 https://github.com/Microsoft/nni.git</div>
</li>
<li>
<div>Run the MNIST example.</div>
<div class="command-intro">Linux or macOS</div>
<div class="command">nnictl create --config nni/examples/trials/mnist-pytorch/config.yml</div>
<div class="command-intro">Windows</div>
<div class="command">nnictl create --config nni\examples\trials\mnist-pytorch\config_windows.yml</div>
</li>
<li>
<div class="rowHeight">
Wait for the message INFO: Successfully started experiment! in the command line.
This message indicates that your experiment has been successfully started.
You can explore the experiment using the Web UI url.
</div>
<!-- Indentation affects style! -->
<pre class="main-code">
INFO: Starting restful server...
INFO: Successfully started Restful server!
INFO: Setting local config...
INFO: Successfully set local config!
INFO: Starting experiment...
INFO: Successfully started experiment!
-----------------------------------------------------------------------
The experiment id is egchD4qy
The Web UI urls are: http://223.255.255.1:8080 http://127.0.0.1:8080
-----------------------------------------------------------------------
You can use these commands to get more information about the experiment
-----------------------------------------------------------------------
commands description
1. nnictl experiment show show the information of experiments
2. nnictl trial ls list all of trial jobs
3. nnictl top monitor the status of running experiments
4. nnictl log stderr show stderr log content
5. nnictl log stdout show stdout log content
6. nnictl stop stop an experiment
7. nnictl trial kill kill a trial job by id
8. nnictl --help get help information about nnictl
-----------------------------------------------------------------------
</pre>
</li>
<li class="rowHeight">
Open the Web UI url in your browser; you can view detailed information about the experiment and
all the submitted trial jobs, as shown below. <a href="Tutorial/WebUI.html">Here</a> are more Web UI
pages.
<img class="gap" src="_static/img/webui.gif" width="100%"/>
</li>
</ul>
</div>
<!-- Releases and Contributing -->
<div class="gap">
<h2 class="title">Releases and Contributing</h2>
<div>NNI has a monthly release cycle (major releases). Please let us know if you encounter a bug by filing an issue.</div>
<br/>
<div>We appreciate all contributions. If you are planning to contribute any bug-fixes, please do so without further discussions.</div>
<br/>
<div class="rowHeight">If you plan to contribute new features, new tuners, new training services, etc., please first open an issue or reuse an existing issue, and discuss the feature with us. We will respond on the issue in a timely manner, or set up conference calls if needed.</div>
<br/>
<div>To learn more about making a contribution to NNI, please refer to our <a href="contribution.html">how-to contribution page</a>.</div>
<br/>
<div>We appreciate all contributions and thank all the contributors!</div>
<img class="gap" src="_static/img/contributors.png"></img>
</div>
<!-- feedback -->
<div class="gap">
<h2 class="title">Feedback</h2>
<ul>
<li><a href="https://github.com/microsoft/nni/issues/new/choose">File an issue</a> on GitHub.</li>
<li>Open or participate in a <a href="https://github.com/microsoft/nni/discussions">discussion</a>.</li>
<li>Discuss on the <a href="https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge">NNI Gitter</a> channel.</li>
</ul>
<div>
<div class="rowHeight">Join IM discussion groups:</div>
<table class="gap" border=1 style="border-collapse: collapse;">
<tbody>
<tr style="line-height: 30px;">
<th>Gitter</th>
<td></td>
<th>WeChat</th>
</tr>
<tr>
<td class="QR">
<img src="https://user-images.githubusercontent.com/39592018/80665738-e0574a80-8acc-11ea-91bc-0836dc4cbf89.png" alt="Gitter" />
</td>
<td width="80" align="center" class="or">OR</td>
<td class="QR">
<img src="https://github.com/scarlett2018/nniutil/raw/master/wechat.png" alt="NNI Wechat" />
</td>
</tr>
</tbody>
</table>
</div>
</div>
<!-- Test status -->
<div class="gap">
<h2 class="title">Test status</h2>
<h3>Essentials</h3>
<table class="pipeline">
<tr>
<th>Type</th>
<th>Status</th>
</tr>
<tr>
<td>Fast test</td>
<td>
<a href="https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=54&branchName=master">
<img src="https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/fast%20test?branchName=master"/>
</a>
</td>
</tr>
<tr>
<td>Full linux</td>
<td>
<a href="https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=62&repoName=microsoft%2Fnni&branchName=master">
<img src="https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/full%20test%20-%20linux?repoName=microsoft%2Fnni&branchName=master"/>
</a>
</td>
</tr>
<tr>
<td>Full windows</td>
<td>
<a href="https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=63&branchName=master">
<img src="https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/full%20test%20-%20windows?branchName=master"/>
</a>
</td>
</tr>
</table>
<h3 class="gap">Training services</h3>
<table class="pipeline">
<tr>
<th>Type</th>
<th>Status</th>
<tr>
<td>Remote - linux to linux</td>
<td>
<a href="https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=64&branchName=master">
<img src="https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/integration%20test%20-%20remote%20-%20linux%20to%20linux?branchName=master"/>
</a>
</td>
</tr>
<tr>
<td>Remote - linux to windows</td>
<td>
<a href="https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=67&branchName=master">
<img src="https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/integration%20test%20-%20remote%20-%20linux%20to%20windows?branchName=master"/>
</a>
</td>
</tr>
<tr>
<td>Remote - windows to linux</td>
<td>
<a href="https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=68&branchName=master">
<img src="https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/integration%20test%20-%20remote%20-%20windows%20to%20linux?branchName=master"/>
</a>
</td>
</tr>
<tr>
<td>OpenPAI</td>
<td>
<a href="https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=65&branchName=master">
<img src="https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/integration%20test%20-%20openpai%20-%20linux?branchName=master"/>
</a>
</td>
</tr>
<tr>
<td>Frameworkcontroller</td>
<td>
<a href="https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=70&branchName=master">
<img src="https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/integration%20test%20-%20frameworkcontroller?branchName=master"/>
</a>
</td>
</tr>
<tr>
<td>Kubeflow</td>
<td>
<a href="https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=69&branchName=master">
<img src="https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/integration%20test%20-%20kubeflow?branchName=master"/>
</a>
</td>
</tr>
<tr>
<td>Hybrid</td>
<td>
<a href="https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=79&branchName=master">
<img src="https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/integration%20test%20-%20hybrid?branchName=master"/>
</a>
</td>
</tr>
<tr>
<td>AzureML</td>
<td>
<a href="https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=78&branchName=master">
<img src="https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/integration%20test%20-%20aml?branchName=master"/>
</a>
</td>
</tr>
</table>
</div>
<!-- Related Projects -->
<div class="gap">
<h2 class="title">Related Projects</h2>
<p class="rowHeight">
Targeting openness and advancing state-of-the-art technology,
<a href="https://www.microsoft.com/en-us/research/group/systems-and-networking-research-group-asia/">Microsoft Research (MSR)</a>
has also released a few
other open source projects.</p>
<ul id="relatedProject">
<li class="rowHeight">
<a href="https://github.com/Microsoft/pai">OpenPAI</a> : an open source platform that provides complete AI model
training and resource management
capabilities; it is easy to extend and supports on-premise,
cloud and hybrid environments at various scales.
</li>
<li class="rowHeight">
<a href="https://github.com/Microsoft/frameworkcontroller">FrameworkController</a> : an open source
general-purpose Kubernetes Pod Controller that orchestrates
all kinds of applications on Kubernetes with a single controller.
</li>
<li class="rowHeight">
<a href="https://github.com/Microsoft/MMdnn">MMdnn</a> : A comprehensive, cross-framework solution to convert,
visualize and diagnose deep neural network
models. The "MM" in MMdnn stands for model management
and "dnn" is an acronym for deep neural network.
</li>
<li class="rowHeight">
<a href="https://github.com/Microsoft/SPTAG">SPTAG</a> : Space Partition Tree And Graph (SPTAG) is an open
source library
for large-scale vector approximate nearest neighbor search scenarios.
</li>
<li class="rowHeight">
<a href="https://github.com/microsoft/nn-Meter">nn-Meter</a> : an accurate inference latency predictor for DNN models on diverse edge devices.
</li>
</ul>
<p>We encourage researchers and students to leverage these projects to accelerate AI development and research.</p>
</div>
<!-- License -->
<div>
<h2 class="title">License</h2>
<p>The entire codebase is under <a href="https://github.com/microsoft/nni/blob/master/LICENSE">MIT license</a></p>
</div>
</div>
</div>
NNI eases the effort to scale and manage AutoML experiments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. codesnippetcard::
:icon: ../img/thumbnails/training-service-small.svg
:title: Training Service
:link: experiment/training_service/overview
:seemore: See more here.
An AutoML experiment requires many trials to explore feasible and potentially good-performing models.
**Training service** aims to make the tuning process easily scalable on distributed platforms.
It provides a unified user experience for diverse computation resources (e.g., local machine, remote servers, AKS).
Currently, NNI supports **more than 9** kinds of training services.
.. codesnippetcard::
:icon: ../img/thumbnails/web-portal-small.svg
:title: Web Portal
:link: experiment/web_portal/web_portal
:seemore: See more here.
Web portal visualizes the tuning process, exposing the ability to inspect, monitor and control the experiment.
.. image:: ../static/img/webui.gif
:width: 100%
.. codesnippetcard::
:icon: ../img/thumbnails/experiment-management-small.svg
:title: Experiment Management
:link: experiment/experiment_management
:seemore: See more here.
DNN model tuning often requires more than one experiment.
Users might try different tuning algorithms, fine-tune their search space, or switch to another training service.
**Experiment management** provides the power to aggregate and compare tuning results from multiple experiments,
so that the tuning workflow becomes clean and organized.
Get Support and Contribute Back
-------------------------------
NNI is maintained on the `NNI GitHub repository <https://github.com/microsoft/nni>`_. We collect feedback and new proposals/ideas on GitHub. You can:
* Open a `GitHub issue <https://github.com/microsoft/nni/issues>`_ for bugs and feature requests.
* Open a `pull request <https://github.com/microsoft/nni/pulls>`_ to contribute code (make sure to read the :doc:`contribution guide <notes/contributing>` before doing this).
* Participate in `NNI Discussion <https://github.com/microsoft/nni/discussions>`_ for general questions and new ideas.
* Join the following IM groups.
.. list-table::
:header-rows: 1
:widths: auto
* - Gitter
- WeChat
* -
.. image:: https://user-images.githubusercontent.com/39592018/80665738-e0574a80-8acc-11ea-91bc-0836dc4cbf89.png
-
.. image:: https://github.com/scarlett2018/nniutil/raw/master/wechat.png
Citing NNI
----------
If you use NNI in a scientific publication, please consider citing NNI in your references.
Microsoft. Neural Network Intelligence (version |release|). https://github.com/microsoft/nni
Bibtex entry (please replace the version with the particular version you are using): ::
@software{nni2021,
author = {{Microsoft}},
month = {1},
title = {{Neural Network Intelligence}},
url = {https://github.com/microsoft/nni},
version = {2.0},
year = {2021}
}
.. 1c1500ed177d6b4badecd72037a24a30
###########################
Neural Network Intelligence
###########################
.. toctree::
:maxdepth: 2
:titlesonly:
:hidden:
Overview <Overview>
Installation <installation>
QuickStart <Tutorial/QuickStart>
Tutorials <tutorials>
Auto (Hyper-parameter) Tuning <hyperparameter_tune>
Neural Architecture Search <nas>
Model Compression <model_compression>
Feature Engineering <feature_engineering>
References <reference>
Use Cases and Solutions <CommunitySharings/community_sharings>
Research and Publications <ResearchPublications>
FAQ <Tutorial/FAQ>
How to Contribute <contribution>
Change Log <Release>
.. dbd41cab307bcd76cc747b3d478709b8
NNI Documentation
=================
.. toctree::
:maxdepth: 2
:caption: Get Started
:hidden:
Installation <installation>
QuickStart <quickstart>
.. toctree::
:maxdepth: 2
:caption: User Guide
:hidden:
Hyperparameter Optimization <hpo/toctree>
Neural Architecture Search <nas/toctree>
Model Compression <compression/toctree>
Feature Engineering <feature_engineering/toctree>
Experiment Management <experiment/toctree>
.. toctree::
:maxdepth: 2
:caption: References
:hidden:
Python API <reference/python_api>
Experiment Configuration <reference/experiment_config>
nnictl Commands <reference/nnictl>
.. toctree::
:maxdepth: 2
:caption: Misc
:hidden:
Examples <examples>
Community Sharings <sharings/community_sharings>
Research Publications <notes/research_publications>
Build from Source <notes/build_from_source>
Contribution Guide <notes/contributing>
Release Notes <release>
**NNI (Neural Network Intelligence)** is a lightweight but powerful toolkit to help users **automate**:
* :doc:`Hyperparameter Optimization </hpo/overview>`
* :doc:`Neural Architecture Search </nas/overview>`
* :doc:`Model Compression </compression/overview>`
* :doc:`Feature Engineering </feature_engineering/overview>`
Get Started
-----------
To install the latest release:
.. code-block:: bash
$ pip install nni
See the :doc:`installation guide </installation>` if you run into problems during installation.
Try your first NNI experiment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: shell
$ nnictl hello
.. note:: You need to have `PyTorch <https://pytorch.org/>`_ (as well as `torchvision <https://pytorch.org/vision/stable/index.html>`_) installed to run this experiment.
To start your journey now, please follow the :doc:`absolute quickstart of NNI <quickstart>`!
Why choose NNI?
---------------
NNI makes AutoML techniques plug-and-play
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. raw:: html
<div class="rowHeight">
<div class="chinese"><a href="https://nni.readthedocs.io/zh/stable/">English</a></div>
<b>NNI (Neural Network Intelligence)</b> is a lightweight but powerful toolkit that helps users <b>automate</b>
<a href="FeatureEngineering/Overview.html">Feature Engineering</a>, <a href="NAS/Overview.html">Neural Architecture Search</a>, <a href="Tuner/BuiltinTuner.html">Hyperparameter Tuning</a> and <a href="Compression/Overview.html">Model Compression</a>.
</div>
<p class="gap rowHeight">
NNI manages automated machine learning (AutoML) experiments, and
<b>dispatches and runs</b>
trial jobs generated by tuning algorithms to search for the best neural network architecture and/or hyper-parameters. It supports
<b>various training environments</b>, such as
<a href="TrainingService/LocalMode.html">Local Machine</a>,
<a href="TrainingService/RemoteMachineMode.html">Remote Servers</a>,
<a href="TrainingService/PaiMode.html">OpenPAI</a>,
<a href="TrainingService/KubeflowMode.html">Kubeflow</a>,
<a href="TrainingService/FrameworkControllerMode.html">FrameworkController on K8S (AKS etc.)</a>,
<a href="TrainingService/DLTSMode.html">DLWorkspace (aka. DLTS)</a>,
<a href="TrainingService/AMLMode.html">AML (Azure Machine Learning)</a>
and other cloud services.
</p>
<!-- Who should consider using NNI -->
<div>
<h1 class="title">Who should consider using NNI</h1>
<ul>
<li>Those who want to try <b>different AutoML algorithms</b> in their own code or models.</li>
<li>Those who want to run AutoML <b>in different environments</b> to speed up the search.</li>
<li>Researchers and data scientists who want to easily <b>implement or experiment with new AutoML algorithms</b>, including hyperparameter tuning algorithms, neural architecture search algorithms and model compression algorithms.
</li>
<li>ML platform owners who want to <b>support AutoML in their platform</b>.</li>
</ul>
</div>
<!-- nni release to version -->
<div class="inline gap">
<h3><a href="https://github.com/microsoft/nni/releases">NNI v2.6 has been released!</a></h3>
<img width="48" src="_static/img/release_icon.png">
</div>
<!-- NNI capabilities in a glance -->
<div class="gap">
<h1 class="title">NNI capabilities in a glance</h1>
<p class="rowHeight">
NNI provides a command line tool as well as a user-friendly WebUI to manage training experiments.
With the extensible API, you can customize your own AutoML algorithms and training platforms.
To make it easy for new users, NNI ships with the latest built-in AutoML algorithms and provides out-of-the-box support for popular training platforms.
</p>
<p class="rowHeight">
The following table summarizes NNI's capabilities. New capabilities are being added continuously, and your contributions are very welcome.
</p>
</div>
<p align="center">
<a href="#overview"><img src="_static/img/overview.svg" /></a>
</p>
<table class="main-table">
<tbody>
<tr align="center" valign="bottom" class="column">
<td></td>
<td class="framework">
<b>Frameworks and Libraries</b>
</td>
<td>
<b>Algorithms</b>
</td>
<td>
<b>Training Services</b>
</td>
</tr>
<tr>
<td class="verticalMiddle"><b>Built-in</b></td>
<td>
<ul class="firstUl">
<li><b>Supported Frameworks</b></li>
<ul class="circle">
<li>PyTorch</li>
<li>Keras</li>
<li>TensorFlow</li>
<li>MXNet</li>
<li>Caffe2</li>
<a href="SupportedFramework_Library.html">More...</a><br />
</ul>
</ul>
<ul class="firstUl">
<li><b>Supported Libraries</b></li>
<ul class="circle">
<li>Scikit-learn</li>
<li>XGBoost</li>
<li>LightGBM</li>
<a href="SupportedFramework_Library.html">More...</a><br />
</ul>
</ul>
<ul class="firstUl">
<li><b>Examples</b></li>
<ul class="circle">
<li><a href="https://github.com/microsoft/nni/tree/master/examples/trials/mnist-pytorch">MNIST-pytorch</a></li>
<li><a href="https://github.com/microsoft/nni/tree/master/examples/trials/mnist-tfv2">MNIST-tensorflow</a></li>
<li><a href="https://github.com/microsoft/nni/tree/master/examples/trials/mnist-keras">MNIST-keras</a></li>
<li><a href="TrialExample/GbdtExample.html">Auto-gbdt</a></li>
<li><a href="TrialExample/Cifar10Examples.html">Cifar10-pytorch</a></li>
<li><a href="TrialExample/SklearnExamples.html">Scikit-learn</a></li>
<li><a href="TrialExample/EfficientNet.html">EfficientNet</a></li>
<li><a href="TrialExample/OpEvoExamples.html">GPU Kernel Tuning</a></li>
<a href="SupportedFramework_Library.html">More...</a><br />
</ul>
</ul>
</td>
<td align="left">
<a href="Tuner/BuiltinTuner.html">Hyperparameter Tuning</a>
<ul class="firstUl">
<div><b>Exhaustive search</b></div>
<ul class="circle">
<li><a href="Tuner/BuiltinTuner.html#Random">Random Search</a></li>
<li><a href="Tuner/BuiltinTuner.html#GridSearch">Grid Search</a></li>
<li><a href="Tuner/BuiltinTuner.html#Batch">Batch</a></li>
</ul>
<div><b>Heuristic search</b></div>
<ul class="circle">
<li><a href="Tuner/BuiltinTuner.html#Evolution">Naïve Evolution</a></li>
<li><a href="Tuner/BuiltinTuner.html#Anneal">Anneal</a></li>
<li><a href="Tuner/BuiltinTuner.html#Hyperband">Hyperband</a></li>
<li><a href="Tuner/BuiltinTuner.html#PBTTuner">PBT Tuner</a></li>
</ul>
<div><b>Bayesian optimization</b></div>
<ul class="circle">
<li><a href="Tuner/BuiltinTuner.html#BOHB">BOHB</a></li>
<li><a href="Tuner/BuiltinTuner.html#TPE">TPE</a></li>
<li><a href="Tuner/BuiltinTuner.html#SMAC">SMAC</a></li>
<li><a href="Tuner/BuiltinTuner.html#MetisTuner">Metis Tuner</a></li>
<li><a href="Tuner/BuiltinTuner.html#GPTuner">GP Tuner</a></li>
<li><a href="Tuner/BuiltinTuner.html#DNGOTuner">DNGO Tuner</a></li>
</ul>
</ul>
<a href="NAS/Overview.html">Neural Architecture Search</a>
<ul class="firstUl">
<ul class="circle">
<li><a href="NAS/ENAS.html">ENAS</a></li>
<li><a href="NAS/DARTS.html">DARTS</a></li>
<li><a href="NAS/SPOS.html">SPOS</a></li>
<li><a href="NAS/Proxylessnas.html">ProxylessNAS</a></li>
<li><a href="NAS/FBNet.html">FBNet</a></li>
<li><a href="NAS/ExplorationStrategies.html">Reinforcement Learning</a></li>
<li><a href="NAS/ExplorationStrategies.html">Network Morphism</a></li>
<li><a href="NAS/Overview.html">TextNAS</a></li>
</ul>
</ul>
<a href="Compression/Overview.html">Model Compression</a>
<ul class="firstUl">
<div><b>Pruning</b></div>
<ul class="circle">
<li><a href="Compression/Pruner.html#agp-pruner">AGP Pruner</a></li>
<li><a href="Compression/Pruner.html#slim-pruner">Slim Pruner</a></li>
<li><a href="Compression/Pruner.html#fpgm-pruner">FPGM Pruner</a></li>
<li><a href="Compression/Pruner.html#netadapt-pruner">NetAdapt Pruner</a></li>
<li><a href="Compression/Pruner.html#simulatedannealing-pruner">SimulatedAnnealing Pruner</a></li>
<li><a href="Compression/Pruner.html#admm-pruner">ADMM Pruner</a></li>
<li><a href="Compression/Pruner.html#autocompress-pruner">AutoCompress Pruner</a></li>
<li><a href="Compression/Overview.html">More...</a></li>
</ul>
<div><b>Quantization</b></div>
<ul class="circle">
<li><a href="Compression/Quantizer.html#qat-quantize">QAT Quantizer</a></li>
<li><a href="Compression/Quantizer.html#dorefa-quantizer">DoReFa Quantizer</a></li>
<li><a href="Compression/Quantizer.html#bnn-quantizer">BNN Quantizer</a></li>
</ul>
</ul>
<a href="FeatureEngineering/Overview.html">Feature Engineering (Beta)</a>
<ul class="circle">
<li><a href="FeatureEngineering/GradientFeatureSelector.html">GradientFeatureSelector</a></li>
<li><a href="FeatureEngineering/GBDTSelector.html">GBDTSelector</a></li>
</ul>
<a href="Assessor/BuiltinAssessor.html">Early Stop Algorithms</a>
<ul class="circle">
<li><a href="Assessor/BuiltinAssessor.html#MedianStop">Median Stop</a></li>
<li><a href="Assessor/BuiltinAssessor.html#Curvefitting">Curve Fitting</a></li>
</ul>
</td>
<td>
<ul class="firstUl">
<li><a href="TrainingService/LocalMode.html">Local Machine</a></li>
<li><a href="TrainingService/RemoteMachineMode.html">Remote Machines</a></li>
<li><a href="TrainingService/HybridMode.html">Hybrid Mode</a></li>
<li><a href="TrainingService/AMLMode.html">AML (Azure Machine Learning)</a></li>
<li><b>Kubernetes-based services</b></li>
<ul>
<li><a href="TrainingService/PaiMode.html">OpenPAI</a></li>
<li><a href="TrainingService/KubeflowMode.html">Kubeflow</a></li>
<li><a href="TrainingService/FrameworkControllerMode.html">FrameworkController on K8S (e.g. AKS)</a></li>
<li><a href="TrainingService/DLTSMode.html">DLWorkspace (aka. DLTS)</a></li>
<li><a href="TrainingService/AdaptDLMode.html">AdaptDL</a></li>
</ul>
</ul>
</td>
</tr>
<tr valign="top">
<td class="verticalMiddle"><b>References</b></td>
<td>
<ul class="firstUl">
<li><a href="Tutorial/HowToLaunchFromPython.html">Python API</a></li>
<li><a href="Tutorial/AnnotationSpec.html">NNI Annotation</a></li>
<li><a href="installation.html">Supported OS</a></li>
</ul>
</td>
<td>
<ul class="firstUl">
<li><a href="Tuner/CustomizeTuner.html">Customize Tuner</a></li>
<li><a href="Assessor/CustomizeAssessor.html">Customize Assessor</a></li>
<li><a href="Tutorial/InstallCustomizedAlgos.html">Install customized Tuners, Assessors and Advisors</a></li>
<li><a href="NAS/QuickStart.html">Define NAS model space</a></li>
<li><a href="NAS/ApiReference.html">NAS/Retiarii APIs</a></li>
</ul>
</td>
<td>
<ul class="firstUl">
<li><a href="TrainingService/Overview.html">Supported training services</a></li>
<li><a href="TrainingService/HowToImplementTrainingService.html">Implement a training service</a></li>
</ul>
</td>
</tr>
</tbody>
</table>
<!-- Installation -->
<div>
<h1 class="title">Installation</h1>
<div>
<h2 class="second-title">Install</h2>
<p>
NNI supports and is tested on Ubuntu >= 16.04, macOS >= 10.14.1, and Windows 10 >= 1809. Simply run <code>pip install</code> in an environment with <code>python 64-bit >= 3.6</code>.
</p>
<div class="command-intro">Linux or macOS</div>
<div class="command">python3 -m pip install --upgrade nni</div>
<div class="command-intro">Windows</div>
<div class="command">python -m pip install --upgrade nni</div>
<p class="topMargin">If you want to try the latest code, you can <a href="installation.html">install
NNI</a> from source.
</p>
<p>For NNI system requirements, see <a href="Tutorial/InstallationLinux.html">here</a> for Linux and macOS, and <a href="Tutorial/InstallationWin.html">here</a> for Windows.</p>
</div>
<div>
<p>Note:</p>
<ul>
<li>If you run into any permission issue, add --user to install NNI in the user directory.</li>
<li>Currently, NNI on Windows supports local, remote and OpenPAI modes. Anaconda or Miniconda is highly recommended to <a href="Tutorial/InstallationWin.html">install NNI on Windows</a>.</li>
<li>If there is any error like Segmentation fault, please refer to the <a
href="installation.html">FAQ</a>. For FAQ on Windows, please refer to <a href="Tutorial/InstallationWin.html">NNI on Windows</a>.</li>
</ul>
</div>
<div>
<h2 class="second-title">Verify installation</h2>
<p>
The following example is built on TensorFlow 1.x. Make sure <b>TensorFlow 1.x</b> is used in your environment.
</p>
<ul>
<li>
<p>Download the examples by cloning the source code.</p>
<div class="command">git clone -b v2.6 https://github.com/Microsoft/nni.git</div>
</li>
<li>
<p>Run the MNIST example.</p>
<div class="command-intro">Linux or macOS</div>
<div class="command">nnictl create --config nni/examples/trials/mnist-tfv1/config.yml</div>
<div class="command-intro">Windows</div>
<div class="command">nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml</div>
</li>
<li>
<p>
Wait for the message INFO: Successfully started experiment! in the command line.
This message indicates that your experiment has been started successfully.
You can explore the experiment through the Web UI url printed in the command line.
</p>
<!-- Indentation affects style! -->
<pre class="main-code">
INFO: Starting restful server...
INFO: Successfully started Restful server!
INFO: Setting local config...
INFO: Successfully set local config!
INFO: Starting experiment...
INFO: Successfully started experiment!
-----------------------------------------------------------------------
The experiment id is egchD4qy
The Web UI urls are: http://223.255.255.1:8080 http://127.0.0.1:8080
-----------------------------------------------------------------------
You can use these commands to get more information about the experiment
-----------------------------------------------------------------------
commands description
1. nnictl experiment show show the information of experiments
2. nnictl trial ls list all of trial jobs
3. nnictl top monitor the status of running experiments
4. nnictl log stderr show stderr log content
5. nnictl log stdout show stdout log content
6. nnictl stop stop an experiment
7. nnictl trial kill kill a trial job by id
8. nnictl --help get help information about nnictl
-----------------------------------------------------------------------
</pre>
</li>
<li class="rowHeight">
Open the Web UI url in your browser; you can view detailed information about the experiment and all the submitted trial jobs as shown below. See <a href="Tutorial/WebUI.html">here</a> for more pages.
<img src="_static/img/webui.gif" width="100%"/>
</li>
</ul>
</div>
</div>
<!-- Documentation -->
<div>
<h1 class="title">Documentation</h1>
<ul>
<li>To learn what NNI is, read the <a href="Overview.html">NNI Overview</a>.</li>
<li>To get familiar with how to use NNI, read the <a href="index.html">documentation</a>.</li>
<li>To install NNI, refer to <a href="installation.html">Install NNI</a>.</li>
</ul>
</div>
<!-- Contributing -->
<div>
<h1 class="title">Contributing</h1>
<p>
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution.
For details, visit <a href="https://cla.microsoft.com">https://cla.microsoft.com</a>.
</p>
<p>
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You only need to do this once, and it applies to all repos using our CLA.
</p>
<p>
This project has adopted the <a href="https://opensource.microsoft.com/codeofconduct/">Microsoft Open Source Code of Conduct</a>. For more information, see the <a href="https://opensource.microsoft.com/codeofconduct/faq/">Code of Conduct FAQ</a> or contact <a
href="mailto:opencode@microsoft.com">opencode@microsoft.com</a> with any questions or comments.
</p>
<p>
After getting familiar with the contribution agreements, you are ready to create your first PR =), following the NNI developer tutorials:
</p>
<ul>
<li>We recommend new contributors to start with simple issues: <a
href="https://github.com/Microsoft/nni/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22">'good first issue'</a> or <a
href="https://github.com/microsoft/nni/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22">'help-wanted'</a>.
</li>
<li><a href="Tutorial/SetupNniDeveloperEnvironment.html">NNI developer environment installation tutorial</a></li>
<li><a href="Tutorial/HowToDebug.html">How to debug</a></li>
<li>
If you have any questions on usage, check the <a href="Tutorial/FAQ.html">FAQ</a> first. If it does not solve your problem, contact the NNI dev team on <a
href="https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge">Gitter</a>
or <a href="https://github.com/microsoft/nni/issues/new/choose">file an issue</a> on GitHub.
</li>
<li><a href="Tuner/CustomizeTuner.html">Customize your own Tuner</a></li>
<li><a href="TrainingService/HowToImplementTrainingService.html">Implement a customized training service</a>
</li>
<li><a href="NAS/Advanced.html">Implement a new NAS trainer on NNI</a></li>
<li><a href="Tuner/CustomizeAdvisor.html">Customize your own Advisor</a></li>
</ul>
</div>
<!-- External Repositories and References -->
<div>
<h1 class="title">External Repositories and References</h1>
<p>With authors' permission, we list some NNI usage examples and relevant articles.</p>
<ul>
<h2>External Repositories</h2>
<li>Run <a href="NAS/ENAS.html">ENAS</a> with NNI</li>
<li>
https://github.com/microsoft/nni/blob/master/examples/feature_engineering/auto-feature-engineering/README_zh_CN.md
</li>
<li><a
href="https://github.com/microsoft/recommenders/blob/master/examples/04_model_select_and_optimize/nni_surprise_svd.ipynb">Hyperparameter tuning for matrix factorization</a> with NNI</li>
<li><a href="https://github.com/ksachdeva/scikit-nni">scikit-nni</a>: hyperparameter search for scikit-learn, powered by NNI.</li>
</ul>
<!-- Relevant Articles -->
<ul>
<h2>Relevant Articles</h2>
<li><a href="CommunitySharings/HpoComparison.html">Hyperparameter Optimization Comparison</a></li>
<li><a href="CommunitySharings/NasComparison.html">Neural Architecture Search Comparison</a></li>
<li><a href="CommunitySharings/ParallelizingTpeSearch.html">Parallelizing a Sequential Algorithm: TPE</a>
</li>
<li><a href="CommunitySharings/RecommendersSvd.html">Automatically tuning SVD with NNI</a></li>
<li><a href="CommunitySharings/SptagAutoTune.html">Automatically tuning SPTAG with NNI</a></li>
<li><a
href="https://towardsdatascience.com/find-thy-hyper-parameters-for-scikit-learn-pipelines-using-microsoft-nni-f1015b1224c1">
Find thy hyper-parameters for scikit-learn pipelines using Microsoft NNI
</a></li>
<li>
<strong>Blog (Chinese)</strong> - <a
href="http://gaocegege.com/Blog/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/katib-new#%E6%80%BB%E7%BB%93%E4%B8%8E%E5%88%86%E6%9E%90">A comparison of AutoML tools (Advisor, NNI and Google Vizier)</a> by @gaocegege - the summary and analysis section of the design and implementation of kubeflow/katib
</li>
<li>
Blog (Chinese) - <a href="https://mp.weixin.qq.com/s/7_KRT-rRojQbNuJzkjFMuA">An overview of new features in NNI 2019</a> by @squirrelsc
</li>
</ul>
</div>
<!-- feedback -->
<div>
<h1 class="title">Feedback</h1>
<ul>
<li><a href="https://github.com/microsoft/nni/issues/new/choose">File an issue</a> on GitHub.</li>
<li>Ask a question on <a
href="https://stackoverflow.com/questions/tagged/nni?sort=Newest&edited=true">Stack Overflow</a> with the nni tag.
</li>
<li>Discuss in the NNI <a
href="https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge">Gitter</a> chat room.</li>
</ul>
<div>
<div>Join our chat groups:</div>
<table border=1 style="border-collapse: collapse;">
<tbody>
<tr style="line-height: 30px;">
<th>Gitter</th>
<td></td>
<th>WeChat</th>
</tr>
<tr>
<td class="QR">
<img src="https://user-images.githubusercontent.com/39592018/80665738-e0574a80-8acc-11ea-91bc-0836dc4cbf89.png" alt="Gitter" />
</td>
<td width="80" align="center" class="or">OR</td>
<td class="QR">
<img src="https://github.com/scarlett2018/nniutil/raw/master/wechat.png" alt="NNI WeChat" />
</td>
</tr>
</tbody>
</table>
</div>
</div>
<!-- Related Projects -->
<div>
<h1 class="title">Related Projects</h1>
<p>
Targeting openness and advanced technology, <a href="https://www.microsoft.com/zh-cn/research/group/systems-and-networking-research-group-asia/">Microsoft Research (MSR)</a> has also released a few related open source projects.</p>
<ul id="relatedProject">
<li>
<a href="https://github.com/Microsoft/pai">OpenPAI</a>: an open source platform that provides complete AI model training and resource management capabilities; it is easy to extend and supports on-premise, cloud and hybrid environments at various scales.
</li>
<li>
<a href="https://github.com/Microsoft/frameworkcontroller">FrameworkController</a>: an open source general-purpose Kubernetes Pod controller that orchestrates all kinds of applications on Kubernetes through a single controller.
</li>
<li>
<a href="https://github.com/Microsoft/MMdnn">MMdnn</a>: a comprehensive, cross-framework solution to convert, visualize and diagnose deep neural network models. The "MM" in MMdnn stands for model management, and "dnn" is an acronym for deep neural network.
</li>
<li>
<a href="https://github.com/Microsoft/SPTAG">SPTAG</a>: Space Partition Tree And Graph (SPTAG) is an open source library for large-scale vector nearest neighbor search scenarios.
</li>
</ul>
<p>We encourage researchers and students to leverage these projects to accelerate AI development and research.</p>
</div>
<!-- License -->
<div>
<h1 class="title">License</h1>
<p>The entire codebase is under the <a href="https://github.com/microsoft/nni/blob/master/LICENSE">MIT license</a>.</p>
</div>
</div>
<div class="codesnippet-card-container">
.. codesnippetcard::
:icon: ../img/thumbnails/hpo-small.svg
:title: Hyperparameter Tuning
:link: tutorials/hpo_quickstart_pytorch/main
:seemore: Read the full tutorial here
.. code-block::
params = nni.get_next_parameter()
class Net(nn.Module):
...
model = Net()
optimizer = optim.SGD(model.parameters(),
params['lr'],
params['momentum'])
for epoch in range(10):
train(...)
accuracy = test(model)
nni.report_final_result(accuracy)
.. codesnippetcard::
:icon: ../img/thumbnails/pruning-small.svg
:title: Model Pruning
:link: tutorials/pruning_quick_start_mnist
:seemore: Read the full tutorial here
.. code-block::
# define a config_list
config = [{
'sparsity': 0.8,
'op_types': ['Conv2d']
}]
# generate masks for simulated pruning
wrapped_model, masks = \
L1NormPruner(model, config). \
compress()
# apply the masks for real speedup
ModelSpeedup(unwrapped_model, input, masks). \
speedup_model()
.. codesnippetcard::
:icon: ../img/thumbnails/quantization-small.svg
:title: Model Quantization
:link: tutorials/quantization_speedup
:seemore: Read the full tutorial here
.. code-block::
# define a config_list
config = [{
'quant_types': ['input', 'weight'],
'quant_bits': {'input': 8, 'weight': 8},
'op_types': ['Conv2d']
}]
# in case quantizer needs a extra training
quantizer = QAT_Quantizer(model, config)
quantizer.compress()
# Training...
# export calibration config and
# generate TensorRT engine for real speedup
calibration_config = quantizer.export_model(
model_path, calibration_path)
engine = ModelSpeedupTensorRT(
model, input_shape, config=calib_config)
engine.compress()
.. codesnippetcard::
:icon: ../img/thumbnails/multi-trial-nas-small.svg
:title: Neural Architecture Search
:link: tutorials/hello_nas
:seemore: Read the full tutorial here
.. code-block::
# define model space
- self.conv2 = nn.Conv2d(32, 64, 3, 1)
+ self.conv2 = nn.LayerChoice([
+ nn.Conv2d(32, 64, 3, 1),
+ DepthwiseSeparableConv(32, 64)
+ ])
# search strategy + evaluator
strategy = RegularizedEvolution()
evaluator = FunctionalEvaluator(
train_eval_fn)
# run experiment
RetiariiExperiment(model_space,
evaluator, strategy).run()
.. codesnippetcard::
:icon: ../img/thumbnails/one-shot-nas-small.svg
:title: One-shot NAS
:link: nas/exploration_strategy
:seemore: Read the full tutorial here
.. code-block::
# define model space
space = AnySearchSpace()
# get a darts trainer
trainer = DartsTrainer(space, loss, metrics)
trainer.fit()
# get final searched architecture
arch = trainer.export()
.. codesnippetcard::
:icon: ../img/thumbnails/feature-engineering-small.svg
:title: Feature Engineering
:link: feature_engineering/overview
:seemore: Read the full tutorial here
.. code-block::
selector = GBDTSelector()
selector.fit(
X_train, y_train,
lgb_params=lgb_params,
eval_ratio=eval_ratio,
early_stopping_rounds=10,
importance_type='gain',
num_boost_round=1000)
# get selected features
features = selector.get_selected_features()
.. End of code snippet card
.. raw:: html
</div>
NNI reduces the cost of AutoML experiment management
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. codesnippetcard::
:icon: ../img/thumbnails/training-service-small.svg
:title: Training Services
:link: experiment/training_service/overview
:seemore: Learn more here
An AutoML experiment usually needs many trials to find suitable and potentially well-performing models.
**Training services** make the tuning process easy to scale out to distributed platforms, providing a unified user experience across different computing resources (e.g., local machine, remote servers, clusters).
Currently, NNI supports **more than nine** training services.
.. codesnippetcard::
:icon: ../img/thumbnails/web-portal-small.svg
:title: Web Portal
:link: experiment/web_portal/web_portal
:seemore: Learn more here
The web portal visualizes the tuning process, letting you easily inspect, track and control the experiment flow.
.. image:: ../static/img/webui.gif
:width: 100%
.. codesnippetcard::
:icon: ../img/thumbnails/experiment-management-small.svg
:title: Experiment Management
:link: experiment/experiment_management
:seemore: Learn more here
Deep learning models often require many rounds of experimentation; for example, users may want to try different tuning algorithms, refine their search space, or switch to other computing resources.
**Experiment management** provides the power to aggregate and compare results across multiple experiments, greatly simplifying the development workflow.
Get help or contribute
----------------------
NNI is maintained in the `NNI GitHub repository <https://github.com/microsoft/nni>`_, where we collect feedback as well as new requirements and ideas. You can:
* Open a `GitHub issue <https://github.com/microsoft/nni/issues>`_ to report a bug or request a feature.
* Open a `pull request <https://github.com/microsoft/nni/pulls>`_ to contribute code (make sure you have read the :doc:`contribution guide <notes/contributing>` before doing so).
* Join the `NNI discussions <https://github.com/microsoft/nni/discussions>`_ if you have any questions.
* Join our instant chat groups:
.. list-table::
:header-rows: 1
:widths: auto
* - Gitter
- WeChat
* -
.. image:: https://user-images.githubusercontent.com/39592018/80665738-e0574a80-8acc-11ea-91bc-0836dc4cbf89.png
-
.. image:: https://github.com/scarlett2018/nniutil/raw/master/wechat.png
Citing NNI
----------
If you use NNI in a scientific publication, please consider citing NNI in your references:
Microsoft. Neural Network Intelligence (version |release|). https://github.com/microsoft/nni
Bibtex entry (please replace the version with the particular version you are using): ::
@software{nni2021,
author = {{Microsoft}},
month = {1},
title = {{Neural Network Intelligence}},
url = {https://github.com/microsoft/nni},
version = {2.0},
year = {2021}
}
############
Installation
############
Install NNI
===========
Currently we support installation on Linux, macOS, and Windows. Docker is also supported.
NNI requires Python >= 3.7.
It is tested and supported on Ubuntu >= 18.04,
Windows 10 >= 21H2, and macOS >= 11.
.. toctree::
   :maxdepth: 2

   Linux & Mac <Tutorial/InstallationLinux>
   Windows <Tutorial/InstallationWin>
   Use Docker <Tutorial/HowToUseDocker>

There are 3 ways to install NNI:
* :ref:`Using pip <installation-pip>`
* :ref:`Build source code <installation-source>`
* :ref:`Using Docker <installation-docker>`
.. _installation-pip:
Using pip
---------
NNI provides official packages for x86-64 CPUs. They can be installed with pip:
.. code-block:: text
pip install nni
Or to upgrade to the latest version:
.. code-block:: text
pip install --upgrade nni
You can check installation with:
.. code-block:: text
nnictl --version
On Linux systems without Conda, you may encounter ``bash: nnictl: command not found`` error.
In this case you need to add pip script directory to ``PATH``:
.. code-block:: bash
echo 'export PATH=${PATH}:${HOME}/.local/bin' >> ~/.bashrc
source ~/.bashrc
.. _installation-source:
Installing from Source Code
---------------------------
NNI hosts source code on `GitHub <https://github.com/microsoft/nni>`__.
NNI has experimental support for ARM64 CPUs, including Apple M1.
It requires installing from source code.
See :doc:`/notes/build_from_source`.
.. _installation-docker:
Using Docker
------------
NNI provides official Docker image on `Docker Hub <https://hub.docker.com/r/msranni/nni>`__.
.. code-block:: text
docker pull msranni/nni
Installing Extra Dependencies
-----------------------------
Some built-in algorithms of NNI require extra packages.
Use ``nni[<algorithm-name>]`` to install their dependencies.
For example, to install dependencies of :class:`DNGO tuner<nni.algorithms.hpo.dngo_tuner.DNGOTuner>` :
.. code-block:: text
pip install nni[DNGO]
This command will not reinstall NNI itself, even if it was installed in development mode.
Alternatively, you may install all extra dependencies at once:
.. code-block:: text
pip install nni[all]
**NOTE**: SMAC tuner depends on swig3, which requires a manual downgrade on Ubuntu:
.. code-block:: bash
sudo apt install swig3.0
sudo rm /usr/bin/swig
sudo ln -s swig3.0 /usr/bin/swig
.. c62173d7147a43a13bf2cdf945b82d07
.. b4703fc8c8e8dc1babdb38ba9ebcd4a6
############
安装
############
安装 NNI
========
Currently we support installation on Linux, macOS and Windows. Docker is also supported.
NNI requires Python 3.7 or above.
.. toctree::
   :maxdepth: 2

   Linux & macOS <Tutorial/InstallationLinux>
   Windows <Tutorial/InstallationWin>
   Use Docker <Tutorial/HowToUseDocker>

There are three ways to install NNI:
* :ref:`Using pip <zh-installation-pip>`
* :ref:`Building from source code <zh-installation-source>`
* :ref:`Using a Docker container <zh-installation-docker>`
.. _zh-installation-pip:
Using pip
---------
NNI provides pre-built packages for the x86-64 platform, which can be installed with pip:
.. code-block:: text
pip install nni
You can also upgrade an existing installation to the latest NNI version:
.. code-block:: text
pip install --upgrade nni
After installation, run the following command to check it:
.. code-block:: text
nnictl --version
On Linux systems without Conda, you may encounter a ``bash: nnictl: command not found`` error.
In that case, add the pip script directory to the ``PATH`` environment variable:
.. code-block:: bash
echo 'export PATH=${PATH}:${HOME}/.local/bin' >> ~/.bashrc
source ~/.bashrc
.. _zh-installation-source:
Installing from Source Code
---------------------------
NNI hosts its source code on `GitHub <https://github.com/microsoft/nni>`__.
NNI has experimental support for the ARM64 platform (including Apple M1); to use NNI on such platforms, install from source code.
See :doc:`/notes/build_from_source` for build instructions.
.. _zh-installation-docker:
Using Docker
------------
NNI provides an official Docker image on `Docker Hub <https://hub.docker.com/r/msranni/nni>`__.
.. code-block:: text
docker pull msranni/nni
Installing Extra Dependencies
-----------------------------
Some algorithms depend on extra pip packages, which must be installed with ``nni[<algorithm-name>]`` before use. For example, for the DNGO algorithm, run the following command first:
.. code-block:: text
pip install nni[DNGO]
If you have already installed NNI by any means, the command above will not reinstall NNI or change its version; it only installs the extra dependencies of the DNGO algorithm.
You can also install all optional dependencies at once:
.. code-block:: text
pip install nni[all]
**NOTE**: The SMAC algorithm depends on swig3, which requires a manual downgrade on Ubuntu:
.. code-block:: bash
sudo apt install swig3.0
sudo rm /usr/bin/swig
sudo ln -s swig3.0 /usr/bin/swig
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-13 03:14+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../source/compression/overview.rst:2
msgid "Overview of NNI Model Compression"
msgstr ""
#: ../../source/compression/overview.rst:4
msgid ""
"Deep neural networks (DNNs) have achieved great success in many tasks "
"like computer vision, natural language processing, speech processing. "
"However, typical neural networks are both computationally expensive and "
"energy-intensive, which can be difficult to be deployed on devices with "
"low computation resources or with strict latency requirements. Therefore,"
" a natural thought is to perform model compression to reduce model size "
"and accelerate model training/inference without losing performance "
"significantly. Model compression techniques can be divided into two "
"categories: pruning and quantization. The pruning methods explore the "
"redundancy in the model weights and try to remove/prune the redundant and"
" uncritical weights. Quantization refers to compress models by reducing "
"the number of bits required to represent weights or activations. We "
"further elaborate on the two methods, pruning and quantization, in the "
"following chapters. Besides, the figure below visualizes the difference "
"between these two methods."
msgstr ""
#: ../../source/compression/overview.rst:19
msgid ""
"NNI provides an easy-to-use toolkit to help users design and use model "
"pruning and quantization algorithms. For users to compress their models, "
"they only need to add several lines in their code. There are some popular"
" model compression algorithms built-in in NNI. On the other hand, users "
"could easily customize their new compression algorithms using NNI’s "
"interface."
msgstr ""
#: ../../source/compression/overview.rst:24
msgid "There are several core features supported by NNI model compression:"
msgstr ""
#: ../../source/compression/overview.rst:26
msgid "Support many popular pruning and quantization algorithms."
msgstr ""
#: ../../source/compression/overview.rst:27
msgid ""
"Automate model pruning and quantization process with state-of-the-art "
"strategies and NNI's auto tuning power."
msgstr ""
#: ../../source/compression/overview.rst:28
msgid ""
"Speedup a compressed model to make it have lower inference latency and "
"also make it smaller."
msgstr ""
#: ../../source/compression/overview.rst:29
msgid ""
"Provide friendly and easy-to-use compression utilities for users to dive "
"into the compression process and results."
msgstr ""
#: ../../source/compression/overview.rst:30
msgid "Concise interface for users to customize their own compression algorithms."
msgstr ""
#: ../../source/compression/overview.rst:34
msgid "Compression Pipeline"
msgstr ""
#: ../../source/compression/overview.rst:42
msgid ""
"The overall compression pipeline in NNI is shown above. For compressing a"
" pretrained model, pruning and quantization can be used alone or in "
"combination. If users want to apply both, a sequential mode is "
"recommended as common practice."
msgstr ""
#: ../../source/compression/overview.rst:46
msgid ""
"Note that NNI pruners or quantizers are not meant to physically compact "
"the model but for simulating the compression effect. Whereas NNI speedup "
"tool can truly compress model by changing the network architecture and "
"therefore reduce latency. To obtain a truly compact model, users should "
"conduct :doc:`pruning speedup <../tutorials/pruning_speedup>` or "
":doc:`quantization speedup <../tutorials/quantization_speedup>`. The "
"interface and APIs are unified for both PyTorch and TensorFlow. Currently"
" only PyTorch version has been supported, and TensorFlow version will be "
"supported in future."
msgstr ""
#: ../../source/compression/overview.rst:52
msgid "Model Speedup"
msgstr ""
#: ../../source/compression/overview.rst:54
msgid ""
"The final goal of model compression is to reduce inference latency and "
"model size. However, existing model compression algorithms mainly use "
"simulation to check the performance (e.g., accuracy) of compressed model."
" For example, using masks for pruning algorithms, and storing quantized "
"values still in float32 for quantization algorithms. Given the output "
"masks and quantization bits produced by those algorithms, NNI can really "
"speedup the model."
msgstr ""
#: ../../source/compression/overview.rst:59
msgid "The following figure shows how NNI prunes and speeds up your models."
msgstr ""
#: ../../source/compression/overview.rst:67
msgid ""
"The detailed tutorial of Speedup Model with Mask can be found :doc:`here "
"<../tutorials/pruning_speedup>`. The detailed tutorial of Speedup Model "
"with Calibration Config can be found :doc:`here "
"<../tutorials/quantization_speedup>`."
msgstr ""
#: ../../source/compression/overview.rst:72
msgid ""
"NNI's model pruning framework has been upgraded to a more powerful "
"version (named pruning v2 before nni v2.6). The old version (`named "
"pruning before nni v2.6 "
"<https://nni.readthedocs.io/en/v2.6/Compression/pruning.html>`_) will be "
"out of maintenance. If for some reason you have to use the old pruning, "
"v2.6 is the last nni version to support old pruning version."
msgstr ""
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-20 05:50+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../source/hpo/overview.rst:2
msgid "Hyperparameter Optimization Overview"
msgstr ""
#: ../../source/hpo/overview.rst:4
msgid ""
"Auto hyperparameter optimization (HPO), or auto tuning, is one of the key"
" features of NNI."
msgstr ""
#: ../../source/hpo/overview.rst:7
msgid "Introduction to HPO"
msgstr ""
#: ../../source/hpo/overview.rst:9
msgid ""
"In machine learning, a hyperparameter is a parameter whose value is used "
"to control learning process, and HPO is the problem of choosing a set of "
"optimal hyperparameters for a learning algorithm. (`From "
"<https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)>`__ "
"`Wikipedia "
"<https://en.wikipedia.org/wiki/Hyperparameter_optimization>`__)"
msgstr ""
#: ../../source/hpo/overview.rst:14
msgid "The following code snippet demonstrates a naive HPO process:"
msgstr ""
#: ../../source/hpo/overview.rst:34
msgid ""
"You may have noticed, the example will train 4×10×3=120 models in total. "
"Since it consumes so much computing resources, you may want to:"
msgstr ""
#: ../../source/hpo/overview.rst:37
msgid ""
":ref:`Find the best hyperparameter set with less iterations. <hpo-"
"overview-tuners>`"
msgstr ""
#: ../../source/hpo/overview.rst:38
msgid ":ref:`Train the models on distributed platforms. <hpo-overview-platforms>`"
msgstr ""
#: ../../source/hpo/overview.rst:39
msgid ""
":ref:`Have a portal to monitor and control the process. <hpo-overview-"
"portal>`"
msgstr ""
#: ../../source/hpo/overview.rst:41
msgid "NNI will do them for you."
msgstr ""
#: ../../source/hpo/overview.rst:44
msgid "Key Features of NNI HPO"
msgstr ""
#: ../../source/hpo/overview.rst:49
msgid "Tuning Algorithms"
msgstr ""
#: ../../source/hpo/overview.rst:51
msgid ""
"NNI provides *tuners* to speed up the process of finding best "
"hyperparameter set."
msgstr ""
#: ../../source/hpo/overview.rst:53
msgid ""
"A tuner, or a tuning algorithm, decides the order in which hyperparameter"
" sets are evaluated. Based on the results of historical hyperparameter "
"sets, an efficient tuner can predict where the best hyperparameters are "
"located, and find them in far fewer attempts."
msgstr ""
#: ../../source/hpo/overview.rst:57
msgid ""
"The naive example above evaluates all possible hyperparameter sets in "
"constant order, ignoring the historical results. This is the brute-force "
"tuning algorithm called *grid search*."
msgstr ""
#: ../../source/hpo/overview.rst:60
msgid ""
"NNI has out-of-the-box support for a variety of popular tuners. It "
"includes naive algorithms like random search and grid search, Bayesian-"
"based algorithms like TPE and SMAC, RL-based algorithms like PPO, and "
"much more."
msgstr ""
#: ../../source/hpo/overview.rst:64
msgid "Main article: :doc:`tuners`"
msgstr ""
#: ../../source/hpo/overview.rst:69
msgid "Training Platforms"
msgstr ""
#: ../../source/hpo/overview.rst:71
msgid ""
"If you are not interested in distributed platforms, you can simply run "
"NNI HPO on your current computer, just like any ordinary Python library."
msgstr ""
#: ../../source/hpo/overview.rst:74
msgid ""
"When you want to leverage more computing resources, NNI provides "
"built-in integration for training platforms from simple on-premise "
"servers to scalable commercial clouds."
msgstr ""
#: ../../source/hpo/overview.rst:77
msgid ""
"With NNI you can write one piece of model code, and concurrently evaluate"
" hyperparameter sets on local machine, SSH servers, Kubernetes-based "
"clusters, AzureML service, and much more."
msgstr ""
#: ../../source/hpo/overview.rst:80
msgid "Main article: :doc:`/experiment/training_service/overview`"
msgstr ""
#: ../../source/hpo/overview.rst:85
msgid "Web Portal"
msgstr ""
#: ../../source/hpo/overview.rst:87
msgid ""
"NNI provides a web portal to monitor training progress, to visualize "
"hyperparameter performance, to manually customize hyperparameters, and to"
" manage multiple HPO experiments."
msgstr ""
#: ../../source/hpo/overview.rst:90
msgid "Main article: :doc:`/experiment/web_portal/web_portal`"
msgstr ""
#: ../../source/hpo/overview.rst:96
msgid "Tutorials"
msgstr ""
#: ../../source/hpo/overview.rst:98
msgid ""
"To start using NNI HPO, choose the quickstart tutorial of your favorite "
"framework:"
msgstr ""
#: ../../source/hpo/overview.rst:100
msgid ":doc:`PyTorch tutorial </tutorials/hpo_quickstart_pytorch/main>`"
msgstr ""
#: ../../source/hpo/overview.rst:101
msgid ":doc:`TensorFlow tutorial </tutorials/hpo_quickstart_tensorflow/main>`"
msgstr ""
#: ../../source/hpo/overview.rst:104
msgid "Extra Features"
msgstr ""
#: ../../source/hpo/overview.rst:106
msgid ""
"After you are familiar with basic usage, you can explore more HPO "
"features:"
msgstr ""
#: ../../source/hpo/overview.rst:108
msgid ""
":doc:`Use command line tool to create and manage experiments (nnictl) "
"</reference/nnictl>`"
msgstr ""
#: ../../source/hpo/overview.rst:110
msgid ":doc:`nnictl example </tutorials/hpo_nnictl/nnictl>`"
msgstr ""
#: ../../source/hpo/overview.rst:112
msgid ":doc:`Early stop non-optimal models (assessor) <assessors>`"
msgstr ""
#: ../../source/hpo/overview.rst:113
msgid ":doc:`TensorBoard integration </experiment/web_portal/tensorboard>`"
msgstr ""
#: ../../source/hpo/overview.rst:114
msgid ":doc:`Implement your own algorithm <custom_algorithm>`"
msgstr ""
#: ../../source/hpo/overview.rst:115
msgid ":doc:`Benchmark tuners <hpo_benchmark>`"
msgstr ""
# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2022, Microsoft
# This file is distributed under the same license as the NNI package.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2022.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: NNI \n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2022-04-20 05:50+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.9.1\n"
#: ../../source/index.rst:4 ../../source/index.rst:52
msgid "Get Started"
msgstr ""
#: ../../source/index.rst:12
msgid "Model Compression"
msgstr ""
#: ../../source/index.rst:12
msgid "User Guide"
msgstr ""
#: ../../source/index.rst:23
msgid "Python API"
msgstr ""
#: ../../source/index.rst:23
msgid "References"
msgstr ""
#: ../../source/index.rst:32
msgid "Misc"
msgstr ""
#: ../../source/index.rst:2
msgid "NNI Documentation"
msgstr ""
#: ../../source/index.rst:44
msgid ""
"**NNI (Neural Network Intelligence)** is a lightweight but powerful "
"toolkit to help users **automate**:"
msgstr ""
#: ../../source/index.rst:46
msgid ":doc:`Hyperparameter Optimization </hpo/overview>`"
msgstr ""
#: ../../source/index.rst:47
msgid ":doc:`Neural Architecture Search </nas/overview>`"
msgstr ""
#: ../../source/index.rst:48
msgid ":doc:`Model Compression </compression/overview>`"
msgstr ""
#: ../../source/index.rst:49
msgid ":doc:`Feature Engineering </feature_engineering/overview>`"
msgstr ""
#: ../../source/index.rst:54
msgid "To install the current release:"
msgstr ""
#: ../../source/index.rst:60
msgid ""
"See the :doc:`installation guide </installation>` if you need additional "
"help on installation."
msgstr ""
#: ../../source/index.rst:63
msgid "Try your first NNI experiment"
msgstr ""
#: ../../source/index.rst:69
msgid ""
"You need to have `PyTorch <https://pytorch.org/>`_ (as well as "
"`torchvision <https://pytorch.org/vision/stable/index.html>`_) installed "
"to run this experiment."
msgstr ""
#: ../../source/index.rst:71
msgid ""
"To start your journey now, please follow the :doc:`absolute quickstart of"
" NNI <quickstart>`!"
msgstr ""
#: ../../source/index.rst:74
msgid "Why choose NNI?"
msgstr ""
#: ../../source/index.rst:77
msgid "NNI makes AutoML techniques plug-and-play"
msgstr ""
#: ../../source/index.rst:221
msgid "NNI eases the effort to scale and manage AutoML experiments"
msgstr ""
#: ../../source/index.rst:229
msgid ""
"An AutoML experiment requires many trials to explore feasible and "
"potentially good-performing models. **Training service** aims to make the"
" tuning process easily scalable on distributed platforms. It provides a"
" unified user experience for diverse computation resources (e.g., local "
"machine, remote servers, AKS). Currently, NNI supports **more than 9** "
"kinds of training services."
msgstr ""
#: ../../source/index.rst:240
msgid ""
"The web portal visualizes the tuning process, exposing the ability to "
"inspect, monitor, and control the experiment."
msgstr ""
#: ../../source/index.rst:251
msgid ""
"DNN model tuning often requires more than one experiment. Users might"
" try different tuning algorithms, fine-tune their search space, or switch"
" to another training service. **Experiment management** provides the "
"power to aggregate and compare tuning results from multiple experiments, "
"so that the tuning workflow becomes clean and organized."
msgstr ""
#: ../../source/index.rst:257
msgid "Get Support and Contribute Back"
msgstr ""
#: ../../source/index.rst:259
msgid ""
"NNI is maintained on the `NNI GitHub repository "
"<https://github.com/microsoft/nni>`_. We collect feedback and new "
"proposals/ideas on GitHub. You can:"
msgstr ""
#: ../../source/index.rst:261
msgid ""
"Open a `GitHub issue <https://github.com/microsoft/nni/issues>`_ for bugs"
" and feature requests."
msgstr ""
#: ../../source/index.rst:262
msgid ""
"Open a `pull request <https://github.com/microsoft/nni/pulls>`_ to "
"contribute code (make sure to read the :doc:`contribution guide "
"<notes/contributing>` before doing this)."
msgstr ""
#: ../../source/index.rst:263
msgid ""
"Participate in `NNI Discussion "
"<https://github.com/microsoft/nni/discussions>`_ for general questions "
"and new ideas."
msgstr ""
#: ../../source/index.rst:264
msgid "Join the following IM groups."
msgstr ""
#: ../../source/index.rst:270
msgid "Gitter"
msgstr ""
#: ../../source/index.rst:271
msgid "WeChat"
msgstr ""
#: ../../source/index.rst:278
msgid "Citing NNI"
msgstr ""
#: ../../source/index.rst:280
msgid ""
"If you use NNI in a scientific publication, please consider citing NNI in"
" your references."
msgstr ""
#: ../../source/index.rst:282
msgid ""
"Microsoft. Neural Network Intelligence (version |release|). "
"https://github.com/microsoft/nni"
msgstr ""
#: ../../source/index.rst:284
msgid ""
"BibTeX entry (please replace the version with the particular version you "
"are using): ::"
msgstr ""
#~ msgid "Hyperparameter Optimization"
#~ msgstr ""
#~ msgid "To run your first NNI experiment:"
#~ msgstr ""
#~ msgid ""
#~ "you need to have `PyTorch "
#~ "<https://pytorch.org/>`_ (as well as "
#~ "`torchvision <https://pytorch.org/vision/stable/index.html>`_)"
#~ " installed to run this experiment."
#~ msgstr ""
#~ msgid ""
#~ "Open a `pull request "
#~ "<https://github.com/microsoft/nni/pulls>`_ to contribute"
#~ " code (make sure to read the "
#~ "`contribution guide </contribution>` before "
#~ "doing this)."
#~ msgstr ""