Unverified commit f5b89bb6 authored by J-shang, committed by GitHub

Merge pull request #4776 from microsoft/v2.7

parents 7aa44612 1546962f
HPO Benchmarks
==============
.. toctree::
:hidden:
HPO Benchmark Example Statistics <hpo_benchmark_stats>
We provide a benchmarking tool to compare the performance of tuners provided by NNI (and users' custom tuners) on different
types of tasks. This tool uses the `automlbenchmark repository <https://github.com/openml/automlbenchmark)>`_ to run different *benchmarks* on the NNI *tuners*.
types of tasks. This tool uses the `automlbenchmark repository <https://github.com/openml/automlbenchmark>`_ to run different *benchmarks* on the NNI *tuners*.
The tool is located in ``examples/trials/benchmarking/automlbenchmark``. This document provides a brief introduction to the tool, its usage, and currently available benchmarks.
Overview and Terminologies
......
......@@ -34,7 +34,7 @@ In NNI, there are mainly four types of annotation:
**Arguments**
* **sampling_algo**\ : Sampling algorithm that specifies a search space. Users should replace it with a built-in NNI sampling function whose name consists of the ``nni.`` prefix and a search space type specified in `SearchSpaceSpec <SearchSpaceSpec.rst>`__, such as ``choice`` or ``uniform``.
* **sampling_algo**\ : Sampling algorithm that specifies a search space. Users should replace it with a built-in NNI sampling function whose name consists of the ``nni.`` prefix and a search space type specified in :doc:`SearchSpaceSpec <search_space>`, such as ``choice`` or ``uniform``.
* **name**\ : The name of the variable that the selected value will be assigned to. Note that this argument should be the same as the left-hand side of the assignment statement that follows (see the sketch below).
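For example, a minimal sketch of annotating a variable with a ``choice`` search space (the variable name and candidate values here are only illustrative):

.. code-block:: python

    '''@nni.variable(nni.choice(0.01, 0.05, 0.1), name=learning_rate)'''
    learning_rate = 0.1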
There are 10 types to express your search space as follows:
......@@ -93,11 +93,11 @@ An example here is:
``'''@nni.report_intermediate_result(metrics)'''``
``@nni.report_intermediate_result`` is used to report an intermediate result; its usage is the same as ``nni.report_intermediate_result`` described in `Write a trial run on NNI <../TrialExample/Trials.rst>`__
``@nni.report_intermediate_result`` is used to report an intermediate result; its usage is the same as :func:`nni.report_intermediate_result`.
4. Annotate final result
^^^^^^^^^^^^^^^^^^^^^^^^
``'''@nni.report_final_result(metrics)'''``
``@nni.report_final_result`` is used to report the final result of the current trial; its usage is the same as ``nni.report_final_result`` described in `Write a trial run on NNI <../TrialExample/Trials.rst>`__
``@nni.report_final_result`` is used to report the final result of the current trial; its usage is the same as :func:`nni.report_final_result`.
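Putting the two report annotations together, a rough sketch of a trial's training loop might look like this (``train_one_epoch`` and ``test`` are hypothetical helper functions):

.. code-block:: python

    for epoch in range(10):
        train_one_epoch(model)
        accuracy = test(model)
        '''@nni.report_intermediate_result(accuracy)'''
    '''@nni.report_final_result(accuracy)'''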
......@@ -106,6 +106,9 @@ Extra Features
After you are familiar with basic usage, you can explore more HPO features:
* :doc:`Use command line tool to create and manage experiments (nnictl) </reference/nnictl>`
* :doc:`nnictl example </tutorials/hpo_nnictl/nnictl>`
* :doc:`Early stop non-optimal models (assessor) <assessors>`
* :doc:`TensorBoard integration </experiment/web_portal/tensorboard>`
* :doc:`Implement your own algorithm <custom_algorithm>`
......
.. 317442fd7a0540c0776a08ad773566cf
.. c74f6d072f5f8fa93eadd214bba992b4
Hyperparameter Optimization
===========================
Automatic hyperparameter optimization (HPO) is one of NNI's core features.
Introduction to Hyperparameter Optimization
--------------------------------------------
......@@ -36,24 +36,24 @@
2. :ref:`Train on distributed platforms <zh-hpo-overview-platforms>`
3. :ref:`Monitor the tuning process with the web portal <zh-hpo-overview-portal>`
NNI can meet all of these needs.
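In NNI, each hyperparameter combination is evaluated by a *trial*. A minimal sketch of a trial script, assuming a hypothetical ``train_and_evaluate`` function:

.. code-block:: python

    import nni

    params = nni.get_next_parameter()        # e.g. {'lr': 0.01, 'batch_size': 32}
    accuracy = train_and_evaluate(**params)  # hypothetical training function
    nni.report_final_result(accuracy)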
Main Features of NNI HPO
--------------------------
.. _zh-hpo-overview-tuners:
Tuning Algorithms
^^^^^^^^^^^^^^^^^
NNI uses tuning algorithms, called *tuners*, to find the optimal hyperparameter combination faster.
A tuning algorithm decides which hyperparameter combinations to run and evaluate, and in what order to evaluate them.
An efficient algorithm can use the results of already-evaluated combinations to predict the optimal hyperparameter values, thus reducing the number of evaluations needed to find the optimum.
The example at the beginning evaluates all possible hyperparameter combinations in a fixed order, regardless of their evaluation results; this naive approach is called *grid search*.
NNI has many popular tuning algorithms built in, including naive algorithms such as random search and grid search, Bayesian optimization algorithms such as TPE and SMAC, and reinforcement learning algorithms such as PPO.
For details: :doc:`tuners`
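As a rough sketch, selecting a built-in tuner through the Python experiment API takes only a couple of lines (TPE here is just one of many choices):

.. code-block:: python

    from nni.experiment import Experiment

    experiment = Experiment('local')
    experiment.config.tuner.name = 'TPE'
    experiment.config.tuner.class_args = {'optimize_mode': 'maximize'}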
......@@ -62,9 +62,9 @@ NNI has many popular tuning algorithms built in, including naive algorithms such as random search and
Training Platforms
^^^^^^^^^^^^^^^^^^
If you do not plan to use a distributed training platform, you can run NNI HPO directly on your own machine, just like using an ordinary Python library.
If you want to speed up tuning with more computing resources, you can use NNI's built-in training platform integrations, which cover everything from simple SSH servers to scalable Kubernetes clusters.
For details: :doc:`/experiment/training_service/overview`
......@@ -73,7 +73,7 @@ NNI has many popular tuning algorithms built in, including naive algorithms such as random search and
Web Portal
^^^^^^^^^^
You can monitor HPO experiments with NNI's web portal, which shows experiment progress in real time, visualizes hyperparameter performance, lets you manually tweak hyperparameter values, manages multiple experiments at once, and more.
For details: :doc:`/experiment/web_portal/web_portal`
......@@ -83,17 +83,20 @@ NNI has many popular tuning algorithms built in, including naive algorithms such as random search and
Tutorials
---------
We provide the following tutorials to help you get started with NNI HPO; choose the machine learning framework you are most familiar with:
* :doc:`HPO tutorial with PyTorch </tutorials/hpo_quickstart_pytorch/main>`
* :doc:`HPO tutorial with TensorFlow </tutorials/hpo_quickstart_tensorflow/main>`
* :doc:`HPO tutorial with TensorFlow (in English) </tutorials/hpo_quickstart_tensorflow/main>`
More Features
-------------
After you have mastered the basic usage of NNI HPO, you can try the following features:
* :doc:`Use command line tool to create and manage experiments (nnictl) </reference/nnictl>`
* :doc:`nnictl example </tutorials/hpo_nnictl/nnictl>`
* :doc:`Early stop non-optimal models (assessor) <assessors>`
* :doc:`TensorBoard integration </experiment/web_portal/tensorboard>`
* :doc:`Implement your own algorithm <custom_algorithm>`
......
Quickstart
==========
.. toctree::
PyTorch </tutorials/hpo_quickstart_pytorch/main>
TensorFlow </tutorials/hpo_quickstart_tensorflow/main>
......@@ -275,17 +275,17 @@ Search Space Types Supported by Each Tuner
-
* - :class:`BOHB <nni.algorithms.hpo.bohb_advisor.BOHB>`
- choice
- choice(nested)
- randint
- uniform
- quniform
- loguniform
- qloguniform
- normal
- qnormal
- lognormal
- qlognormal
-
-
-
-
-
-
-
-
-
-
-
* - :class:`GP <nni.algorithms.hpo.gp_tuner.GPTuner>`
- ✓
......@@ -301,17 +301,17 @@ Search Space Types Supported by Each Tuner
-
* - :class:`PBT <nni.algorithms.hpo.pbt_tuner.PBTTuner>`
- choice
- choice(nested)
- randint
- uniform
- quniform
- loguniform
- qloguniform
- normal
- qnormal
- lognormal
- qlognormal
-
-
-
-
-
-
-
-
-
-
-
* - :class:`DNGO <nni.algorithms.hpo.dngo_tuner.DNGOTuner>`
- ✓
......
Hyperparameter Optimization
===========================
.. toctree::
:maxdepth: 2
.. toctree::
:hidden:
Overview <overview>
Tutorial </tutorials/hpo_quickstart_pytorch/main>
quickstart
Search Space <search_space>
Tuners <tuners>
Assessors <assessors>
Advanced Usage <advanced_toctree.rst>
advanced_usage
.. 21e9c3e0f6b182cf42a99a7f6c4ecf98
Hyperparameter Optimization
===========================
.. toctree::
:hidden:
Overview <overview>
Tutorial <quickstart>
Search Space <search_space>
Tuners <tuners>
Assessors <assessors>
Advanced Usage <advanced_usage>
......@@ -14,7 +14,7 @@ NNI Documentation
:caption: User Guide
:hidden:
Hyperparameter Optimization <hpo/index>
hpo/toctree
nas/toctree
Model Compression <compression/toctree>
feature_engineering/toctree
......@@ -62,13 +62,11 @@ See the :doc:`installation guide </installation>` if you need additional help on
Try your first NNI experiment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To run your first NNI experiment:
.. code-block:: shell
$ nnictl hello
.. note:: you need to have `PyTorch <https://pytorch.org/>`_ (as well as `torchvision <https://pytorch.org/vision/stable/index.html>`_) installed to run this experiment.
.. note:: You need to have `PyTorch <https://pytorch.org/>`_ (as well as `torchvision <https://pytorch.org/vision/stable/index.html>`_) installed to run this experiment.
To start your journey now, please follow the :doc:`absolute quickstart of NNI <quickstart>`!
......@@ -84,7 +82,7 @@ NNI makes AutoML techniques plug-and-play
.. codesnippetcard::
:icon: ../img/thumbnails/hpo-small.svg
:title: Hyper-parameter Tuning
:title: Hyperparameter Tuning
:link: tutorials/hpo_quickstart_pytorch/main
.. code-block::
......@@ -130,7 +128,7 @@ NNI makes AutoML techniques plug-and-play
.. codesnippetcard::
:icon: ../img/thumbnails/quantization-small.svg
:title: Quantization
:link: tutorials/quantization_speedup
:link: tutorials/quantization_quick_start_mnist
.. code-block::
......@@ -261,7 +259,7 @@ Get Support and Contribute Back
NNI is maintained on the `NNI GitHub repository <https://github.com/microsoft/nni>`_. We collect feedback and new proposals/ideas on GitHub. You can:
* Open a `GitHub issue <https://github.com/microsoft/nni/issues>`_ for bugs and feature requests.
* Open a `pull request <https://github.com/microsoft/nni/pulls>`_ to contribute code (make sure to read the `contribution guide </contribution>` before doing this).
* Open a `pull request <https://github.com/microsoft/nni/pulls>`_ to contribute code (make sure to read the :doc:`contribution guide <notes/contributing>` before doing this).
* Participate in `NNI Discussion <https://github.com/microsoft/nni/discussions>`_ for general questions and new ideas.
* Join the following IM groups.
......
......@@ -9,7 +9,7 @@ Execution engine is for running Retiarii Experiment. NNI supports three executio
* **CGO execution engine** has the same requirements and capabilities as the **Graph-based execution engine**. But further enables cross-model optimizations, which makes model space exploration faster.
.. _pure-python-exeuction-engine:
.. _pure-python-execution-engine:
Pure-python Execution Engine
----------------------------
......@@ -20,7 +20,7 @@ Remember to add :meth:`nni.retiarii.model_wrapper` decorator outside the whole
.. note:: You should always use ``super().__init__()`` instead of ``super(MyNetwork, self).__init__()`` in the PyTorch model, because the latter one has issues with model wrapper.
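For reference, a minimal sketch of a wrapped model under the pure-Python engine (the layers themselves are arbitrary):

.. code-block:: python

    import torch.nn.functional as F
    import nni.retiarii.nn.pytorch as nn
    from nni.retiarii import model_wrapper

    @model_wrapper
    class MyNetwork(nn.Module):
        def __init__(self):
            super().__init__()  # not super(MyNetwork, self).__init__()
            self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
            self.fc = nn.Linear(16, 10)

        def forward(self, x):
            x = F.relu(self.conv(x))
            return self.fc(x.mean(dim=(2, 3)))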
.. _graph-based-exeuction-engine:
.. _graph-based-execution-engine:
Graph-based Execution Engine
----------------------------
......
......@@ -22,7 +22,7 @@ In this figure:
* *Exploration strategy* is the algorithm that is used to explore a model search space. Sometimes we also call it *search strategy*.
* *Model evaluator* is responsible for training a model and evaluating its performance.
The process is similar to :doc:`Hyperparameter Optimization </hpo/index>`, except that the target is the best architecture rather than hyperparameter. Concretely, an exploration strategy selects an architecture from a predefined search space. The architecture is passed to a performance evaluation to get a score, which represents how well this architecture performs on a particular task. This process is repeated until the search process is able to find the best architecture.
The process is similar to :doc:`Hyperparameter Optimization </hpo/overview>`, except that the target is the best architecture rather than hyperparameter. Concretely, an exploration strategy selects an architecture from a predefined search space. The architecture is passed to a performance evaluation to get a score, which represents how well this architecture performs on a particular task. This process is repeated until the search process is able to find the best architecture.
Key Features
------------
......@@ -43,7 +43,7 @@ Search Space Design
The search space defines which architectures can be represented in principle. Incorporating prior knowledge about typical properties of architectures well-suited for a task can reduce the size of the search space and simplify the search. However, this also introduces a human bias, which may prevent finding novel architectural building blocks that go beyond the current human knowledge. Search space design can be very challenging for beginners, who might not possess the experience to balance the richness and simplicity.
In NNI, we provide a wide range of APIs to build the search space. There are :doc:`high-level APIs <construct_space>`, that enables incorporating human knowledge about what makes a good architecture or search space. There are also :doc:`low-level APIs <mutator>`, that is a list of primitives to construct a network from operator to operator.
In NNI, we provide a wide range of APIs to build the search space. There are :doc:`high-level APIs <construct_space>` that make it possible to incorporate human knowledge about what makes a good architecture or search space, and :doc:`low-level APIs <mutator>` that provide a list of primitives to construct a network operation by operation.
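As a small illustration of the high-level APIs, a sketch of a module whose convolution layer is searched over two candidate operations (the candidates are arbitrary):

.. code-block:: python

    import nni.retiarii.nn.pytorch as nn

    class Block(nn.Module):
        def __init__(self):
            super().__init__()
            # the search space for this layer contains two candidate operations
            self.op = nn.LayerChoice([
                nn.Conv2d(16, 16, kernel_size=3, padding=1),
                nn.Conv2d(16, 16, kernel_size=5, padding=2),
            ])

        def forward(self, x):
            return self.op(x)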
Exploration strategy
^^^^^^^^^^^^^^^^^^^^
......@@ -57,7 +57,7 @@ Performance estimation
The objective of NAS is typically to find architectures that achieve high predictive performance on unseen data. Performance estimation refers to the process of estimating this performance. The problem with performance estimation is mostly its scalability, i.e., how can I run and manage multiple trials simultaneously.
In NNI, this process is standardized with the :doc:`evaluator <evaluator>`, which is responsible for estimating a model's performance. The choices of evaluators range from the simplest option, e.g., performing a standard training and validation of the architecture on data, to complex configurations and implementations. Evaluators are run in *trials*, and trials can be spawned onto distributed platforms with our powerful :doc:`training service </experiment/training_service/overview>`.
In NNI, this process is standardized with the :doc:`evaluator <evaluator>`, which is responsible for estimating a model's performance. NNI has quite a few built-in evaluators, ranging from the simplest option, e.g., performing a standard training and validation of the architecture on data, to complex configurations and implementations. Evaluators are run in *trials*, and trials can be spawned onto distributed platforms with our powerful :doc:`training service </experiment/training_service/overview>`.
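As a sketch of the simplest option, a functional evaluator that trains and scores a candidate model (``train_epoch`` and ``test_accuracy`` are hypothetical helpers):

.. code-block:: python

    import nni
    from nni.retiarii.evaluator import FunctionalEvaluator

    def evaluate_model(model_cls):
        model = model_cls()  # instantiate the sampled architecture
        for epoch in range(3):
            train_epoch(model)                                    # hypothetical helper
            nni.report_intermediate_result(test_accuracy(model))  # hypothetical helper
        nni.report_final_result(test_accuracy(model))

    evaluator = FunctionalEvaluator(evaluate_model)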
Tutorials
---------
......
.. 48c39585a539a877461aadef63078c48
Neural Architecture Search
===========================
.. toctree::
:hidden:
Quickstart </tutorials/hello_nas>
Construct Search Space <construct_space>
Exploration Strategy <exploration_strategy>
Evaluator <evaluator>
Advanced Usage <advanced_usage>
.. attention:: NNI's latest architecture search support is based on the Retiarii framework. Users who are still using the `early version of NNI architecture search <https://nni.readthedocs.io/en/v2.2/nas.html>`__ should migrate their work to Retiarii as soon as possible. We plan to remove the legacy architecture search framework in the next few releases.
.. attention:: PyTorch is **the only framework supported by Retiarii**. Requests for architecture search support on TensorFlow are tracked in `this discussion <https://github.com/microsoft/nni/discussions/4605>`__. Also, if you intend to run NAS with a DL framework other than PyTorch and TensorFlow, please `create a new issue <https://github.com/microsoft/nni/issues>`__ to let us know.
Overview
--------
Automatic neural architecture search (NAS) plays an increasingly important role in finding better models. Recent research has proven the feasibility of automatic architecture search and has led to models that beat many manually designed and tuned ones. Representative works include `NASNet <https://arxiv.org/abs/1707.07012>`__, `ENAS <https://arxiv.org/abs/1802.03268>`__, `DARTS <https://arxiv.org/abs/1806.09055>`__, `Network Morphism <https://arxiv.org/abs/1806.10282>`__, and `Evolution <https://arxiv.org/abs/1703.01041>`__. Moreover, new innovations keep emerging.
In general, solving any specific task with neural architecture search usually involves search space design, search strategy selection, and performance estimation. These three components form the following loop (the figure is from the `NAS survey <https://arxiv.org/abs/1808.05377>`__):
.. image:: ../../img/nas_abstract_illustration.png
:align: center
:width: 700
In this figure:
* *Model search space* refers to a set of models from which the best model is explored/searched, also called *search space* or *model space* for short.
* *Exploration strategy* is the algorithm used to explore the model search space. Sometimes we also call it *search strategy*.
* *Model evaluator* is responsible for training a model and evaluating its performance.
The process is similar to :doc:`Hyperparameter Optimization </hpo/overview>`, except that the target is the best architecture rather than the best hyperparameters. Concretely, the exploration strategy selects an architecture from the predefined search space. The architecture is passed to performance estimation to obtain a score, which represents how well this architecture performs on the particular task. This process is repeated until the search can find the best architecture.
Key Features
------------
The current architecture search framework in NNI is backed by the research in `Retiarii: A Deep Learning Exploratory-Training Framework <https://www.usenix.org/system/files/osdi20-zhang_quanlu.pdf>`__ and has the following features:
* :doc:`Simple APIs to build search spaces easily <construct_space>`
* :doc:`SOTA architecture search algorithms to explore the search space efficiently <exploration_strategy>`
* :doc:`Backend support for running experiments on large-scale AI platforms </experiment/overview>`
Why Use NNI for Architecture Search
-------------------------------------
Without NNI, implementing architecture search is extremely challenging, mainly in the following three aspects. When users want to try architecture search techniques in their own scenarios, NNI's solutions can greatly reduce their workload.
Search Space Design
^^^^^^^^^^^^^^^^^^^
The search space defines the feasible set of architectures. To simplify the search, we usually need to incorporate task-related prior knowledge to reduce the size of the search space. However, this also introduces human bias and, to some extent, may give up the possibility of going beyond human knowledge. In any case, search space design is a very challenging task for beginners, who may struggle to balance the simplicity of the space against the richness of imagination.
In NNI, we provide APIs at different levels to build the search space. There are :doc:`high-level APIs <construct_space>`, which introduce substantial priors and help users quickly grasp what a good architecture or search space looks like, and :doc:`low-level APIs <mutator>`, which provide the lowest-level operator and graph-mutation primitives.
Exploration Strategy
^^^^^^^^^^^^^^^^^^^^
The exploration strategy defines how to explore the search space, which is usually exponentially large. It embodies the classic exploration-exploitation trade-off: on the one hand, we want to find well-performing architectures quickly; on the other hand, we should avoid premature convergence to a region of suboptimal architectures. The "best" exploration strategy for a particular scenario usually has to be found by trial and error, and since many recently published exploration strategies are implemented in their own code bases, switching from one to another is very cumbersome.
In NNI, we also provide :doc:`a series of exploration strategies <exploration_strategy>`. Some of them are powerful but time-consuming, while others may not find the optimal architecture but are very efficient. Since all strategies are implemented with a unified user interface, users can easily find one that fits their needs.
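For example, picking a built-in strategy takes a single line (the strategy chosen here is arbitrary):

.. code-block:: python

    import nni.retiarii.strategy as strategy

    search_strategy = strategy.RegularizedEvolution()  # or strategy.Random(), among others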
Performance Estimation
^^^^^^^^^^^^^^^^^^^^^^
The goal of architecture search is usually to find architectures that perform well on the test dataset, and performance estimation quantifies how good each architecture is. Its main difficulty lies in scalability, i.e., how to run and manage multiple trials simultaneously on a large-scale training platform.
In NNI, we use an :doc:`evaluator <evaluator>` to standardize the performance estimation process; it is responsible for estimating a model's performance. NNI has quite a few built-in evaluators, ranging from the simplest cross-validation to complex custom configurations. Evaluators run in *trials*, and trials can be dispatched to large-scale training platforms through our powerful :doc:`training service </experiment/training_service/overview>`.
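Putting the pieces together, a rough sketch of launching an experiment (``base_model``, ``evaluator``, and ``search_strategy`` are assumed to have been created as in the earlier sketches; the port and trial numbers are arbitrary):

.. code-block:: python

    from nni.retiarii.experiment.pytorch import RetiariiExperiment, RetiariiExeConfig

    # base_model, evaluator, search_strategy: defined as sketched above
    exp = RetiariiExperiment(base_model, evaluator, [], search_strategy)
    exp_config = RetiariiExeConfig('local')
    exp_config.max_trial_number = 20
    exp_config.trial_concurrency = 2
    exp.run(exp_config, 8081)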
Tutorials
---------
To get started with the NNI architecture search framework, we recommend reading at least the following tutorials:
* :doc:`Quickstart </tutorials/hello_nas>`
* :doc:`Construct Search Space <construct_space>`
* :doc:`Exploration Strategy <exploration_strategy>`
* :doc:`Evaluator <evaluator>`
Resources
---------
The following articles are helpful for a better understanding of the latest developments in NAS:
* `Neural Architecture Search: A Survey <https://arxiv.org/abs/1808.05377>`__
* `A Comprehensive Survey of Neural Architecture Search: Challenges and Solutions <https://arxiv.org/abs/2006.02903>`__
.. ccd00e2e56b44cf452b0afb81e8cecff
Quickstart
==========
.. cardlinkitem::
:header: HPO Quickstart with PyTorch
:description: Use hyperparameter optimization (HPO) to tune a PyTorch FashionMNIST model.
:link: tutorials/hpo_quickstart_pytorch/main
:image: ../img/thumbnails/hpo-pytorch.svg
:background: purple
.. cardlinkitem::
:header: NAS Quickstart
:description: Shows beginners how to use NNI to search for a network architecture on the MNIST dataset.
:link: tutorials/hello_nas
:image: ../img/thumbnails/nas-tutorial.svg
:background: cyan
.. cardlinkitem::
:header: Model Compression Quickstart
:description: Learn pruning to compress your model.
:link: tutorials/pruning_quick_start_mnist
:image: ../img/thumbnails/pruning-tutorial.svg
:background: blue
......@@ -20,11 +20,6 @@ A config file is needed when creating an experiment. This document describes the
4. Setting a field to ``None`` or ``null`` is equivalent to not setting the field.
.. contents:: Contents
:local:
:depth: 3
Examples
========
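As a minimal sketch, the same fields can also be populated from Python through the ``Experiment`` API (all values here are illustrative):

.. code-block:: python

    from nni.experiment import Experiment

    experiment = Experiment('local')
    experiment.config.experiment_name = 'example'
    experiment.config.trial_command = 'python3 trial.py'
    experiment.config.trial_code_directory = '.'
    experiment.config.search_space_file = 'search_space.json'
    experiment.config.trial_concurrency = 2
    experiment.config.max_trial_number = 20
    experiment.config.tuner.name = 'TPE'
    experiment.config.tuner.class_args = {'optimize_mode': 'maximize'}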
......@@ -120,13 +115,13 @@ ExperimentConfig
* - searchSpaceFile
- ``str``, optional
- Path_ to the JSON file containing the search space.
Search space format is determined by tuner. The common format for built-in tuners is documented `here <../Tutorial/SearchSpaceSpec.rst>`__.
Search space format is determined by tuner. The common format for built-in tuners is documented :doc:`here </hpo/search_space>`.
Mutually exclusive to ``searchSpace``.
* - searchSpace
- ``JSON``, optional
- Search space object.
The format is determined by tuner. Common format for built-in tuners is documented `here <../Tutorial/SearchSpaceSpec.rst>`__.
The format is determined by tuner. Common format for built-in tuners is documented :doc:`here </hpo/search_space>`.
Note that ``None`` means "no such field" so empty search space should be written as ``{}``.
Mutually exclusive to ``searchSpaceFile``.
......@@ -151,7 +146,7 @@ ExperimentConfig
- ``int`` or ``None``, optional
- Default: None. This field might have slightly different meanings for various training services,
especially when set to ``0`` or ``None``.
See `training service's document <../training_services.rst>`__ for details.
See :doc:`training service's document </experiment/training_service/overview>` for details.
In local mode, setting the field to ``0`` will prevent trials from accessing GPU (by empty ``CUDA_VISIBLE_DEVICES``).
And when set to ``None``, trials will be created and scheduled as if they did not use GPU,
......@@ -183,7 +178,7 @@ ExperimentConfig
* - useAnnotation
- ``bool``, optional
- Default: ``False``. Enable `annotation <../Tutorial/AnnotationSpec.rst>`__.
- Default: ``False``. Enable :doc:`annotation </hpo/nni_annotation>`.
When using annotation, ``searchSpace`` and ``searchSpaceFile`` should not be specified manually.
* - debug
......@@ -215,25 +210,25 @@ ExperimentConfig
* - tuner
- ``AlgorithmConfig``, optional
- Specify the tuner.
The built-in tuners can be found `here <../builtin_tuner.rst>`__ and you can follow `this tutorial <../Tuner/CustomizeTuner.rst>`__ to customize a new tuner.
The built-in tuners can be found :doc:`here </hpo/tuners>` and you can follow :doc:`this tutorial </hpo/custom_algorithm>` to customize a new tuner.
* - assessor
- ``AlgorithmConfig``, optional
- Specify the assessor.
The built-in assessors can be found `here <../builtin_assessor.rst>`__ and you can follow `this tutorial <../Assessor/CustomizeAssessor.rst>`__ to customize a new assessor.
The built-in assessors can be found :doc:`here </hpo/assessors>` and you can follow :doc:`this tutorial </hpo/custom_algorithm>` to customize a new assessor.
* - advisor
- ``AlgorithmConfig``, optional
- Specify the advisor.
NNI provides two built-in advisors: `BOHB <../Tuner/BohbAdvisor.rst>`__ and `Hyperband <../Tuner/HyperbandAdvisor.rst>`__, and you can follow `this tutorial <../Tuner/CustomizeAdvisor.rst>`__ to customize a new advisor.
NNI provides two built-in advisors: :class:`BOHB <nni.algorithms.hpo.bohb_advisor.BOHB>` and :class:`Hyperband <nni.algorithms.hpo.hyperband_advisor.Hyperband>`.
* - trainingService
- ``TrainingServiceConfig``
- Specify the `training service <../TrainingService/Overview.rst>`__.
- Specify the :doc:`training service </experiment/training_service/overview>`.
* - sharedStorage
- ``SharedStorageConfig``, optional
- Configure the shared storage, detailed usage can be found `here <../Tutorial/HowToUseSharedStorage.rst>`__.
- Configure the shared storage, detailed usage can be found :doc:`here </experiment/training_service/shared_storage>`.
AlgorithmConfig
^^^^^^^^^^^^^^^
......@@ -286,8 +281,8 @@ One of the following:
- `AmlConfig`_
- `DlcConfig`_
- `HybridConfig`_
For `Kubeflow <../TrainingService/KubeflowMode.rst>`_, `FrameworkController <../TrainingService/FrameworkControllerMode.rst>`_, and `AdaptDL <../TrainingService/AdaptDLMode.rst>`_ training platforms, it is suggested to use `v1 config schema <../Tutorial/ExperimentConfig.rst>`_ for now.
- :doc:`FrameworkControllerConfig </experiment/training_service/frameworkcontroller>`
- :doc:`KubeflowConfig </experiment/training_service/kubeflow>`
.. _reference-local-config-label:
......@@ -357,7 +352,7 @@ Detailed usage can be found :doc:`/experiment/training_service/remote`.
* - reuseMode
- ``bool``, optional
- Default: ``True``. Enable `reuse mode <../TrainingService/Overview.rst#training-service-under-reuse-mode>`__.
- Default: ``True``. Enable :ref:`reuse mode <training-service-reuse>`.
RemoteMachineConfig
"""""""""""""""""""
......@@ -437,7 +432,7 @@ RemoteMachineConfig
OpenpaiConfig
-------------
Detailed usage can be found `here <../TrainingService/PaiMode.rst>`__.
Detailed usage can be found :doc:`here </experiment/training_service/openpai>`.
.. list-table::
:widths: 10 10 80
......@@ -495,7 +490,7 @@ Detailed usage can be found `here <../TrainingService/PaiMode.rst>`__.
* - reuseMode
- ``bool``, optional
- Default: ``True``. Enable `reuse mode <../TrainingService/Overview.rst#training-service-under-reuse-mode>`__.
- Default: ``True``. Enable :ref:`reuse mode <training-service-reuse>`.
* - openpaiConfig
- ``JSON``, optional
......@@ -509,7 +504,7 @@ Detailed usage can be found `here <../TrainingService/PaiMode.rst>`__.
AmlConfig
---------
Detailed usage can be found `here <../TrainingService/AMLMode.rst>`__.
Detailed usage can be found :doc:`here </experiment/training_service/aml>`.
.. list-table::
:widths: 10 10 80
......@@ -546,7 +541,7 @@ Detailed usage can be found `here <../TrainingService/AMLMode.rst>`__.
DlcConfig
---------
Detailed usage can be found `here <../TrainingService/DlcMode.rst>`__.
Detailed usage can be found :doc:`here </experiment/training_service/paidlc>`.
.. list-table::
:widths: 10 10 80
......@@ -611,7 +606,9 @@ Detailed usage can be found `here <../TrainingService/DlcMode.rst>`__.
HybridConfig
------------
Currently only support `LocalConfig`_, `RemoteConfig`_, `OpenpaiConfig`_ and `AmlConfig`_ . Detailed usage can be found `here <../TrainingService/HybridMode.rst>`__.
Currently only support `LocalConfig`_, `RemoteConfig`_, `OpenpaiConfig`_ and `AmlConfig`_ . Detailed usage can be found :doc:`here </experiment/training_service/hybrid>`.
.. _reference-sharedstorage-config-label:
SharedStorageConfig
^^^^^^^^^^^^^^^^^^^
......
......@@ -5,6 +5,61 @@
Change Log
==========
Release 2.7 - 4/18/2022
-----------------------
Documentation
^^^^^^^^^^^^^
A full-size upgrade of the documentation, with the following significant improvements in the reading experience, practical tutorials, and examples:
* Reorganized the document structure with a new document template. (`Upgraded doc entry <https://nni.readthedocs.io/en/v2.7>`__)
* Added friendlier tutorials with Jupyter notebooks. (`New Quick Starts <https://nni.readthedocs.io/en/v2.7/quickstart.html>`__)
* New model pruning demo available. (`Youtube entry <https://www.youtube.com/channel/UCKcafm6861B2mnYhPbZHavw>`__, `Bilibili entry <https://space.bilibili.com/1649051673>`__)
Hyper-Parameter Optimization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* [Improvement] TPE and random tuners will not generate duplicate hyperparameters anymore.
* [Improvement] Most Python APIs now have type annotations.
Neural Architecture Search
^^^^^^^^^^^^^^^^^^^^^^^^^^
* Jointly search for architecture and hyper-parameters: ValueChoice in evaluator. (`doc <https://nni.readthedocs.io/en/v2.7/reference/nas/search_space.html#valuechoice>`__)
* Support composition (transformation) of one or several value choices. (`doc <https://nni.readthedocs.io/en/v2.7/reference/nas/search_space.html#valuechoice>`__)
* Enhanced Cell API (``merge_op``, preprocessor, postprocessor). (`doc <https://nni.readthedocs.io/en/v2.7/reference/nas/search_space.html#cell>`__)
* The argument ``depth`` in the ``Repeat`` API allows ValueChoice. (`doc <https://nni.readthedocs.io/en/v2.7/reference/nas/search_space.html#repeat>`__)
* Support loading ``state_dict`` between sub-net and super-net. (`doc <https://nni.readthedocs.io/en/v2.7/reference/nas/others.html#nni.retiarii.utils.original_state_dict_hooks>`__, `example in spos <https://nni.readthedocs.io/en/v2.7/reference/nas/strategy.html#spos>`__)
* Support BN fine-tuning and evaluation in SPOS example. (`doc <https://nni.readthedocs.io/en/v2.7/reference/nas/strategy.html#spos>`__)
* *Experimental* Model hyper-parameter choice. (`doc <https://nni.readthedocs.io/en/v2.7/reference/nas/search_space.html#modelparameterchoice>`__)
* *Preview* Lightning implementation for Retiarii including DARTS, ENAS, ProxylessNAS and RandomNAS. (`example usage <https://github.com/microsoft/nni/blob/v2.7/test/ut/retiarii/test_oneshot.py>`__)
* *Preview* A search space hub that contains 10 search spaces. (`code <https://github.com/microsoft/nni/tree/v2.7/nni/retiarii/hub>`__)
Model Compression
^^^^^^^^^^^^^^^^^
* Pruning V2 is promoted as the default pruning framework; the old pruning framework is kept as legacy for a few releases. (`doc <https://nni.readthedocs.io/en/v2.7/reference/compression/pruner.html>`__)
* A new pruning mode ``balance`` is supported in ``LevelPruner``. (`doc <https://nni.readthedocs.io/en/v2.7/reference/compression/pruner.html#level-pruner>`__)
* Support coarse-grained pruning in ``ADMMPruner``. (`doc <https://nni.readthedocs.io/en/v2.7/reference/compression/pruner.html#admm-pruner>`__)
* [Improvement] Support more operation types in pruning speedup.
* [Improvement] Optimize performance of some pruners.
Experiment
^^^^^^^^^^
* [Improvement] ``Experiment.run()`` no longer stops the web portal on return.
Notable Bugfixes
^^^^^^^^^^^^^^^^
* Fixed: experiment list could not open experiment with prefix.
* Fixed: serializer for complex kinds of arguments.
* Fixed: some typos in code. (thanks @a1trl9 @mrshu)
* Fixed: dependency issue across layer in pruning speedup.
* Fixed: unchecking a trial did not work in the detail table.
* Fixed: the name/ID filter bug in the experiment management page.
Release 2.6 - 1/19/2022
-----------------------
......
......@@ -15,7 +15,7 @@ Instructions
#. Run ``git clone https://github.com/ultmaster/EfficientNet-PyTorch`` to clone the `ultmaster modified version <https://github.com/ultmaster/EfficientNet-PyTorch>`__ of the original `EfficientNet-PyTorch <https://github.com/lukemelas/EfficientNet-PyTorch>`__. The modifications adhere to the original `Tensorflow version <https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet>`__ as closely as possible (including EMA, label smoothing, etc.); also added is the part that gets parameters from the tuner and reports intermediate/final results. Clone it into ``EfficientNet-PyTorch``\ ; files like ``main.py`` and ``train_imagenet.sh`` will appear inside, as specified in the configuration files.
#. Run ``nnictl create --config config_local.yml`` (use ``config_pai.yml`` for OpenPAI) to find the best EfficientNet-B1. Adjust the training service (PAI/local/remote) and batch size in the config files according to your environment.
For training on ImageNet, read ``EfficientNet-PyTorch/train_imagenet.sh``. Download ImageNet beforehand and extract it adhering to the `PyTorch format <https://pytorch.org/docs/stable/torchvision/datasets.html#imagenet>`__, then replace ``/mnt/data/imagenet`` with the location of the ImageNet storage. This file should also be a good example to follow for mounting ImageNet into the container on OpenPAI.
For training on ImageNet, read ``EfficientNet-PyTorch/train_imagenet.sh``. Download ImageNet beforehand and extract it adhering to the `PyTorch format <https://pytorch.org/vision/stable/generated/torchvision.datasets.ImageNet.html>`__, then replace ``/mnt/data/imagenet`` with the location of the ImageNet storage. This file should also be a good example to follow for mounting ImageNet into the container on OpenPAI.
Results
-------
......
......@@ -5,18 +5,7 @@ Hyper Parameter Optimization Comparison
Comparison of Hyperparameter Optimization (HPO) algorithms on several problems.
Hyperparameter Optimization algorithms are listed below:
* `Random Search <../Tuner/BuiltinTuner.rst>`__
* `Grid Search <../Tuner/BuiltinTuner.rst>`__
* `Evolution <../Tuner/BuiltinTuner.rst>`__
* `Anneal <../Tuner/BuiltinTuner.rst>`__
* `Metis <../Tuner/BuiltinTuner.rst>`__
* `TPE <../Tuner/BuiltinTuner.rst>`__
* `SMAC <../Tuner/BuiltinTuner.rst>`__
* `HyperBand <../Tuner/BuiltinTuner.rst>`__
* `BOHB <../Tuner/BuiltinTuner.rst>`__
Hyperparameter Optimization algorithms are listed in :doc:`/hpo/tuners`.
All algorithms run in NNI local environment.
......@@ -39,7 +28,7 @@ AutoGBDT Example
Problem Description
^^^^^^^^^^^^^^^^^^^
Nonconvex problem on the hyper-parameter search of `AutoGBDT <../TrialExample/GbdtExample.rst>`__ example.
Nonconvex problem on the hyper-parameter search of :githublink:`AutoGBDT example <examples/trials/auto-gbdt>`.
Search Space
^^^^^^^^^^^^
......
......@@ -35,7 +35,7 @@ The experiments are performed with the following pruners/datasets/models:
For the pruners with scheduling, ``L1Filter Pruner`` is used as the base algorithm. That is to say, after the sparsity distribution is decided by the scheduling algorithm, ``L1Filter Pruner`` is used to perform the actual pruning (a usage sketch follows this list).
*
All the pruners listed above are implemented in :githublink:`nni <docs/en_US/Compression/Overview.rst>`.
All the pruners listed above are implemented in :doc:`nni </compression/overview>`.
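For reference, a rough sketch of how the ``L1Filter Pruner`` mentioned above is typically invoked through the legacy compression API (the model and the 50% sparsity are illustrative):

.. code-block:: python

    from nni.algorithms.compression.pytorch.pruning import L1FilterPruner

    # prune 50% of the filters of every Conv2d layer in a given PyTorch model
    config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]
    pruner = L1FilterPruner(model, config_list)
    pruner.compress()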
Experiment Result
-----------------
......@@ -88,15 +88,12 @@ Implementation Details
^^^^^^^^^^^^^^^^^^^^^^
*
The experiment results are all collected with the default configuration of the pruners in nni, which means that when we call a pruner class in nni, we don't change any default class arguments.
* The experiment results are all collected with the default configuration of the pruners in nni, which means that when we call a pruner class in nni, we don't change any default class arguments.
*
Both FLOPs and the number of parameters are counted with :githublink:`Model FLOPs/Parameters Counter <docs/en_US/Compression/CompressionUtils.md#model-flopsparameters-counter>` after :githublink:`model speedup <docs/en_US/Compression/ModelSpeedup.rst>`.
* Both FLOPs and the number of parameters are counted with :ref:`Model FLOPs/Parameters Counter <flops-counter>` after :doc:`model speedup </tutorials/pruning_speedup>`.
This avoids the potential issue of counting them on masked models.
*
The experiment code can be found :githublink:`here <examples/model_compress/pruning/legacy/auto_pruners_torch.py>`.
* The experiment code can be found :githublink:`here <examples/model_compress/pruning/legacy/auto_pruners_torch.py>`.
Experiment Result Rendering
^^^^^^^^^^^^^^^^^^^^^^^^^^^
......
......@@ -41,7 +41,7 @@ How to Open NNI's Web UI on Google Colab
! curl -s http://localhost:4040/api/tunnels # don't change the port number 4040
You will see a URL like http://xxxx.ngrok.io after step 4. Open this URL and you will find NNI's Web UI. Have fun :)
You will see a URL like ``http://xxxx.ngrok.io`` after step 4. Open this URL and you will find NNI's Web UI. Have fun :)
Access Web UI with frp
----------------------
......