Unverified commit 22ee2ac4, authored by liuzhe-lz, committed by GitHub

HPO doc update (#4634)

parent 899a7959
Anneal Tuner
============
This simple annealing algorithm begins by sampling from the prior but tends over time to sample from points closer and closer to the best ones observed. This algorithm is a simple variation on random search that leverages smoothness in the response surface. The annealing rate is not adaptive.
Usage
-----
classArgs Requirements
^^^^^^^^^^^^^^^^^^^^^^
* **optimize_mode** (*maximize or minimize, optional, default = maximize*) - If 'maximize', the tuner will try to maximize metrics. If 'minimize', the tuner will try to minimize metrics.
Example Configuration
^^^^^^^^^^^^^^^^^^^^^
.. code-block:: yaml

   # config.yml
   tuner:
     name: Anneal
     classArgs:
       optimize_mode: maximize
Batch Tuner
===========
Batch tuner allows users to simply provide several configurations (i.e., choices of hyper-parameters) for their trial code. Once all the configurations have been evaluated, the experiment is done. Batch tuner only supports the ``choice`` type in the `search space spec <../Tutorial/SearchSpaceSpec.rst>`__.
Suggested scenario: If the configurations you want to try have been decided, you can list them in the SearchSpace file (using ``choice``) and run them using the batch tuner.
Usage
-----
Example Configuration
^^^^^^^^^^^^^^^^^^^^^
.. code-block:: yaml

   # config.yml
   tuner:
     name: BatchTuner
Note that the search space for BatchTuner should look like:
.. code-block:: json

   {
     "combine_params": {
       "_type": "choice",
       "_value": [
         {"optimizer": "Adam", "learning_rate": 0.00001},
         {"optimizer": "Adam", "learning_rate": 0.0001},
         {"optimizer": "Adam", "learning_rate": 0.001},
         {"optimizer": "SGD", "learning_rate": 0.01},
         {"optimizer": "SGD", "learning_rate": 0.005},
         {"optimizer": "SGD", "learning_rate": 0.0002}
       ]
     }
   }
The search space file should include the top-level key ``combine_params``. The type of the params in the search space must be ``choice``, and ``_value`` must list all the combined parameter sets.
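If you launch the experiment from Python rather than from a YAML file, the same setup can be written against the experiment configuration object. This is only a sketch following the ``config.*`` style used elsewhere in these docs; ``config`` is assumed to be the experiment configuration object (e.g. obtained from ``Experiment('local').config``):

.. code-block:: python

   # Search space listing every configuration to evaluate; BatchTuner runs them one by one.
   config.search_space = {
       'combine_params': {
           '_type': 'choice',
           '_value': [
               {'optimizer': 'Adam', 'learning_rate': 0.0001},
               {'optimizer': 'SGD', 'learning_rate': 0.01},
           ]
       }
   }
   config.tuner.name = 'BatchTuner'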
Grid Search Tuner
=================
Grid search performs an exhaustive search through the search space.
For uniformly and normally distributed parameters, the grid search tuner samples them at progressively smaller intervals, so the grid becomes finer over time.
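The subdivision idea can be illustrated with a short sketch. This is only an illustration of progressive refinement over a uniform range, not the actual NNI implementation:

.. code-block:: python

   def uniform_grid(low, high, depth):
       """Grid points of a uniform range at subdivision level ``depth``."""
       n = 2 ** depth + 1                      # 2, 3, 5, 9, ... points as depth grows
       step = (high - low) / (n - 1)
       return [low + i * step for i in range(n)]

   # Each deeper level halves the interval between neighbouring grid points.
   print(uniform_grid(0.0, 1.0, 1))  # [0.0, 0.5, 1.0]
   print(uniform_grid(0.0, 1.0, 2))  # [0.0, 0.25, 0.5, 0.75, 1.0]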
Usage
-----
The grid search tuner has no arguments.
Example Configuration
^^^^^^^^^^^^^^^^^^^^^
.. code-block:: yaml

   tuner:
     name: GridSearch
Medianstop Assessor on NNI
==========================
Median Stop
-----------
Medianstop is a simple early stopping rule mentioned in this `paper <https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46180.pdf>`__. It stops a pending trial X after step S if the trial’s best objective value by step S is strictly worse than the median value of the running averages of all completed trials’ objectives reported up to step S.
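The rule can be sketched in a few lines of Python. This is an illustrative snippet assuming metrics are maximized, not the actual assessor code:

.. code-block:: python

   import statistics

   def should_stop(trial_history, completed_histories, step):
       """Return True if the pending trial should be stopped at ``step``."""
       observed = trial_history[:step]
       if not observed:
           return False
       # Best objective value of the pending trial up to step S.
       best_so_far = max(observed)
       # Running averages of all completed trials' objectives up to step S.
       running_averages = [
           statistics.mean(history[:step])
           for history in completed_histories
           if len(history) >= step
       ]
       if not running_averages:
           return False
       # Stop if the trial is strictly worse than the median running average.
       return best_so_far < statistics.median(running_averages)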
Python API Reference of Auto Tune
=================================
.. contents::
Trial
-----
.. autofunction:: nni.get_next_parameter
.. autofunction:: nni.get_current_parameter
.. autofunction:: nni.report_intermediate_result
.. autofunction:: nni.report_final_result
.. autofunction:: nni.get_experiment_id
.. autofunction:: nni.get_trial_id
.. autofunction:: nni.get_sequence_id
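A minimal trial script wiring these APIs together might look like the sketch below; ``train_one_epoch`` is a stand-in for user training code, and only the ``nni`` calls are part of the API documented on this page.

.. code-block:: python

   import nni

   def train_one_epoch(learning_rate):
       # Placeholder for real training code; returns a fake accuracy.
       return min(0.99, 0.5 + learning_rate * 100)

   def main():
       # Hyperparameter set chosen by the tuner for this trial.
       params = nni.get_next_parameter()
       learning_rate = params.get('learning_rate', 0.001)

       accuracy = 0.0
       for epoch in range(10):
           accuracy = train_one_epoch(learning_rate)
           # Per-epoch metrics; assessors use these to decide early stopping.
           nni.report_intermediate_result(accuracy)

       # Final metric that the tuner optimizes.
       nni.report_final_result(accuracy)

   if __name__ == '__main__':
       main()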
Tuner
-----
.. autoclass:: nni.tuner.Tuner
:members:
.. autoclass:: nni.algorithms.hpo.tpe_tuner.TpeTuner
:members:
.. autoclass:: nni.algorithms.hpo.tpe_tuner.TpeArguments
.. autoclass:: nni.algorithms.hpo.random_tuner.RandomTuner
:members:
.. autoclass:: nni.algorithms.hpo.hyperopt_tuner.HyperoptTuner
:members:
.. autoclass:: nni.algorithms.hpo.evolution_tuner.EvolutionTuner
:members:
.. autoclass:: nni.algorithms.hpo.smac_tuner.SMACTuner
:members:
.. autoclass:: nni.algorithms.hpo.gridsearch_tuner.GridSearchTuner
:members:
.. autoclass:: nni.algorithms.hpo.networkmorphism_tuner.NetworkMorphismTuner
:members:
.. autoclass:: nni.algorithms.hpo.metis_tuner.MetisTuner
:members:
.. autoclass:: nni.algorithms.hpo.ppo_tuner.PPOTuner
:members:
.. autoclass:: nni.algorithms.hpo.batch_tuner.BatchTuner
:members:
.. autoclass:: nni.algorithms.hpo.gp_tuner.GPTuner
:members:
Assessor
--------
.. autoclass:: nni.assessor.Assessor
:members:
.. autoclass:: nni.assessor.AssessResult
:members:
.. autoclass:: nni.algorithms.hpo.curvefitting_assessor.CurvefittingAssessor
:members:
.. autoclass:: nni.algorithms.hpo.medianstop_assessor.MedianstopAssessor
:members:
Advisor
-------
.. autoclass:: nni.runtime.msg_dispatcher_base.MsgDispatcherBase
:members:
.. autoclass:: nni.algorithms.hpo.hyperband_advisor.Hyperband
:members:
.. autoclass:: nni.algorithms.hpo.bohb_advisor.BOHB
:members:
Utilities
---------
.. autofunction:: nni.utils.merge_parameter
.. autofunction:: nni.trace
.. autofunction:: nni.dump
.. autofunction:: nni.load
Assessor: Early Stopping
========================
In HPO, some hyperparameter sets may have obviously poor performance, and it is unnecessary to finish their evaluation.
This is called *early stopping*, and in NNI early stopping algorithms are called *assessors*.

An assessor monitors the *intermediate results* of each *trial*.
If a trial is predicted to produce a suboptimal final result, the assessor stops that trial immediately,
to save computing resources for other hyperparameter sets.

As introduced in the quickstart tutorial, a trial is the evaluation process of a hyperparameter set,
and intermediate results are reported with the ``nni.report_intermediate_result()`` API in trial code.
Typically, intermediate results are accuracy or loss metrics of each epoch.
(FIXME: links)

Using an assessor will increase the efficiency of computing resources,
but may slightly reduce the prediction accuracy of tuners.
It is recommended to use an assessor when computing resources are insufficient.

Common Usage
------------
The usage of assessors is similar to that of tuners.

To use a built-in assessor you need to specify its name and arguments:

.. code-block:: python

   config.assessor.name = 'Medianstop'
   config.assessor.class_args = {'optimize_mode': 'maximize'}
Built-in Assessors
------------------
.. list-table::
:header-rows: 1
:widths: auto
* - Assessor
- Brief Introduction of Algorithm
* - :class:`Medianstop <nni.algorithms.hpo.medianstop_assessor.MedianstopAssessor>`
- It stops a pending trial X at step S if
the trial’s best objective value by step S is strictly worse than the median value of
the running averages of all completed trials’ objectives reported up to step S.
* - :class:`Curvefitting <nni.algorithms.hpo.curvefitting_assessor.CurvefittingAssessor>`
- It stops a pending trial X at step S if
the trial's forecast result at the target step has converged and is lower than the best performance in the history.
...@@ -76,8 +76,8 @@ Kubernetes-based clusters, AzureML service, and much more.
Main article: (FIXME: link to training_services)

Web Portal
^^^^^^^^^^
NNI provides a web portal to monitor training progress, to visualize hyperparameter performance,
to manually customize hyperparameters, and to manage multiple HPO experiments.
...@@ -89,17 +89,107 @@ Tutorials
To start using NNI HPO, choose the tutorial of your favorite framework:

* PyTorch MNIST tutorial
* :doc:`TensorFlow MNIST tutorial </tutorials/hpo_quickstart_tensorflow/main>`

Extra Features
--------------
After you are familiar with basic usage, you can explore more HPO features:

* :doc:`Assessor: Early stop non-optimal models <assessors>`
* :doc:`nnictl: Use command line tool to create and manage experiments </reference/nnictl>`
* :doc:`Custom tuner: Implement your own tuner <custom_algorithm>`
* :doc:`Tensorboard support <tensorboard>`
* :doc:`Tuner benchmark <hpo_benchmark>`
* :doc:`NNI Annotation (legacy) <nni_annotation>`
Built-in Algorithms
-------------------
Tuning Algorithms
^^^^^^^^^^^^^^^^^
Main article: :doc:`tuners`
.. list-table::
:header-rows: 1
:widths: auto
* - Name
- Category
- Brief Description
* - :class:`Random <nni.algorithms.hpo.random_tuner.RandomTuner>`
- Basic
- Naive random search.
* - :class:`GridSearch <nni.algorithms.hpo.gridsearch_tuner.GridSearchTuner>`
- Basic
- Brute-force search.
* - :class:`TPE <nni.algorithms.hpo.tpe_tuner.TpeTuner>`
- Bayesian
- Tree-structured Parzen Estimator.
* - :class:`Anneal <nni.algorithms.hpo.hyperopt_tuner.HyperoptTuner>`
- Classic
- Simulated annealing algorithm.
* - :class:`Evolution <nni.algorithms.hpo.evolution_tuner.EvolutionTuner>`
- Classic
- Naive evolution algorithm.
* - :class:`SMAC <nni.algorithms.hpo.smac_tuner.SMACTuner>`
- Bayesian
- Sequential Model-based optimization for general Algorithm Configuration.
* - :class:`Hyperband <nni.algorithms.hpo.hyperband_advisor.Hyperband>`
- Advanced
- Evaluate more hyperparameter sets by adaptively allocating resources.
* - :class:`MetisTuner <nni.algorithms.hpo.metis_tuner.MetisTuner>`
- Bayesian
- Robustly optimizing tail latencies of cloud systems.
* - :class:`BOHB <nni.algorithms.hpo.bohb_advisor.BOHB>`
- Advanced
- Bayesian Optimization with HyperBand.
* - :class:`GPTuner <nni.algorithms.hpo.gp_tuner.GPTuner>`
- Bayesian
- Gaussian Process.
* - :class:`PBTTuner <nni.algorithms.hpo.pbt_tuner.PBTTuner>`
- Advanced
- Population Based Training of neural networks.
* - :class:`DNGOTuner <nni.algorithms.hpo.dngo_tuner.DNGOTuner>`
- Bayesian
- Deep Networks for Global Optimization.
* - :class:`PPOTuner <nni.algorithms.hpo.ppo_tuner.PPOTuner>`
- RL
- Proximal Policy Optimization.
* - :class:`BatchTuner <nni.algorithms.hpo.batch_tuner.BatchTuner>`
- Basic
- Manually specify hyperparameter sets.
Early Stopping
^^^^^^^^^^^^^^
Main article: :doc:`assessors`
.. list-table::
:header-rows: 1
:widths: auto
* - Name
- Brief Description
* - :class:`Medianstop <nni.algorithms.hpo.medianstop_assessor.MedianstopAssessor>`
- Stop if the hyperparameter set performs worse than median at any step.
* - :class:`Curvefitting <nni.algorithms.hpo.curvefitting_assessor.CurvefittingAssessor>`
- Stop if the learning curve will likely converge to suboptimal result.
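A tuner and an assessor can be combined in one experiment. For example, using the Python configuration style shown above (a sketch; the available class arguments depend on the chosen algorithms):

.. code-block:: python

   config.tuner.name = 'TPE'
   config.tuner.class_args = {'optimize_mode': 'maximize'}
   config.assessor.name = 'Curvefitting'
   config.assessor.class_args = {'epoch_num': 20}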
...@@ -198,16 +198,16 @@ Search Space Types Supported by Each Tuner
-
* - Grid Search Tuner
- :raw-html:`&#10003;`
- :raw-html:`&#10003;`
- :raw-html:`&#10003;`
- :raw-html:`&#10003;`
- :raw-html:`&#10003;`
- :raw-html:`&#10003;`
- :raw-html:`&#10003;`
- :raw-html:`&#10003;`
- :raw-html:`&#10003;`
- :raw-html:`&#10003;`
- :raw-html:`&#10003;`
* - Hyperband Advisor
- :raw-html:`&#10003;`
-
......
...@@ -5,12 +5,12 @@ The tuner decides which hyperparameter sets will be evaluated. It is a most impo
A tuner works in the following steps:

1. Initialize with a search space.
2. Generate hyperparameter sets from the search space.
3. Send hyperparameters to trials.
4. Receive evaluation results.
5. Update internal states according to the results.
6. Go to step 2, until the experiment ends.

NNI has out-of-the-box support for many popular tuning algorithms.
They should be sufficient to cover most typical machine learning scenarios.
...@@ -39,56 +39,56 @@ For a general example, random tuner can be configured as follow:
config.tuner.name = 'Random'
config.tuner.class_args = {'seed': 0}

Built-in Tuners
---------------
.. list-table::
:header-rows: 1
:widths: auto
* - Tuner
- Brief Introduction of Algorithm
* - :class:`TPE <nni.algorithms.hpo.tpe_tuner.TpeTuner>`
- The Tree-structured Parzen Estimator (TPE) is a sequential model-based optimization (SMBO) approach. SMBO methods sequentially construct models to approximate the performance of hyperparameters based on historical measurements, and then choose new hyperparameters to test based on this model. `Reference Paper <https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf>`__
* - :class:`Random Search <nni.algorithms.hpo.random_tuner.RandomTuner>`
- The paper *Random Search for Hyper-Parameter Optimization* shows that random search can be surprisingly simple and effective. We suggest using random search as the baseline when there is no knowledge about the prior distribution of hyper-parameters. `Reference Paper <http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf>`__
* - :class:`Anneal <nni.algorithms.hpo.hyperopt_tuner.HyperoptTuner>`
- This simple annealing algorithm begins by sampling from the prior, but tends over time to sample from points closer and closer to the best ones observed. This algorithm is a simple variation on random search that leverages smoothness in the response surface. The annealing rate is not adaptive.
* - :class:`Evolution <nni.algorithms.hpo.evolution_tuner.EvolutionTuner>`
- Naive Evolution comes from Large-Scale Evolution of Image Classifiers. It randomly initializes a population based on the search space. For each generation, it chooses the better ones and performs mutations (e.g., changing a hyperparameter, adding/removing one layer) on them to get the next generation. Naive Evolution requires many trials to work, but it is very simple and easy to extend with new features. `Reference paper <https://arxiv.org/pdf/1703.01041.pdf>`__
* - :class:`SMAC <nni.algorithms.hpo.smac_tuner.SMACTuner>`
- SMAC is based on Sequential Model-Based Optimization (SMBO). It adapts the most prominent previously used model class (Gaussian stochastic process models) and introduces the model class of random forests to SMBO in order to handle categorical parameters. The SMAC supported by NNI is a wrapper on the SMAC3 GitHub repo.
Note that SMAC needs to be installed with the ``pip install nni[SMAC]`` command. `Reference Paper <https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf>`__, `GitHub Repo <https://github.com/automl/SMAC3>`__
* - :class:`Batch <nni.algorithms.hpo.batch_tuner.BatchTuner>`
- Batch tuner allows users to simply provide several configurations (i.e., choices of hyper-parameters) for their trial code. Once all the configurations have been evaluated, the experiment is done. Batch tuner only supports the type ``choice`` in the search space spec.
* - :class:`Grid Search <nni.algorithms.hpo.gridsearch_tuner.GridSearchTuner>`
- Grid Search performs an exhaustive search through the search space.
* - :class:`Hyperband <nni.algorithms.hpo.hyperband_advisor.Hyperband>`
- Hyperband tries to use limited resources to explore as many configurations as possible and returns the most promising ones as a final result. The basic idea is to generate many configurations and run them for a small number of trials. The half least-promising configurations are thrown out, and the remaining ones are further trained along with a selection of new configurations. The size of these populations is sensitive to resource constraints (e.g. allotted search time). `Reference Paper <https://arxiv.org/pdf/1603.06560.pdf>`__
* - :class:`Metis <nni.algorithms.hpo.metis_tuner.MetisTuner>`
- Metis offers the following benefits when it comes to tuning parameters: while most tools only predict the optimal configuration, Metis gives you two outputs, (a) a current prediction of the optimal configuration and (b) a suggestion for the next trial. No more guesswork. While most tools assume training datasets do not have noisy data, Metis actually tells you if you need to re-sample a particular hyper-parameter. `Reference Paper <https://www.microsoft.com/en-us/research/publication/metis-robustly-tuning-tail-latencies-cloud-systems/>`__
* - :class:`BOHB <nni.algorithms.hpo.bohb_advisor.BOHB>`
- BOHB is a follow-up work to Hyperband. It targets the weakness of Hyperband that new configurations are generated randomly, without leveraging finished trials. In the name BOHB, HB means Hyperband and BO means Bayesian Optimization. BOHB leverages finished trials by building multiple TPE models; a proportion of new configurations are generated through these models. `Reference Paper <https://arxiv.org/abs/1807.01774>`__
* - :class:`GP <nni.algorithms.hpo.gp_tuner.GPTuner>`
- Gaussian Process Tuner is a sequential model-based optimization (SMBO) approach with a Gaussian process as the surrogate. `Reference Paper <https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf>`__, `GitHub Repo <https://github.com/fmfn/BayesianOptimization>`__
* - :class:`PBT <nni.algorithms.hpo.pbt_tuner.PBTTuner>`
- PBT Tuner is a simple asynchronous optimization algorithm which effectively utilizes a fixed computational budget to jointly optimize a population of models and their hyperparameters to maximize performance. `Reference Paper <https://arxiv.org/abs/1711.09846v1>`__
* - :class:`DNGO <nni.algorithms.hpo.dngo_tuner.DNGOTuner>`
- DNGO uses neural networks as an alternative to Gaussian processes to model distributions over functions in Bayesian optimization.

Comparison
----------
......
...@@ -28,6 +28,7 @@ Neural Network Intelligence
nnictl Commands <reference/nnictl>
Experiment Configuration <reference/experiment_config>
HPO API Reference <reference/hpo>
Python API <reference/_modules/nni>
API Reference <reference/python_api_ref>
...@@ -95,7 +96,7 @@ Then, please read :doc:`Quick start <Tutorial/QuickStart>` and :doc:`Tutorials <
.. codesnippetcard::
   :icon: ../img/thumbnails/hpo-icon-small.png
   :title: Hyper-parameter Tuning
   :link: tutorials/hpo_quickstart_tensorflow/main
.. code-block::
......
.. 737612334d31e3c0ee8db7f53dc2944f
###########################
Neural Network Intelligence
...@@ -18,6 +18,7 @@ Neural Network Intelligence
Model Compression <compression/index>
Feature Engineering <feature_engineering>
NNI Experiment <experiment/overview>
HPO API Reference <reference/hpo>
References <reference>
Examples and Solutions <CommunitySharings/community_sharings>
Research and Publications <ResearchPublications>
......
...@@ -10,7 +10,6 @@ References
nnictl Commands <reference/nnictl>
Experiment Configuration <reference/experiment_config>
SDK API References <sdk_reference>
API References <reference/python_api_ref>
Supported Framework Library <SupportedFramework_Library>
Launch from Python <Tutorial/HowToLaunchFromPython>
HPO API Reference
=================
Trial APIs
----------
.. autofunction:: nni.get_current_parameter
.. autofunction:: nni.get_experiment_id
.. autofunction:: nni.get_next_parameter
.. autofunction:: nni.get_sequence_id
.. autofunction:: nni.get_trial_id
.. autofunction:: nni.report_final_result
.. autofunction:: nni.report_intermediate_result
Tuners
------
.. autoclass:: nni.algorithms.hpo.batch_tuner.BatchTuner
:members:
.. autoclass:: nni.algorithms.hpo.bohb_advisor.BOHB
:members:
.. autoclass:: nni.algorithms.hpo.dngo_tuner.DNGOTuner
:members:
.. autoclass:: nni.algorithms.hpo.evolution_tuner.EvolutionTuner
:members:
.. autoclass:: nni.algorithms.hpo.gp_tuner.GPTuner
:members:
.. autoclass:: nni.algorithms.hpo.gridsearch_tuner.GridSearchTuner
:members:
.. autoclass:: nni.algorithms.hpo.hyperband_advisor.Hyperband
:members:
.. autoclass:: nni.algorithms.hpo.hyperopt_tuner.HyperoptTuner
:members:
.. autoclass:: nni.algorithms.hpo.metis_tuner.MetisTuner
:members:
.. autoclass:: nni.algorithms.hpo.pbt_tuner.PBTTuner
:members:
.. autoclass:: nni.algorithms.hpo.ppo_tuner.PPOTuner
:members:
.. autoclass:: nni.algorithms.hpo.random_tuner.RandomTuner
:members:
.. autoclass:: nni.algorithms.hpo.smac_tuner.SMACTuner
:members:
.. autoclass:: nni.algorithms.hpo.tpe_tuner.TpeTuner
:members:
.. autoclass:: nni.algorithms.hpo.tpe_tuner.TpeArguments
Assessors
---------
.. autoclass:: nni.algorithms.hpo.curvefitting_assessor.CurvefittingAssessor
:members:
.. autoclass:: nni.algorithms.hpo.medianstop_assessor.MedianstopAssessor
:members:
Customization
-------------
.. autoclass:: nni.assessor.AssessResult
:members:
.. autoclass:: nni.assessor.Assessor
:members:
.. autoclass:: nni.tuner.Tuner
:members:
.. bcc89d271f64dcf7c00d79b9442933a9
:orphan:
...@@ -10,7 +10,6 @@
nnictl Commands <reference/nnictl>
Experiment Configuration <reference/experiment_config>
SDK API Reference <sdk_reference>
API Reference <reference/python_api_ref>
Supported Frameworks and Libraries <SupportedFramework_Library>
Launch an Experiment from Python <Tutorial/HowToLaunchFromPython>
####################
Python API Reference
####################
.. toctree::
:maxdepth: 1
Auto Tune <autotune_ref>
Python API <Tutorial/HowToLaunchFromPython>
\ No newline at end of file
.. 577f3d11c9b75f47c5a100db2be97e8f
####################
Python API Reference
####################
.. toctree::
:maxdepth: 1
Auto Tune <autotune_ref>
Python API <Tutorial/HowToLaunchFromPython>
\ No newline at end of file
...@@ -20,27 +20,64 @@ LOGGER = logging.getLogger('batch_tuner_AutoML')
class BatchTuner(Tuner):
    """
    Batch tuner is a special tuner that allows users to simply provide several hyperparameter sets,
    and it will evaluate each set.

    Batch tuner does **not** support standard search space.
    The search space of batch tuner looks like a single ``choice`` in standard search space,
    but it has a different meaning.

    Consider the following search space:

    .. code-block::

        'combine_params': {
            '_type': 'choice',
            '_value': [
                {'x': 0, 'y': 1},
                {'x': 1, 'y': 2},
                {'x': 1, 'y': 3},
            ]
        }

    Batch tuner will generate the following 3 hyperparameter sets:

    1. {'x': 0, 'y': 1}
    2. {'x': 1, 'y': 2}
    3. {'x': 1, 'y': 3}

    If this search space were used with the grid search tuner, it would instead generate:

    1. {'combine_params': {'x': 0, 'y': 1}}
    2. {'combine_params': {'x': 1, 'y': 2}}
    3. {'combine_params': {'x': 1, 'y': 3}}

    Examples
    --------

    .. code-block::

        config.search_space = {
            'combine_params': {
                '_type': 'choice',
                '_value': [
                    {'optimizer': 'Adam', 'learning_rate': 0.001},
                    {'optimizer': 'Adam', 'learning_rate': 0.0001},
                    {'optimizer': 'Adam', 'learning_rate': 0.00001},
                    {'optimizer': 'SGD', 'learning_rate': 0.01},
                    {'optimizer': 'SGD', 'learning_rate': 0.005},
                ]
            }
        }
        config.tuner.name = 'BatchTuner'
    """

    def __init__(self):
        self._count = -1
        self._values = []

    def _is_valid(self, search_space):
        """
        Check the search space is valid: only contains 'choice' type
...@@ -70,27 +107,10 @@ class BatchTuner(Tuner):
        return None

    def update_search_space(self, search_space):
        validate_search_space(search_space, ['choice'])
        self._values = self._is_valid(search_space)

    def generate_parameters(self, parameter_id, **kwargs):
        self._count += 1
        if self._count > len(self._values) - 1:
            raise nni.NoMoreTrialError('no more parameters now.')
...@@ -100,13 +120,6 @@ class BatchTuner(Tuner):
        pass

    def import_data(self, data):
        if not self._values:
            LOGGER.info("Search space has not been initialized, skip this data import")
            return
......
...@@ -22,10 +22,27 @@ class CurvefittingClassArgsValidator(ClassArgsValidator):
        }).validate(kwargs)

class CurvefittingAssessor(Assessor):
    """
    CurvefittingAssessor uses a learning-curve fitting algorithm to predict the learning curve performance in the future.

    The intermediate result **must** be accuracy.

    It stops a pending trial X at step S if the trial's forecast result at the target step has converged and is lower than
    the best performance in the history.

    Examples
    --------

    .. code-block::

        config.assessor.name = 'Curvefitting'
        config.assessor.class_args = {
            'epoch_num': 20,
            'start_step': 6,
            'threshold': 0.9,
            'gap': 1,
        }

    Parameters
    ----------
    epoch_num : int
...@@ -34,6 +51,7 @@ class CurvefittingAssessor(Assessor):
        only after receiving start_step number of reported intermediate results
    threshold : float
        The threshold that we decide to early stop the worse performance curve.
    gap : int
        The gap interval between assessments.
    """

    def __init__(self, epoch_num=20, start_step=6, threshold=0.95, gap=1):
...@@ -56,15 +74,6 @@ class CurvefittingAssessor(Assessor):
        logger.info('Successfully initialized the curvefitting assessor')

    def trial_end(self, trial_job_id, success):
        if success:
            if self.set_best_performance:
                self.completed_best_performance = max(self.completed_best_performance, self.trial_history[-1])
...@@ -76,25 +85,6 @@ class CurvefittingAssessor(Assessor):
            logger.info('No need to update, trial job id: %s', trial_job_id)

    def assess_trial(self, trial_job_id, trial_history):
        scalar_trial_history = extract_scalar_history(trial_history)
        self.trial_history = scalar_trial_history
        if not self.set_best_performance:
......